Stock Viewsonic Kernel is Multicore - G Tablet Android Development

Well, I just unpacked the G's stock /proc/config.gz kernel configuration and determined that the kernel is set for multicore.
Therefore, the rumor that the gtab is only using one core is wrong - at least from a kernel perspective. Does anyone have any information as to where this rumor got started?
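For anyone who wants to double-check this on their own device, here is a minimal sketch of that check, assuming the kernel was built with CONFIG_IKCONFIG_PROC=y (which the stock kernel evidently was, since /proc/config.gz is there):

```shell
#!/bin/sh
# Inspect the running kernel's build config for SMP support.
# /proc/config.gz only exists when the kernel was built with
# CONFIG_IKCONFIG_PROC=y (an assumption about the build; the stock
# G Tablet kernel ships it, per the post above).
if [ -r /proc/config.gz ]; then
    zcat /proc/config.gz | grep -E '^CONFIG_SMP=|^CONFIG_NR_CPUS='
else
    echo "no /proc/config.gz on this kernel"
fi
```

On an SMP build you would expect to see CONFIG_SMP=y and, if set, a CONFIG_NR_CPUS of 2 or more.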

Probably because all the benchmarks assume a single core, and thus people see results that don't make sense if you count 2 cores.
The major issue I have with the kernel right now is that every time the G Tablet goes to sleep and wakes back up, it gets stuck at a ridiculously low clock speed and creates stuttering and slowness unless you use OverclockWidget to reset it. That's got to be a kernel-level issue.
That, and the multitouch sensor readout is inherently stuttery and can't even track two touch points accurately for pinch-to-zoom some of the time (reminds me of the Nexus One in earlier software releases).
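On the stuck-clock issue, a quick way to see what the governor and clock are actually doing after a wake is the standard Linux cpufreq sysfs layout (assuming this kernel exposes it; the paths are the stock locations, nothing G Tablet specific):

```shell
#!/bin/sh
# Dump the cpufreq state for core 0 using the standard sysfs paths.
CF=/sys/devices/system/cpu/cpu0/cpufreq
for f in scaling_governor scaling_cur_freq scaling_max_freq; do
    if [ -r "$CF/$f" ]; then
        echo "$f: $(cat "$CF/$f")"
    else
        echo "$f: not exposed on this kernel"
    fi
done
# As root, re-selecting the governor should force the clock back up
# without needing OverclockWidget:
#   echo performance > $CF/scaling_governor
```

If scaling_cur_freq stays pinned at the minimum after a wake while the governor still reads 'performance', that points at the kernel's resume path rather than the governor itself.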

I think the general feeling is that the Android layer itself is not taking advantage of both cores. Or at least the apps themselves are not multi-threaded yet.
I'm thinking that this is similar to the days of early XP: that kernel was SMP-aware OOTB, but if the applications weren't multi-threaded, it didn't really offer much.

rcgabriel said:
The major issue I have with the kernel right now is that every time the G Tablet goes to sleep and wakes back up, it gets stuck at a ridiculously low clock speed and creates stuttering and slowness unless you use OverclockWidget to reset it. That's got to be a kernel-level issue.
Interesting. The kernel's default governor is 'performance', so the kernel itself should be setting the clock to max. Maybe it's not doing so when it wakes up. Certainly something to look at....
roebeet said:
I think the general feeling is that the Android layer itself is not taking advantage of both cores. Or at least the apps themselves are not multi-threaded yet.
That may be, however, low-level stuff like flushing disk cache, handling wifi/bluetooth hardware events, etc., should be able to be pushed to the second core so that any processes on the first core wouldn't stutter. My experience has been that whenever wifi connects/disconnects, the whole tablet simply "freezes" until the wifi is fully connected. Sure is annoying in Angry Birds....
roebeet said:
I'm thinking that this is similar to the days of early XP: that kernel was SMP-aware OOTB, but if the applications weren't multi-threaded, it didn't really offer much.
Again, kernel processes should be able to be rescheduled on the second core, so, yes, it should offer "more". Even if some of the programs running are single-threaded, having two cores should allow the processor to run two processes at the same time (even kernel processes that I alluded to earlier - flushing caches, handling wifi/bluetooth events, etc.). We should not be seeing "stalling" or "stuttering" much if at all. One thing I can think of is that kernel support for the Tegra 2 is still immature. Another is that Viewsonic/Malata/Nvidia got something wrong in the hardware....
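A rough way to see whether the second core actually picks up that background work is to sample the per-core counters in /proc/stat (standard Linux, nothing device specific) while wifi is connecting or a download is running:

```shell
#!/bin/sh
# Sample the per-core jiffy counters twice. If cpu1's numbers move
# between the two samples, the second core is doing real work.
# Columns after the label are user/nice/system/idle/... times.
grep '^cpu[0-9]' /proc/stat
sleep 2
echo "--- 2 seconds later ---"
grep '^cpu[0-9]' /proc/stat
```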

This could be incorrect, but:
http://www.taranfx.com/dual-core-tegra2-android-vs-iphone-4
and
http://www.androidcentral.com/nvidia-tells-us-why-we-want-dual-core-processors
I keep thinking (and hoping) that native support in the next version of Android might mean better overall performance. I guess we'll know in a few months' time.

roebeet said:
This could be incorrect, but:
http://www.taranfx.com/dual-core-tegra2-android-vs-iphone-4
and
http://www.androidcentral.com/nvidia-tells-us-why-we-want-dual-core-processors
I keep thinking (and hoping) that native support in the next version of Android might mean better overall performance. I guess we'll know in a few months' time.
You're not going to give up on g-tablet and move to something else anytime soon are you?

jfholijr said:
You're not going to give up on g-tablet and move to something else anytime soon are you?
Never say never.
In all seriousness, I'm pretty happy with my GTab. It does what I need it to do and probably the only thing that bugs me is the viewing angles, but not that much. The good outweighs the bad.
I am watching the Notion Ink news though, as that could solve the screen problems. But at $500, it might be out of my range - remember that I got my Gtab in the Outlet for $215.

roebeet said:
Never say never.
In all seriousness, I'm pretty happy with my GTab. It does what I need it to do and probably the only thing that bugs me is the viewing angles, but not that much. The good outweighs the bad.
I am watching the Notion Ink news though, as that could solve the screen problems. But at $500, it might be out of my range - remember that I got my Gtab in the Outlet for $215.
Don't leave us! haha

Aww, I was one who thought this was true. I was hoping for a 5000-plus Quadrant score, haha. I'm satisfied with this tablet... for now. I'm definitely sticking with Tegra devices; I won't get rid of this tablet until a Tegra device with more RAM, a better screen, and maybe a faster processor comes along.

I could just be wrong, but I was under the impression that Froyo was to blame for not using multicore, as it wasn't supported, and that this was yet another reason why we were all waiting on Gingerbread, as it would utilize it.

Well, if this is true and it has multi-core capabilities, you would think new software should be on the way...

Yep, the stock/TnT kernel definitely uses both cores:
# cat /proc/cpuinfo
Processor : ARMv7 Processor rev 0 (v7l)
processor : 0
BogoMIPS : 1998.84
processor : 1
BogoMIPS : 1992.29
Features : swp half thumb fastmult vfp edsp vfpv3 vfpv3d16
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x1
CPU part : 0xc09
CPU revision : 0
Hardware : NVIDIA Harmony Development System
Revision : 0000
Serial : 0000000000000000
#
The big question is: does Froyo's Dalvik VM support both or not? The OS natively runs utilizing both, but applications have to be optimized for multi-processor/core use, particularly the Java virtual machine stuff in the proprietary Android layers. Good question for one of the bigger devs like Cyanogen; I'll see what I can find out. Either way, native support across the board for multi-core is definitely going to be a huge part of Android in the future, so I wouldn't be surprised if this was already written into GB (at least partially) and will be 100% by HC's release.

markgolly said:
Well, I just unpacked the G's stock /proc/config.gz kernel configuration and determined that the kernel is set for multicore.
Therefore, the rumor that the gtab is only using one core is wrong - at least from a kernel perspective. Does anyone have any information as to where this rumor got started?
Yes. You can also see this in the log files, e.g. the kernel ring buffer via dmesg: it does bring up both cores, which is further supported by a BogoMIPS value that corresponds to 2 x 1 GHz A9s.
As others have mentioned, it is the layers ABOVE the kernel that don't seem to be threaded fully, or enough to really take advantage of multiple CPUs... i.e. the Android "OS" layer & apps...
Oddly enough even though I am keeping the gTab I still haven't really dug through /proc on this one much yet. It was the first or second thing that I did with the GT & WPDN along with examining the kernel ring buffer...
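For anyone who wants to see the same evidence, the bring-up messages are in the kernel ring buffer. A sketch (the exact wording of the messages varies between kernel versions, so the pattern is deliberately loose, and dmesg may need root on some builds):

```shell
#!/bin/sh
# Grep the kernel ring buffer for secondary-CPU bring-up and BogoMIPS
# lines; fall back to a note if dmesg is restricted or finds nothing.
out=$(dmesg 2>/dev/null | grep -iE 'cpu1|bogomips|smp' | head -20)
if [ -n "$out" ]; then
    echo "$out"
else
    echo "dmesg not readable here, or no matching lines (try as root)"
fi
```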

Related

aDOSBox and X-Com

I'm trying to figure out good settings for aDosBox to run on my TF. The X-Com 1 game is pretty sluggish, no matter what frameskip and cycles count I set. Sometimes it's better, sometimes it's bad, but I can't figure out the optimal settings.
Maybe someone at least has the correct cycles number for our Tegra 2, so emulation will run at the same speed as the CPU?
Oh, and getting the mouse pointer to work IN the game, not ABOVE the game, during emulation would also be brilliant. Using the touchscreen as a touchpad is pretty awkward.
Hope someone's been toying with aDosBox on the TF as I have and has some recommendations =)
+1 from me for that.
I'm also looking for a good setting on the TF for Master of Orion 2.
aDosBox is just slower than anDosBox. That's just how it is; I was a long-time user of aDosBox and went over to anDosBox because it had more features and was more compatible.
I think, from personal experience, that with both emulators you can't get more than about 8000-10000 cycles. You can set it higher, but the Tegra 2 SoC won't actually go any higher than that. So games that need 15,000 or more will stutter or run slowly. I have this happening with Crusader: No Regret and No Remorse, and Doom 1/2 as well.
Basically, with anDosBox it'll run okay for anything that was meant to run on a 386 (Wolf3D, Raptor, OMF are some popular ones). Anything more than that is gonna slow down.
aDosBox feels like it runs 1000-2000 cycles slower, in my experience.
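For reference, the cycles cap being discussed lives in the [cpu] section of the dosbox.conf these ports read. A sketch of the relevant fragment; the numbers are values to experiment with, not verified optimums for Tegra 2:

```ini
[cpu]
# The dynamic (recompiling) core is x86-only in stock DOSBox, so ARM
# ports are stuck with the slower interpreting core.
core=normal
# "cycles=auto" ramps up and can stutter; a fixed value near the
# ~8000-10000 ceiling mentioned above is more predictable.
cycles=fixed 8000

[render]
# A little frameskip buys back speed in games that want 15000+ cycles.
frameskip=2
```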
kaijura said:
aDosBox is just slower than anDosBox. That's just how it is; I was a long-time user of aDosBox and went over to anDosBox because it had more features and was more compatible.
I think, from personal experience, that with both emulators you can't get more than about 8000-10000 cycles. You can set it higher, but the Tegra 2 SoC won't actually go any higher than that. So games that need 15,000 or more will stutter or run slowly. I have this happening with Crusader: No Regret and No Remorse, and Doom 1/2 as well.
Basically, with anDosBox it'll run okay for anything that was meant to run on a 386 (Wolf3D, Raptor, OMF are some popular ones). Anything more than that is gonna slow down.
aDosBox feels like it runs 1000-2000 cycles slower, in my experience.
Holy crap! It IS much better! ANdosbox ftw, really. Worth every penny. And it has much better mouse emulation - I just use pen mode + tap-click when on base/geoscape, and touchpad style + volUP-volDOWN in combat. Some wrong clicks in combat happen, but not very often. Thanks for the tip!
I'm curious -- how stable / compatible do you guys find Andosbox?
You probably can't do anything about the sluggishness; it's likely just a limitation of the hardware you're working with.
knoxploration said:
I'm curious -- how stable / compatible do you guys find Andosbox?
Stable is probably 99% or near 100%. If Dosbox can run it, most likely anDosBox can run it, just my opinion.
The question is how intensive the program is. Like I mentioned earlier, ~8-10k seems to be the limit for the CPU cycles.
There are two parts to it. One part, I think, is due to the limitations of the SoC: it probably runs anDosBox at 1.0 GHz on a single core. The other part is the code itself; as you can tell, aDosBox runs slower than anDosBox. Something was improved in his version of the ported code; we won't know what unless we contact the anDosBox dev. He's pretty quick with support emails.
Here are the hardware requirements/compatibility listed on the Dosbox page itself. I imagine our android devices need a little more juice than the desktop CPU equivalents.
http://www.dosbox.com/wiki/System_Requirements
Code:
Host Architecture    Host CPU Speed   Equivalent Emulated CPU Class (dynamic core)
x86 (Pentium II)     400 MHz          386
x86 (Duron)          800 MHz          486
x86 (Pentium III)    1.0 GHz          high-end 486
x86 (Intel Atom)     1.6 GHz          high-end 486
x86 (Pentium 4)      3.0 GHz          high-end Pentium   (Duke Nukem 3D tested, smooth at 640x480; Quake ~40 fps in 320x200 mode)
x86 (Pentium M)      1.8 GHz          Pentium II
x86 (Athlon XP)      1.8 GHz          Pentium II
x86 (Athlon 64)      1.8 GHz          Pentium III
x86 (Core 2 Duo)     (any speed)      Pentium III
Apple G3             500 MHz          386/486-class      (tested: Leisure Suit Larry 6, Fuzzy's World of Miniature Space Golf; extrapolated from ~50% CPU usage on a 1 GHz G4)
Apple G4             1.0 GHz          486-class          (adequate for most DOS games; SVGA likely to be too much)
knoxploration said:
I'm curious -- how stable / compatible do you guys find Andosbox?
Very stable. One thing though - on the TF you want to save before switching to another app, because in background mode it just restarts most of the time; so after checking that new mail and switching back, you'll be greeted by a command line in anDosBox.
Maybe it's only on my TF though; if you guys have some workaround for that, it would be nice to know. God, I want an app that can set other apps to never EVER go to the background - let them eat memory, it's OK with me.
tixed said:
Very stable. One thing though - on the TF you want to save before switching to another app, because in background mode it just restarts most of the time; so after checking that new mail and switching back, you'll be greeted by a command line in anDosBox.
Maybe it's only on my TF though; if you guys have some workaround for that, it would be nice to know. God, I want an app that can set other apps to never EVER go to the background - let them eat memory, it's OK with me.
That's just a "feature" of Android, and one of the reasons I believe it needs proper multitasking. Android decides when programs get closed, not you. The same thing frequently causes me to lose a connection on a website chat client I have to use for work...
It's a good feature. The problems you see are because lazy programmers don't save the state when they should. (I'm guilty of it too in one of my games, which had too complicated a state to easily save.) Every time you leave a program for a while, it should save the state it's in, in order to restore it in case it's thrown out of memory.
The best solution for Google would be to add automatic save and restore by saving the whole VM (it would probably be slow on some devices, though).
Magnesus said:
It's a good feature. The problems you see are because lazy programmers don't save the state when they should. (I'm guilty of it too in one of my games, which had too complicated a state to easily save.) Every time you leave a program for a while, it should save the state it's in, in order to restore it in case it's thrown out of memory.
The best solution for Google would be to add automatic save and restore by saving the whole VM (it would probably be slow on some devices, though).
Lazy programmers like Google themselves? The stock Android web browser does it.
Sorry, but it's not a good feature if it can't be overridden by the user when it gets it wrong (as it always will). It's a bad feature.
knoxploration said:
Sorry, but it's not a good feature if it can't be overridden by the user when it gets it wrong (as it always will). It's a bad feature.
This. Exactly.
I know that it's a feature, but Android (unlike iOS, for example) is all about user control and customization. At the least, the OS should save the whole VM state when an app goes to the background, instead of depending on programmers.
Maybe they won't do that because it might affect speed, and Android has been called "sluggish" enough times already...
knoxploration said:
Lazy programmers like Google themselves? The stock Android web browser does it.
If you are talking about the slow, laggy and often crashing stock browser from HoneyComb, then I'd say, yes.
I think they should've just automated the process (by automatically hibernating the VMs of apps that need to be closed and resuming them when the user gets back). That would've solved this problem once and for all and made programmers happy.
Guys, I've been trying out the various DosBox emulators available on the market.
I tried aDosBox, anDosBox, and I just found a new one the other day called DosBox Turbo.
I figured I'd give DosBox Turbo a try with X-Com, because it's sluggish on the other emulators. After playing X-Com for a while, I can say that DosBox Turbo is definitely faster than the others. The virtual joystick also supports multi-touch, which is a bonus.
I also tried C&C Red Alert, I had to change the memory limit to 8MB, but it too worked just fine.
Edit: Now has full Trackpad support in DOS Games! YES!! =)
gururise said:
Guys, I've been trying out the various DosBox emulators available on the market.
I tried aDosBox, anDosBox, and I just found a new one the other day called DosBox Turbo.
I figured I'd give DosBox Turbo a try with X-Com, because it's sluggish on the other emulators. After playing X-Com for a while, I can say that DosBox Turbo is definitely faster than the others. The virtual joystick also supports multi-touch, which is a bonus.
I also tried C&C Red Alert, I had to change the memory limit to 8MB, but it too worked just fine.
How do they emulate mouse clicks? Because in X-Com combat I think my volume up key would be broken very soon
tixed said:
gururise said:
Guys, I've been trying out the various DosBox emulators available on the market.
I tried aDosBox, anDosBox, and I just found a new one the other day called DosBox Turbo.
I figured I'd give DosBox Turbo a try with X-Com, because it's sluggish on the other emulators. After playing X-Com for a while, I can say that DosBox Turbo is definitely faster than the others. The virtual joystick also supports multi-touch, which is a bonus.
I also tried C&C Red Alert, I had to change the memory limit to 8MB, but it too worked just fine.
How do they emulate mouse clicks? Because in X-Com combat I think my volume up key would be broken very soon
There have been three updates in the past 3 days. I'm happy to say the latest one added FULL support for my transformer's trackpad. Left clicks work great on Honeycomb, with the right click mapped to the 'back' button as usual.
On my Transformer Prime with ICS, both left and right clicks are working great! So I guess if you are on Honeycomb, you'll get left click.. If you are on ICS, you'll get both Left & Right clicks. I've been playing lots of MOO2 and XCom lately! =)
Been playing around with DosBox Turbo for the past week, and it's just amazing on the Transformer.
It's already up to v1.1.3 just in the last week, and the author has listened to feedback. Absolutely everything works on the Transformer. The trackpad performs flawlessly; I even have right click (using ICS). You can map the back key to escape, and it's in the perfect location to use as an escape key. The search key can also be re-mapped.
It's significantly faster than aDosBox and around 15% faster than anDosBox, and it supports all the hardware on the Transformer. I played Doom, Warcraft, UFO, Space Quest, and even got The 11th Hour to work acceptably. Where the other DosBox emulators wouldn't even support the trackpad on the Transformer, this one even worked perfectly with my external USB mouse (both left and right click!)

Dual-core processor processing

Hi,
I was wondering: are the 2 CPUs working simultaneously together, or is it just 1? I'm using FLEXREAPER X10 ICS 4.0.3. Sometimes I get screen glitches when my tab is trying to sleep and I touch the screen. Also, when I try a benchmark it only shows CPU1's processing speed, etc. And when I'm browsing the Play Store the screen animation lags a bit. Can someone enlighten me, or is there an app that can force the 2 CPUs to work together all the time?
Yes, both cores are enabled at all times. But no, you cannot make an application use both cores unless the application was designed to do so.
FLEXREAPER X10 ICS 4.0.3 is based on a leaked ICS ROM, not a stable one, so it has some problems.
Your benchmark is correct.
There are NOT 2 CPUs. There is only one CPU, with 2 cores. It doesn't process two applications at once, but it CAN process two threads of the same application at the same time. Think of it like this: two CPUs would be two people writing on different pieces of paper. A single CPU with two cores would be one person writing with both hands at the same time; he can only write on the same piece of paper, but it's faster than it would be if he were writing with only one hand.
Note: this is not related to multi-tasking. Multi-tasking works by processing a little bit of each app at a time, so although it may seem that both are running at the same time, they are not.
Most apps are not designed to work with threads, though, so there's your (actually, our) problem. But this is not an A500 problem; it applies to any multi-core-processor-based device out there (including desktops).
danc135 said:
There are NOT 2 CPUs. There is only one CPU, with 2 cores.
Essentially true, but...
It doesn't process two applications at once
False. Two cores are just two CPUs on the same die.
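That's easy to sanity-check from a shell: two completely independent single-threaded processes do get spread across the cores by the scheduler, so running both in parallel takes roughly the wall time of one (a bash sketch; timings obviously vary by device and core count):

```shell
#!/bin/bash
# Two busy-loop *processes* (not threads). On an SMP kernel the
# scheduler places them on different cores, so the parallel run's
# wall time is close to the single run's, not double it.
busy() {
    local i=0
    while (( i < 200000 )); do (( i++ )); done
}
time busy                      # one process' worth of work
time { busy & busy & wait; }   # the same work twice, in parallel
```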
Thanks for the responses, guys... I'm getting a bit confused by this "multi-core processor"... I was expecting it to be fast, with no lag, when browsing apps in my library, switching applications, even browsing the Play Store. So is it correct to say that a multi-core processor is a bit of a waste if an app can't use its full potential across all cores? And likewise if the UI of the OS can't use all cores at the same time?
Dual Core, Dual CPU....
Not entirely, because if the kernel is capable of multi-threading, then it can use one core to run services while another is running the main application. The UI is only another application running on top of the kernel...
The only difference between a dual core Intel cpu and a dual core tegra 2 is the instruction set and basic capabilities, otherwise they can be thought of as essentially the same animal. The kernel, which is the core of the OS, handles the multi-tasking, but android has limited multi-tasking capabilities for Applications. Even so, services that run in the background are less of a hindrance on a dual core cpu than a single core one, and more and more applications are being written to take advantage of multiple cores.
Just have a bunch of widgets running on your UI, and you are looking at multi-tasking and multi-threading. Which are both better on multi-core processors.
A multi-core CPU is nothing more than multiple processors stacked on one die. They thread and load-balance through software, and applications MUST BE AWARE of multi-core CPUs to take advantage of the dual cores.
A multi-processor computer has a third chip on the main board; this chip balances the load in hardware, which does not add overhead on the processors, whereas a dual multi-core chip has a much higher load overhead.
So many people confuse the two. This is due to how the companies market multi-core CPU devices.
So an application that cannot thread itself on a multi-core chip will run on one of the CPU cores; a threaded app can, well, guess?
A dual-processor computer can run a non-multi-thread-aware app or program on two cores.
It's quite simply complicated..
You can throw all the hardware you want at a system. In the end, if the software sucks (not multi-threaded, poorly optimized, bad at resource management, etc.), it's still going to perform badly.
Dual core doesn't mean it can run one application at twice the speed; it means it can run two applications at full speed, even if they're not threaded. Android is largely meant to run one application in the foreground, and since they can't magically make every application multi-threaded, you won't see the benefits of multiple cores as much as you would on a more traditional platform.
Also, a dual-core Tegra 2 is good, but only in comparison to other ARM processors (and even then, it's starting to show its age). It's going to perform poorly compared to a full x86 computer, even an older one.
netham45 said:
You can throw all the hardware you want at a system. In the end, if the software sucks (not multi-threaded, poorly optimized, bad at resource management, etc.), it's still going to perform badly.
Dual core doesn't mean it can run one application at twice the speed; it means it can run two applications at full speed, even if they're not threaded. Android is largely meant to run one application in the foreground, and since they can't magically make every application multi-threaded, you won't see the benefits of multiple cores as much as you would on a more traditional platform.
Also, a dual-core Tegra 2 is good, but only in comparison to other ARM processors (and even then, it's starting to show its age). It's going to perform poorly compared to a full x86 computer, even an older one.
This is so true, with the exception of a TRUE server dual- or quad-processor computer: there is a special on-board chip that will thread application calls to balance the load for non-threaded programs and games. My first dual-processor computer was an AMD MP 3000, back when dual-CPU computers started to be within user price ranges. Most applications/programs did not multi-thread.
And yes, as far as computer speed and performance go, you will not gain any from this; you will only feel less lag when running several programs at once. A 2.8 GHz dual-processor computer still runs at 2.8, not double that.
erica_renee said:
With the exception of a TRUE server dual- or quad-processor computer: there is a special on-board chip that will thread application calls to balance the load for non-threaded programs and games.
Actually this is incorrect. All such decisions are left to the OS's own scheduler, for multiple reasons: the CPU cannot know what kinds of tasks it is to run, what should be given priority under which conditions, and so on. For example, on a desktop PC, interactive, user-oriented, and in-focus applications and tasks are usually given more priority than background tasks, whereas on a server one either gives all tasks similar priority or handles task priorities based on task grouping. Not to mention real-time operating systems, which have entirely different requirements altogether.
If it was left to the CPU the performance gains would be terribly limited and could not be adjusted for different kinds of tasks and even operating systems.
(Not that anyone cares, I just thought to pop in and rant a little...)
Self correction
I said a multi-core processor only runs threads from the same process. That is wrong (thanks to my Computer Architecture professor for misleading me). It can run multiple threads from different processes, which would constitute true parallel processing. It's just better to stick with same process threads because of shared memory within the processor. Every core has its own cache memory (level 1 caches), and shared, on-die level 2 caches.
It all depends on the OS scheduler, really.
With ICS (and future Android versions), I hope the scheduler will improve to get the best of multi-core.
In the end though, it won't matter if applications aren't multi-threaded (much harder to code). What I mean is, performance will be better, but not as good as it could be if developers used a lot of multi-threading.
To answer hatyrei's question, yes, it is a waste, in the sense that it has too much untapped potential.

Android and Multi-Core Processor

Bell points the finger at chipset makers - "The way it's implemented right now, Android does not make as effective use of multiple cores as it could, and I think - frankly - some of this work could be done by the vendors who create the SoCs, but they just haven't bothered to do it. Right now the lack of software effort by some of the folks who have done their hardware implementation is a bigger disadvantage than anything else."
What do you think about this, guys?
He knows his stuff.
Sent from my GT-I9300
I would take it with a pinch of salt. Though there are not many apps that take advantage of a multi-core processor, let's see what Intel says when they have their own dual-core processor out in the market.
Pretty good, valid arguments for the most part.
I mostly agree, though I think Android makes good use of up to 2 cores; anything more than that it doesn't use at all.
There is a huge chunk of the article missing too.
full article
jaytana said:
What do you think about this, guys?
I think they should all be covered in honey and then thrown into a pit full of bears and honey bees. And the bears should have, like, knives duct-taped to their feet, and the bees' stingers should be dipped in chilli sauce.
Reckless187 said:
I think they should all be covered in honey and then thrown into a pit full of bears and honey bees. And the bears should have, like, knives duct-taped to their feet, and the bees' stingers should be dipped in chilli sauce.
Wow, saying Android isn't ready for multi-core deserves such treatment? Or had this guy committed a more serious crime previously?
Actually it's a total fail, but I think it can be solved in Android 5.
This was a serious problem on desktop Windows as well, back when multi-core first started coming out. I remember having to download patches for certain games and, in other cases, having to set the CPU affinity to run certain games/apps with only one core so they wouldn't freeze up. I am sure Android will move forward with multi-core support in the future.
simollie said:
Wow, saying Android isn't ready for multi-core deserves such treatment? Or had this guy committed a more serious crime previously?
It's a harsh but fair punishment, IMO. They need to sort that sh*t out, as it's totally unacceptable, or they're gonna get a taste of the cat o' nine tails.
The Android kernel is based on Linux, so this is suggesting the Linux kernel is not built to support multi-core either. Not true. There is a reason the SGS3 gets 5000+ in Quadrant while the San Diego only gets 3000+. And the San Diego is running 200 MHz faster.
Just look at the blue bar here. http://www.engadget.com/2012/05/31/orange-san-diego-benchmarks/ . My SGS3 got over 2.5K on just CPU alone.
What Intel said was true: Android is multicore-aware, but the OS and apps aren't taking advantage of it. When this user disabled 2 cores on the HTC One X it made no difference at all in anything other than benchmarks.
http://forum.xda-developers.com/showpost.php?p=26094852&postcount=3
Disabling the CPU cores will do nothing to the GPU, hence still getting 60 FPS. And you say that as if you expected to see a difference. Those games may not be particularly CPU-intensive; that's why they continue to run fine. They are more than likely GPU-limited.
Android is not a difficult OS to run; that's why it can run on the G1, and why AOKP can run smooth as silk on my i9000. If it can run smooth as silk on a two-year-old 1GHz chip, how COULD it go noticeably faster on a next-gen chip like the one in the SGS3 or HOX? In terms of just using the phone, I've not experienced any lag at all.
If you're buying a phone with dual/quad CPU cores and only expecting to use it as a phone (i.e. not play demanding games, benchmark, mod, or whatever else), of course you won't see any advantage, and you may feel cheated. And if you disable those extra cores and still only use it as a phone, of course you won't notice any difference.
If a pocket calculator appears to calculate 1+1 instantly, and a HOX also calculates 1+1 instantly, is the pocket calculator awesome, is the HOX not using all its cores, or is what it is being asked to do simply not taxing enough to use all the CPU power the HOX has got?
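The core-disabling experiment linked above can be repeated on most rooted devices through the standard Linux CPU-hotplug files in sysfs. A minimal sketch, run from an adb shell (assumes root and a kernel built with hotplug support; the path is the stock Linux one, not specific to any ROM):

```shell
# Repeat the "disable a core" test via the Linux CPU-hotplug interface.
# Needs root; on a quad-core part, repeat for cpu2/cpu3 as well.
CPU1=/sys/devices/system/cpu/cpu1/online
if [ -w "$CPU1" ]; then
    echo 0 > "$CPU1"      # take core 1 offline
    cat "$CPU1"           # confirm it now reads 0
    echo 1 > "$CPU1"      # bring it back online
else
    echo "cpu1 hotplug not available (need root?)"
fi
```

With cpu1 offline, re-running a benchmark or a game shows how much the second core was actually contributing.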
I've been hearing this for some time now and is one of the reasons I didn't care that we weren't getting the quad core version of the GS3
916x10 said:
I've been hearing this for some time now and is one of the reasons I didn't care that we weren't getting the quad core version of the GS3
Click to expand...
Click to collapse
Okay folks... firstly, the Linux kernel, which Android is based on, is obviously multicore-aware, but most of the applications are not; that's true. Still, it's not Android that's to blame, nor the SoC makers. This is like the flame war Intel started when they claimed their single core could outrun a dual-core ARM, LOL (maybe Intel will make one core run 4 or 8 threads; impossible for now, who knows later).
You will notice the core usage while playing HD video that requires the CPU to decode (more cores decode faster)... and I'm not sure a single-core Intel does better than a dual-core ARM there, haha.
But for the average user the differences are not noticeable. If Intel is aiming for that market, then yes, it makes sense. Android users are above-average users, though; they will optimize their phones eventually, IMO.
What they have failed to disclose is which SoC they ran their test on, and their methodology. There's not much reason to doubt what he's saying, but remember that Intel currently has only a single-core mobile SoC and is trying to get a foothold in the mobile device ecosystem, so part of this could be throwing salt on competing products, since it's something that should be addressed by Google optimising the CPU scheduling of the OS.
The problem is in the chipset. I currently attend SUNY Oswego, and a professor of mine, Doug Lea, works on many concurrent structures. He is currently working on the ARM spec sheet that is used to make chips. The benchmarks that he has done show that no matter how lucky or unlucky you get, the time that it takes to do a concurrent operation is about the same, whereas on desktop chips there is a huge difference between best case and worst case. The blame falls on the people that make the chips, for now. They need to change how the chips handle concurrent operations, and if Android still can't use multi-core processors after that, then it falls on the shoulders of Google.
That is my two cents on the whole situation. Just finished a concurrency course with Doug, and after many talks this is my current opinion.
Sent from my Transformer Prime TF201 using XDA
Flynny75 said:
Disabling the CPU cores will do nothing to the GPU, hence still getting 60 FPS. And you say that like you expected to see a difference. Those games may not be particularly CPU intensive, thats why they continue to run fine. They will more than likely be GPU limited.
Android is not a difficult OS to run, thats why it can run on the G1, or AOKP can run smooth as silk on my i9000. If it can run smooth as silk on one 2yr old 1GHz chip, how COULD it go faster on a next-gen chip like in the SGS3 or HOX? In terms of just using the phone, ive not experienced any lag at all.
If youre buying a phone with dual/quad CPU cores, and only expecting to use it as a phone (i.e, not play demanding games/benchmark/mod/what ever else), of course you wont see any advantage, and you may feel cheated. And if you disable those extra cores, and still only use it as a phone, of course you wont notice any difference.
If a pocket calculator appears to calculate 1+1 instantly, and a HOX also calculates 1+1 instantly, Is the pocket calculator awesome, is the HOX not using all its cores, or is what it is being asked to do simply not taxing enough to use all the CPU power the HOX has got?
Click to expand...
Click to collapse
That doesn't mean daily tasks don't need the CPU power. When I put my SGS3 in power-save mode, which cuts the CPU back to 800 MHz, I feel the lag instantly when scrolling around and navigating the internet. So I conclude that performance per core is still much more important than the number of cores. There isn't any performance difference, either, between the dual-core Sensation XE running beside the single-core Sensation XL.
The hardware needs to be out for developers to have an incentive to make use of it. It's not like Android was built from the ground up to utilize 4 cores. That said, once it's in enough hands, the software running on it will be made to utilize the new hardware.

[Q] Just bought TF101, My Adventure in Rooting & Jelly Bean

So, nabbed an Asus Transformer TF101 off Craigslist for $225. With keyboard. Feeling good about that.
Next question for semi-geeky me. To leave a decent Ice Cream Sandwich -- the ASUS-approved version -- on the tablet or to venture into the unknown world of rooting and custom Jelly Bean ROMs? Sheesh. I tried to resist. But... just... could... NOT...
Did I mention I didn't have a Windows PC to make rooting a bit easier? That left me with the need to do it via an iMac. I've gone and lost that url, but think it is one of the pages on this site.
From there, how do I pick a ROM? All sorts of threads, all of 'em messy (at least to the noob in the room). So I noted EOS got a lot of uptalk and went that route. After more than a few false starts (manually typing in command lines kept introducing unintentional HUMAN ERROR into the mix) I got lift-off.
Did I mention my wife owns a Nexus 7 (one of the nicest little bits of hardware/software I know of... no no, the Nexus, not her!! She's stellar, but I digress).
In light of the Nexus' buttery feel, I was hoping for similar from my Asus Transformer. Well, not quite. Maybe the dual vs quad core chip has something to do with that. But I do very much like my larger (10") screen vs. her 7" and the keyboard... and Jelly Bean seems pretty darn nice even on a dual core tegra chip. Still hoping for a little more butter as the EOS nightly people do their thing. (I thank and thank them!!)
Oh. EOS answered my next problem before I got to it. How to overclock? Right in the setup I can do it.... tried all sorts of settings there and ended up backing it off to only 1.2 GHz (up from the stock 1.0). Not a game-player, just a blogger. Downloaded all my favorite apps -- Kindle reader, YouVersion Bible, Skype, and so on. Oh, and of course some board games so I can play 'em with my Dearling.
Last night EOS suddenly updates my gapps. Hmmm. No big change, except maybe things are slightly snappier?
Questions I still have:
I installed an older (I think) booter/recovery module (or whatever the heck it is called). "Team Rogue" "Rogue XM Recovery 1.5.0 (CWM-Based Recovery v5.0.2.8)"
This recovery does not let me write to my external SD card (or even read from it) but will write / read to a USB stick if I mount it of course via their menu. My question:
Is this the newest and best boot/recovery tool? And if not, how and to what tool should I upgrade/switch?
Really enjoying my toy.
I've come up with a few more questions of a semi-general nature... but perhaps overly technical. If wrongly posted here, please advise...
Why does the Tegra 2 chip in my TF101 apparently change speeds and therefore frequencies? Using the setup app in EOS's version of Jelly Bean, one can alter two frequency / speed settings -- minimum and maximum -- and I'm thinking that is one frequency per core?
The reason that matters is because I'm experiencing an occasional spontaneous reboot. My settings were at 216 MHz (minimum) and 1200 MHz (maximum). I'm in over my head at this point as far as knowing if the lower value in particular is too low.
Anyone else have any thoughts?
Thanks.
shonkin said:
I've come up with a few more questions of a semi-general nature... but perhaps overly technical. If wrongly posted here, please advise...
Why does the Tegra 2 chip in my TF101 apparently change speeds and therefore frequencies? Using the setup app in EOS's version of Jelly Bean, one can alter two frequency / speed settings -- minimum and maximum -- and I'm thinking that is one frequency per core?
The reason that matters is because I'm experiencing an occasional spontaneous reboot. My settings were at 216 MHz (minimum) and 1200 MHz (maximum). I'm in over my head at this point as far as knowing if the lower value in particular is too low.
Anyone else have any thoughts?
Thanks.
Click to expand...
Click to collapse
The 216 MHz is the slowest speed your CPU will run at, on both cores. If it's set too low this can cause reboots: the operating system crashes because it cannot get everything done that it needs to. Try upping it to 500 and playing around with the value until you don't get reboots; a low minimum is better for battery when in deep sleep etc., but can become unstable.
The 1200 MHz maximum could also cause reboots if too high; however, that doesn't sound high. Some go as high as 1500 or 1600, so that is probably not the issue.
The frequency, either min or max, applies to both cores equally on Tegra 2.
Your wife's Nexus has 4 cores plus a single low-power "ninja" companion core for background activity, so on hers you can set min and max for the 4 cores and the companion core separately.
Hope that helps!
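For reference, those min/max settings live in the kernel's cpufreq sysfs interface, so they can be checked or changed from an adb shell as well as from SetCPU-style apps. A sketch assuming the standard cpufreq layout and a rooted device (values are in kHz, so 216 MHz = 216000):

```shell
# Inspect and raise the cpufreq floor; on Tegra 2 it applies to both cores.
CF=/sys/devices/system/cpu/cpu0/cpufreq
MIN_MHZ=500
if [ -r "$CF/scaling_min_freq" ]; then
    cat "$CF/scaling_min_freq" "$CF/scaling_max_freq"   # current floor/ceiling, kHz
else
    echo "cpufreq interface not readable here"
fi
if [ -w "$CF/scaling_min_freq" ]; then
    echo $((MIN_MHZ * 1000)) > "$CF/scaling_min_freq"   # raise the floor to 500 MHz
else
    echo "scaling_min_freq not writable (need root?)"
fi
```

Changes made this way last until reboot, which makes it a quick way to test whether a higher floor stops the spontaneous reboots before committing to an app profile.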
gunswick said:
The 216 MHz is the slowest speed your CPU will run at, on both cores. If it's set too low this can cause reboots: the operating system crashes because it cannot get everything done that it needs to. Try upping it to 500 and playing around with the value until you don't get reboots; a low minimum is better for battery when in deep sleep etc., but can become unstable.
The 1200 MHz maximum could also cause reboots if too high; however, that doesn't sound high. Some go as high as 1500 or 1600, so that is probably not the issue.
The frequency, either min or max, applies to both cores equally on Tegra 2.
Your wife's Nexus has 4 cores plus a single low-power "ninja" companion core for background activity, so on hers you can set min and max for the 4 cores and the companion core separately.
Hope that helps!
Click to expand...
Click to collapse
The EOS (#79) ROM's latest update seems to have helped some... along with my using SetCPU (which may or may not be more efficient, but was suggested to me by another poster here).
I'm running with the very low 216 MHz still, but have upped the max all the way to 1600 MHz. So far, no spontaneous reboots like before, even when running Angry Birds, a browser, and other junk. I used SetCPU to create a battery-charging profile that allows the tablet to run between 1200 and 1600 when plugged in and charging. More to see if it worked than because I have any real need for it... and it did work just fine.
But I appreciate the comment re 216 being really low. And if it does exhibit strange behavior again, I'll monkey with the low setting to see if it helps.

Do you overclock your N7?

Do you?
Do you keep it overclocked for a longer period, permanently, or just when/while you need it? How much (exact frequencies would be cool)? I'm thinking of OCing mine (both CPU and GPU), since some games like NOVA 3 lag on occasion, but I'm not sure how safe/advisable it is.
Sent from my Nexus 7 using Tapatalk HD
I don't think it's needed. I've heard that OC won't help much with gaming, but you can definitely try
I don't yet - I might later. My N7 is still less than a month old.
The device manufacturers (e.g. Asus in this case) have motivations to "not leave anything on the table" when it comes to performance. So, you have to ask yourself - why would they purposely configure things to go slowly?
After all, they need to compete with other handset/tablet manufacturers, who are each in turn free to go out and buy the exact same Tegra SoC (processor) from Nvidia.
At the same time, they know that they will manufacture millions of units, and they want to hold down their product outgoing defect levels and in-the-field product reliability problems to an acceptable level. If they don't keep malfunctions and product infant mortality down to a fraction of a percent, they will suffer huge brand name erosion problems. And that will affect not only sales of the current product, but future products too.
That means that they have to choose a conservative set of operating points which will work for 99+ % of all customer units manufactured across all temperature, voltage, and clock speed ranges. (BTW, Note that Asus didn't write the kernel EDP & thermal protection code - Nvidia did; that suggests that all the device manufacturers take their operating envelope from Nvidia; they really don't even want to know where Nvidia got their numbers)
Some folks take this to mean that the vast majority of units sold can operate safely at higher speeds, higher temperatures, or lower voltages, given that the "as shipped" configuration will allow "weak" or "slow" units to operate correctly.
But look, it's not as if amateurs - hacking kernels in their spare time - have better informed opinions or data about what will work or won't work well across all units. Simply put, they don't know what the statistical test properties of processors coming from the foundry are - and certainly can't tell you what the results will be for an individual unit. They are usually smart folks - but operating completely in the dark in regards to those matters.
About the only thing which can be said in a general way is that as you progressively increase the clock speed, or progressively weaken the thermal regulation, or progressively decrease the cpu core voltage stepping, your chances of having a problem with any given unit (yours) increase. A "problem" might be (1) logic errors which lead to immediate system crashes or hangs, (2) logic errors (in data paths) that lead to data corruption without a crash or (3) permanent hardware failure (usually because of thermal excursions).
Is that "safe"?
Depends on your definition of "safe". If you only use the device for entertainment purposes, "safe" might mean "the hardware won't burn up in the next 2-3 years". Look over in any of the kernel threads - you'll see folks who are not too alarmed about their device freezing or spontaneously rebooting. (They don't like it, but it doesn't stop them from flashing dev kernels).
If you are using the device for work or professional purposes - for instance generating or editing work product - then "safe" might mean "my files on the device or files transiting to and from the cloud won't get corrupted", or "I don't want a spontaneous kernel crash of the device to cascade into a bricked device and unrecoverable files". For this person, the risks are quite a bit higher.
No doubt some tool will come in here and say "I've been overclocking to X Ghz for months now without a problem!" - as if that were somehow a proof of how somebody else's device will behave. It may well be completely true - but a demonstration on a single device says absolutely nothing about how someone else's device will behave. Even Nvidia can't do that.
There's a lot of pretty wild stuff going on in some of the dev kernels. The data that exists as a form of positive validation for these kernels is a handful of people saying "my device didn't crash". That's pretty far removed from the rigorous testing performed by Nvidia (98+% fault path coverage on statistically significant samples of devices over temperature, voltage, and frequency on multi-million dollar test equipment.)
good luck!
PS My phone has had its Fmax OC'ed by 40% from the factory value for more than 2 years. That's not proof of anything really - just to point out that I'm not anti-OC'ing. Just trying to say: nobody can provide you any assurances that things will go swimmingly on your device at a given operating point. It's up to you to decide whether you should regard it as "risky".
Wow, thanks for your educational response; I learned something. Great post! I will see whether I'll overclock it or not, since I can play with no problems at all; it's just that it hiccups when there is too much stuff going on. Thanks again!
Sent from my Nexus 7 using Tapatalk HD
With the proper kernel it's really not needed. Haven't really seen any difference, aside from benchmark scores (which can be achieved without OC'ing).
Sent from my Nexus 7 using XDA Premium HD app
Yes, I run mine at 1.6 peak.
I've come to the Android world from the iOS world - the world of the iPhone, the iPad, etc.
One thing they're all brilliant at is responsive UI. The UI, when you tap it, responds. Android, prior to 4.1, didn't.
Android, with 4.1 and 4.2, does. Mostly.
You can still do better. I'm running an undervolted, overclocked M-Kernel, with TouchDemand governor, pushing to 2 G-cores on touch events.
It's nice and buttery, and renders complex PDF files far faster than stock when the cores peak at 1.6.
I can't run sustained at 1.6 under full load - it thermal throttles with 4 cores at 100% load. But I can get the peak performance for burst demands like page rendering, and I'm still quite efficient on battery.
There's no downside to running at higher frequencies as long as you're below stock voltages. Less heat, more performance.
If you start pushing the voltages past spec, yeah, you're likely into "shortening the lifespan." But if you can clock it up, and keep the voltages less than the stock kernel, there's really not much downside. And the upside is improved page rendering, improved PDF rendering, etc.
Gaming performance isn't boosted that much as most games aren't CPU bound. That said, I don't game. So... *shrug*.
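For anyone curious which governor their own kernel is running, the same cpufreq sysfs files expose it. A read-only sketch (file names are the standard cpufreq ones; which governors appear depends entirely on the kernel build, e.g. TouchDemand only exists on kernels that ship it):

```shell
# Show the active governor and what the installed kernel offers.
CF=/sys/devices/system/cpu/cpu0/cpufreq
for f in scaling_governor scaling_available_governors scaling_available_frequencies; do
    if [ -r "$CF/$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$CF/$f")"
    else
        echo "$f: not exposed by this kernel"
    fi
done
```

Comparing scaling_available_frequencies between a stock and a dev kernel also shows at a glance whether the kernel adds any overclocked steps.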
Bitweasil said:
I can't run sustained at 1.6 under full load - it thermal throttles with 4 cores at 100% load.
Click to expand...
Click to collapse
@Bitweasil
Kinda curious about something (OP, allow me a slight thread-jack!).
in an adb shell, run this loop:
# cd /sys/kernel/debug/tegra_thermal
# while [ 1 ] ; do
> sleep 1
> cat temp_tj
> done
and then run your "full load".
What temperature rise and peak temperature do you see? Are you really hitting the 95C throttle, or are you using a kernel where that is altered?
I can generate (w/ a multi-threaded native proggy, 6 threads running tight integer loops) only about a 25C rise, and since the "TJ" in mine idles around 40C, I get nowhere near the default throttle temp. But I am using a stock kernel, so it immediately backs off to 1.2 GHz when multicore comes online.
Same sort of thing with Antutu or OpenGL benchmark suites (the latter of which runs for 12 minutes) - I barely crack 60C with the stock kernel.
?
bftb0
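A crude stand-in for that kind of threaded cpu-blaster can be spun up from any adb shell: one busy-looping subshell per core, killed when you are done watching the temperature. A sketch (the core count is an assumption; adjust it for your SoC, and lengthen the sleep for a real thermal test):

```shell
# Crude multi-core load generator: one busy loop per core.
NCORES=2            # Tegra 2; use 4 for a quad-core part
PIDS=""
i=0
while [ $i -lt $NCORES ]; do
    sh -c 'while :; do :; done' &   # tight loop, pegs one core
    PIDS="$PIDS $!"
    i=$((i + 1))
done
sleep 2             # run much longer (e.g. 120) to actually heat the chip
kill $PIDS
wait 2>/dev/null    # reap the killed loops
```

Run the temp_tj loop from the earlier post in a second shell while this is going to watch the rise.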
The kernel I'm using throttles around 70C.
I can't hit that at 1200 or 1300 - just above that I can exceed the temps.
I certainly haven't seen 95C.
M-Kernel throttles down to 1400 above 70C, which will occasionally get above 70C at 1400, but not by much.
Bitweasil said:
The kernel I'm using throttles around 70C.
I can't hit that at 1200 or 1300 - just above that I can exceed the temps.
I certainly haven't seen 95C.
M-Kernel throttles down to 1400 above 70C, which will occasionally get above 70C at 1400, but not by much.
Click to expand...
Click to collapse
Thanks. Any particular workload that does this, or is the throttle pretty easy to hit with arbitrary long-running loads?
Re: Do you overclock your N7?
I'll never OC a quadcore phone/tablet, I'm not stupid. This is enough for me.
Sent from my BMW E32 using XDA App
I've overclocked my phone, but not my N7. I've got a Galaxy Ace with a single-core 800MHz processor OC'd to 900+. The N7 with its quad-core 1.3GHz is more than enough for doing what I need it to do. Using franco.Kernel and everything is smooth and lag-free. No need for me to overclock.
Sent From My Awesome AOSPA3.+/franco.Kernel Powered Nexus 7 With XDA Premium
Impossible to do; I can't even get root, but I did manage to unlock the bootloader.
Sent from my Nexus 7 using xda app-developers app
CuttyCZ said:
I don't think it's needed. I've heard that OC won't help much with gaming, but you can definitely try
Click to expand...
Click to collapse
I'm not a big OC'er, but I do see a difference in some games when I OC the GPU. It really depends on the game and what the performance bottleneck is. If the app is not CPU-bound then an OC won't make much difference. Most games are I/O- and GPU-bound.
Sent from my N7 using XDA Premium
Dirty AOKP 3.5 <&> m-kernel+ a34(t.10)
I've overclocked all of my devices since my first HTC Hero. I really don't see a big deal with hardware life.
I know that this N7 runs games better at 1.6GHz than at 1.3GHz.
The first thing I do when I get a new device is swap the recovery and install AOKP with the latest and greatest development kernel. Isn't that why all this great development exists? For us to make our devices better and faster? I think so. I'd recommend AOKP and M-Kernel to every Nexus 7 owner. I wish more people would try non-stock.
scottx . said:
I've overclocked all of my devices since my first HTC Hero. I really don't see a big deal with hardware life.
I know that this N7 runs games better at 1.6GHz than at 1.3GHz.
The first thing I do when I get a new device is swap the recovery and install AOKP with the latest and greatest development kernel. Isn't that why all this great development exists? For us to make our devices better and faster? I think so. I'd recommend AOKP and M-Kernel to every Nexus 7 owner. I wish more people would try non-stock.
Click to expand...
Click to collapse
Do you mean the pub builds of AOKP, or Dirty AOKP?
Ty
bftb0 said:
Thanks. Any particular workload that does this, or is the throttle pretty easy to hit with arbitrary long-running loads?
Click to expand...
Click to collapse
Stability Test will do it reliably. Other workloads don't tend to run long enough to trigger it that I've seen.
And why is a quadcore magically "not to be overclocked"? Single threaded performance is still a major bottleneck.
Bitweasil said:
Stability Test will do it reliably. Other workloads don't tend to run long enough to trigger it that I've seen.
And why is a quadcore magically "not to be overclocked"? Single threaded performance is still a major bottleneck.
Click to expand...
Click to collapse
Hi Bitweasil,
I fooled around a little more with my horrid little threaded cpu-blaster code. Combined simultaneously with something gpu-intensive such as the OpenGL ES benchmark (which runs for 10-12 minutes), I observed peak temps (Tj) of about 83C with the stock kernel. That's a ridiculous load, though. I can go back and repeat the test, but from 40C it probably takes several minutes to get there. No complaints about anything in the kernel logs other than the EDP down-clocking, but that happens just as soon as the second cpu comes on line, irrespective of temperature. With either of the CPU-only or GPU-only stressors, the highest I saw was a little over 70C. (But, I don't live in the tropics!)
To your question - I don't think there is much risk of immediate hardware damage, so long as bugs don't creep into the throttling code, or kernel bugs don't prevent the throttling or down-clocking code from being serviced while the device is running in a "performance" condition. And long-term reliability problems will be no worse if the cumulative temperature excursions of the device are not higher than what they would be using stock configurations.
The reason that core voltages are stepped up at higher clock rates (& more cores online) is to preserve both logic and timing closure margins across *all possible paths* in the processor. More cores running means that the power rails inside the SoC package are noisier - so logic levels are a bit more uncertain, and faster clocking means there is less time available per clock for logic levels to stabilize before data gets latched.
Well, Nvidia has reasons for setting their envelope the way they do - not because of device damage considerations, but because they expect to have a pretty small fraction of devices that will experience timing faults *anywhere along millions of logic paths* under all reasonable operating conditions. Reducing the margin, whether by undervolting at high frequencies, or increasing max frequencies, or allowing more cores to run at peak frequencies, will certainly increase the fraction of devices that experience logic failures along at least one path (out of millions!). Whether or not OC'ing will work correctly on an individual device cannot be predicted in advance; the only thing that Nvidia can estimate is a statistical quantity - about what percent of devices will experience logic faults under a given operating condition.
Different users will have different tolerance for faults. A gamer might have very high tolerance for random reboots, lockups, file system corruption, et cetera. Different story if you are composing a long email to your boss under deadline and your unit suddenly turns upside down.
No doubt there (theoretically) exists an overclocking implementation where 50% of all devices would have a logic failure within (say) 1 day of operation. That kind of situation would be readily detected in a small number of forum reports. But what about if it were a 95%/5% situation? One out of twenty dudes report a problem, and it is dismissed with some crazy recommendation such as "have you tried re-flashing your ROM?". And fault probability accumulates with time, especially when the testing loads have very poor path coverage. 5% failure over one day will be higher over a 30 day period - potentially much higher.
That's the crux of the matter. Processor companies spend as much as 50% of their per-device engineering budgets on test development. In some cases they actually design & build a second companion processor (that rivals the complexity of the first!) whose only function is to act as a test engine for the processor that will be shipped. Achieving decent test coverage is a non-trivial problem, and it is generally attacked with extremely disciplined testing over temperature/voltage/frequency with statistically significant numbers of devices - using test-vector sets (& internal test generators) that are known to provide a high level of path coverage. The data that comes from random ad-hoc reports on forums from dudes running random applications in an undisciplined way on their OC'ed units is simply not comparable. (Even "stressor" apps have very poor path coverage).
But, as I said, different folks have different tolerance for risk. Random data corruption is acceptable if the unit in question has nothing on it of value.
I poked my head in the M-kernel thread the other day; I thought I saw a reference to "two units fried" (possibly even one belonging to the dev?). I assume you are following that thread ... did I misinterpret that?
cheers
I don't disagree.
But, I'd argue that the stock speeds/voltages/etc are designed for the 120% case - they're supposed to work for about 120% of shipped chips. In other words, regardless of conditions, the stock clocks/voltages need to be reliable, with a nice margin on top.
Statistically, most of the chips will be much better than this, and that's the headroom overclocking plays in.
I totally agree that you eventually will get some logic errors, somewhere, at some point. But there's a lot of headroom in most devices/chips before you get to that point.
My use cases are heavily bursty. I'll do complex PDF rendering on the CPU for a second or two, then it goes back to sleep while I read the page. For this type of use, I'm quite comfortable with having pushed clocks hard. For sustained gaming, I'd run it lower, though I don't really game.
