Will the new Android 4.4 features for battery saving work on the N4? - Nexus 4 Q&A, Help & Troubleshooting

Hi,
I read about Android 4.4's new features and found two that are of interest for the battery life.
The first one is this one:
With the new version of the operating system, Google allows audio to be tunneled straight to the digital signal processor on the smartphone's chipset. This means that the DSP takes care of audio decoding and output effects, which reduces CPU usage and increases battery life as a result.
So my question #1:
Since all Qualcomm SoCs traditionally have a DSP, do you think it will work for the N4, too?
The second one I found interesting:
Android 4.4 introduces platform support for hardware sensor batching, a new optimization that can dramatically reduce power consumed by ongoing sensor activities.
With sensor batching, Android works with the device hardware to collect and deliver sensor events efficiently in batches, rather than individually as they are detected. This lets the device’s application processor remain in a low-power idle state until batches are delivered. You can request batched events from any sensor using a standard event listener, and you can control the interval at which you receive batches. You can also request immediate delivery of events between batch cycles.
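For reference, from what I can tell the batching support is just an extra argument on the normal listener registration. A minimal sketch (assuming a KitKat-level SensorManager; the step-detector sensor and the 10-second latency are only illustrative):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class BatchedStepListener implements SensorEventListener {

    // Illustrative registration: let the sensor hardware buffer step events for up
    // to 10 seconds before waking the application processor with a batch.
    public static void register(SensorManager sm, SensorEventListener listener) {
        Sensor stepDetector = sm.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR);
        if (stepDetector == null) return;                  // no hardware support
        int samplingPeriodUs   = SensorManager.SENSOR_DELAY_NORMAL;
        int maxReportLatencyUs = 10_000_000;               // batch for up to 10 s
        // The four-argument overload (API 19+) enables batching; a latency of 0
        // falls back to the old one-event-at-a-time delivery.
        sm.registerListener(listener, stepDetector, samplingPeriodUs, maxReportLatencyUs);
        // sm.flush(listener) would request immediate delivery between batch cycles.
    }

    @Override public void onSensorChanged(SensorEvent event) { /* events arrive in bursts */ }
    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}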
That leads me to my second question:
I think this one should work on any phone, shouldn't it? Or did I read that wrong?
Regards
user

Yes, as far as I know from my research, the Snapdragon S4 Pro does have low-power processing as well, so all of it should work with the Nexus 4 when they finally release 4.4.

Related

Samsung Orion Dual-Core 1GHz Chipset Revealed, Expects to Ship 10 Million Galaxy Tabs

In another round of press releases from the Korean technology company today, Samsung’s announcing the existence of their 1GHz dual-core chipset based on the ARM Cortex A9, being named by them the Orion. This is looking to be the official successor to the 1GHz single-core Hummingbird chipset (based on ARM Cortex A8) seen in their phones today as Samsung’s already expressed plans to introduce the successor to the Samsung Galaxy S smartphones sometime in 2011. I’d bet money that they’ll be equipped with these beasts.
What will the Orion bring, anyway? 1080p video decoding and encoding (playback and recording), an on-chip HDMI 1.3a interface, embedded GPS, and a triple display controller to work alongside that HDMI interface (meaning you could possibly use your phone while a video is playing in high definition through HDMI on your television).
It’s said that the Orion will deliver 5x the 3D performance over the previous generation from Samsung, but they didn’t go into specifics regarding the GPU they’ll be using. It’s also being designed on a 45nm low-power die, meaning battery life might not take a hit compared to the relatively weaker chipsets of today. The chipset should be shipping later this year to select manufacturers.
Samsung’s also expecting to ship 10 million Galaxy Tabs worldwide, according to the Wall Street Journal. That’s an ambitious goal up against the iPad, but who are we to say Samsung can’t meet it? They’re doing just as well as they said they would in the smartphone market with the Galaxy S, and while we can’t judge performance between two different markets, we won’t count them out at all. Read on for the full press details.
More HERE
Samsung Introduces High Performance, Low Power Dual Cortex™-A9 Application Processor for Mobile Devices
TAIPEI, Taiwan–(BUSINESS WIRE)–Samsung Electronics Co., Ltd., a world leader in advanced semiconductor solutions, today introduced its new 1GHz ARM® Cortex™-A9-based dual-core application processor, codenamed Orion, for advanced mobile applications. Device OEM developers now have a powerful dual processor chip platform designed specifically to meet the needs of high-performance, low-power mobile applications including tablets, netbooks and smartphones. Samsung’s new processor will be demonstrated at the seventh annual Samsung Mobile Solutions Forum held here in Taiwan at the Westin Taipei Hotel.
“Consumers are demanding the full web experience without compromise while on the go,” said Dojun Rhee, vice president of Marketing, System LSI Division, Samsung Electronics. “Given this trend, mobile device designers need an application processor platform that delivers superb multimedia performance, fast CPU processing speed, and abundant memory bandwidth. Samsung’s newest dual core application processor chip is designed specifically to fulfill such stringent performance requirements while maintaining long battery life.”
Designed using Samsung’s 45 nanometer low-power process technology, Orion features a pair of 1GHz ARM Cortex-A9 cores, each with a 32KB data cache and a 32KB instruction cache. Samsung also included a 1MB L2 cache to optimize CPU processing performance and provide fast context switching in a multi-tasking environment. In addition, the memory interface and bus architecture of Orion support data-intensive multimedia applications including full HD video playback and high-speed 3D action games.
Samsung’s new application processor incorporates a rich portfolio of advanced multimedia features implemented by hardware accelerators, such as video encoder/decoder that supports 30fps video playback and recording at 1080P full HD resolution. Using an enhanced graphics processing unit (GPU), the new processors are capable of delivering 5 times the 3D graphics performance over the previous processor generation from Samsung.
For design flexibility and system BOM cost reduction, Orion integrates a set of interfaces commonly used in mobile devices to configure various peripheral functionalities. For example, with this processor, customers have the choice to use different types of storage including NAND flash, moviNAND™, SSD or HDD, as it provides both SATA and eMMC interfaces. Customers can also choose their appropriate memory options, including low-power LPDDR2 or DDR3, which is commonly used for high performance. In addition, a global positioning system (GPS) receiver baseband processor is embedded in the processor to seamlessly support location-based services (LBS), which are critical in many emerging mobile applications.
Orion features an onboard native triple display controller architecture that complements multi-tasking operations in a multiple display environment. A mobile device using the Orion processor can simultaneously support two on-device display screens, while driving a third external display such as a TV or a monitor, via an on-chip HDMI 1.3a interface.
Orion is designed to support package-on-package (POP) with memory stacking to reduce the footprint. A derivative of Orion, which is housed in a standalone package with a 0.8mm ball pitch, is also available.
Samsung’s new dual-core application processor, Orion, will be available to select customers in the fourth quarter of 2010 and is scheduled for mass production in the first half of 2011.
more information
Good info, but I have never been a fan of Tabs; I can see their purpose, but a big part of me sees them as a waste of money if I bought one. The battery life running that dual-core processor is what I would like to see confirmed and not "assumed".
As much as I'd like one of these, I won't buy one until Samsung has real customer service and actually releases a GPS fix. We'll see what happens this month. Hopefully Samsung comes through so I can continue supporting them.
New processors generally come with more advanced power saving features, so the battery life might even be better
Good to see progress,
But is there really anything on the Android Market that utilises all that power?!
There's scarcely any serious 3D games and not that much dev work.
boodies said:
New processors generally come with more advanced power saving features, so the battery life might even be better
I think power efficiency should be the main focus, not more power, unless they can accomplish both... then bring on the power...

Do you overclock your N7?

Do you?
Do you keep it overclocked for a longer period, permanently, or just when/while you need it? How much (exact frequencies would be cool)? I'm thinking of OC'ing mine (both CPU and GPU) since some games like NOVA 3 lag on occasion, but I'm not sure how safe/advisable it is.
I don't think it's needed. I've heard that OC won't help much with gaming, but you can definitely try
I don't yet - I might later. My N7 is still less than a month old.
The device manufacturers (e.g. Asus in this case) have motivations to "not leave anything on the table" when it comes to performance. So, you have to ask yourself - why would they purposely configure things to go slowly?
After all, they need to compete with other handset/tablet manufacturers, who are each in turn free to go out and buy the exact same Tegra SoC (processor) from Nvidia.
At the same time, they know that they will manufacture millions of units, and they want to hold down their product outgoing defect levels and in-the-field product reliability problems to an acceptable level. If they don't keep malfunctions and product infant mortality down to a fraction of a percent, they will suffer huge brand name erosion problems. And that will affect not only sales of the current product, but future products too.
That means that they have to choose a conservative set of operating points which will work for 99+ % of all customer units manufactured across all temperature, voltage, and clock speed ranges. (BTW, Note that Asus didn't write the kernel EDP & thermal protection code - Nvidia did; that suggests that all the device manufacturers take their operating envelope from Nvidia; they really don't even want to know where Nvidia got their numbers)
Some folks take this to mean that the vast majority of units sold can operate safely at higher speeds, higher temperatures, or lower voltages, given that the "as shipped" configuration will allow "weak" or "slow" units to operate correctly.
But look, it's not as if amateurs - hacking kernels in their spare time - have better informed opinions or data about what will work or won't work well across all units. Simply put, they don't know what the statistical test properties of processors coming from the foundry are - and certainly can't tell you what the results will be for an individual unit. They are usually smart folks - but operating completely in the dark in regards to those matters.
About the only thing which can be said in a general way is that as you progressively increase the clock speed, or progressively weaken the thermal regulation, or progressively decrease the cpu core voltage stepping, your chances of having a problem with any given unit (yours) increase. A "problem" might be (1) logic errors which lead to immediate system crashes or hangs, (2) logic errors (in data paths) that lead to data corruption without a crash or (3) permanent hardware failure (usually because of thermal excursions).
Is that "safe"?
Depends on your definition of "safe". If you only use the device for entertainment purposes, "safe" might mean "the hardware won't burn up in the next 2-3 years". Look over in any of the kernel threads - you'll see folks who are not too alarmed about their device freezing or spontaneously rebooting. (They don't like it, but it doesn't stop them from flashing dev kernels).
If you are using the device for work or professional purposes - for instance generating or editing work product - then "safe" might mean "my files on the device or files transiting to and from the cloud won't get corrupted", or "I don't want a spontaneous kernel crash of the device to cascade into a bricked device and unrecoverable files". For this person, the risks are quite a bit higher.
No doubt some tool will come in here and say "I've been overclocking to X Ghz for months now without a problem!" - as if that were somehow a proof of how somebody else's device will behave. It may well be completely true - but a demonstration on a single device says absolutely nothing about how someone else's device will behave. Even Nvidia can't do that.
There's a lot of pretty wild stuff going on in some of the dev kernels. The data that exists as a form of positive validation for these kernels is a handful of people saying "my device didn't crash". That's pretty far removed from the rigorous testing performed by Nvidia (98+% fault path coverage on statistically significant samples of devices over temperature, voltage, and frequency on multi-million dollar test equipment.)
good luck!
PS My phone has its Fmax OC'ed by 40% from the factory value for more than 2 years. That's not proof of anything really - just to point out that I'm not anti-OC'ing. Just trying to say - nobody can provide you any assurances that things will go swimmingly on your device at a given operating point. It's up to you to decide whether you should regard it as "risky".
Wow, thanks for your educational response, I learned something. Great post! I will see if I will overclock it or not, since I can play with no problems at all; it is just that it hiccups when there is too much stuff around. Thanks again!
With the proper kernel it's really not needed. Haven't really seen any difference, aside from benchmark scores (which can be achieved without OC'ing).
Yes, I run mine at 1.6 peak.
I've come to the Android world from the iOS world - the world of the iPhone, the iPad, etc.
One thing they're all brilliant at is responsive UI. The UI, when you tap it, responds. Android, prior to 4.1, didn't.
Android, with 4.1 and 4.2, does. Mostly.
You can still do better. I'm running an undervolted, overclocked M-Kernel, with TouchDemand governor, pushing to 2 G-cores on touch events.
It's nice and buttery, and renders complex PDF files far faster than stock when the cores peak at 1.6.
I can't run sustained at 1.6 under full load - it thermal throttles with 4 cores at 100% load. But I can get the peak performance for burst demands like page rendering, and I'm still quite efficient on battery.
There's no downside to running at higher frequencies as long as you're below stock voltages. Less heat, more performance.
If you start pushing the voltages past spec, yeah, you're likely into "shortening the lifespan." But if you can clock it up, and keep the voltages less than the stock kernel, there's really not much downside. And the upside is improved page rendering, improved PDF rendering, etc.
Gaming performance isn't boosted that much as most games aren't CPU bound. That said, I don't game. So... *shrug*.
Bitweasil said:
I can't run sustained at 1.6 under full load - it thermal throttles with 4 cores at 100% load.
@Bitweasil
Kinda curious about something (OP, allow me a slight thread-jack!).
in an adb shell, run this loop:
# cd /sys/kernel/debug/tegra_thermal
# while [ 1 ] ; do
> sleep 1
> cat temp_tj
> done
and then run your "full load".
What temperature rise and peak temperature do you see? Are you really hitting the 95C throttle, or are you using a kernel where that is altered?
I can generate (w/ a multi-threaded native proggy, 6 threads running tight integer loops) only about a 25C rise, and since the "TJ" in mine idles around 40C, I get nowhere near the default throttle temp. But I am using a stock kernel, so it immediately backs off to 1.2 GHz when multicore comes on line.
Same sort of thing with Antutu or OpenGL benchmark suites (the latter of which runs for 12 minutes) - I barely crack 60C with the stock kernel.
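(If anyone wants to reproduce that sort of load, here's a rough sketch of the kind of threaded integer-loop stressor I mean - plain Java here rather than the native code, and the thread count and run time are arbitrary.)

public class CpuBlaster {
    // Spin N threads on tight integer loops for a fixed time, purely to heat
    // the SoC while something like the temp_tj loop above is watched.
    public static void main(String[] args) throws InterruptedException {
        int threads = args.length > 0 ? Integer.parseInt(args[0]) : 6;
        long runMillis = 5 * 60 * 1000;               // 5 minutes of load
        long deadline = System.currentTimeMillis() + runMillis;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                long x = 1;
                while (System.currentTimeMillis() < deadline) {
                    x = x * 6364136223846793005L + 1442695040888963407L; // cheap integer work
                }
                if (x == 42) System.out.println("never"); // keep the loop from being optimized away
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
    }
}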
?
bftb0
The kernel I'm using throttles around 70C.
I can't hit that at 1200 or 1300 - just above that I can exceed the temps.
I certainly haven't seen 95C.
M-Kernel throttles down to 1400 above 70C, which will occasionally get above 70C at 1400, but not by much.
Bitweasil said:
The kernel I'm using throttles around 70C.
I can't hit that at 1200 or 1300 - just above that I can exceed the temps.
I certainly haven't seen 95C.
M-Kernel throttles down to 1400 above 70C, which will occasionally get above 70C at 1400, but not by much.
Thanks. Any particular workload that does this, or is the throttle pretty easy to hit with arbitrary long-running loads?
Odp: Do you overclock your N7?
I'll never OC a quadcore phone/tablet, I'm not stupid. This is enough for me.
I've over clocked my phone, but not my N7. I've got a Galaxy Ace with a single core 800MHz processor OC'd to 900+. The N7 with its quad core 1.3GHz is more than enough for doing what I need it to do. Using franco.Kernel and everything is smooth and lag-free. No need for me to overclock
Impossible to do so; can't even get root, but did manage to unlock the bootloader.
CuttyCZ said:
I don't think it's needed. I've heard that OC won't help much with gaming, but you can definitely try
I'm not a big OC'er, but I do see a difference in some games when I OC the GPU. It really depends on the game and what the performance bottleneck is. If the app is not CPU bound, then an OC won't make much difference. Most games are I/O and GPU bound.
Dirty AOKP 3.5 <&> m-kernel+ a34(t.10)
I've overclocked all of my devices since my first HTC hero. I really don't see a big deal with hardware life.
I know that this n7 runs games better at 1.6ghz than at 1.3ghz.
First thing I do when I get a new device is swap recovery and install aokp with the latest and greatest development kernel. Isn't that why all this great development exists? For us to make our devices better and faster? I think so. I'd recommend aokp and m-kernel to every nexus 7 owner. I wish more people would try non-stock.
scottx . said:
I've overclocked all of my devices since my first HTC hero. I really don't see a big deal with hardware life.
I know that this n7 runs games better at 1.6ghz than at 1.3ghz.
First thing I do when I get a new device is swap recovery and install aokp with the latest and greatest development kernel. Isn't that why all this great development exists? For us to make our devices better and faster? I think so. I'd recommend aokp and m-kernel to every nexus 7 owner. I wish more people would try non-stock.
Do you mean the pub builds of AOKP? Or Dirty AOKP
Ty
bftb0 said:
Thanks. Any particular workload that does this, or is the throttle pretty easy to hit with arbitrary long-running loads?
Stability Test will do it reliably. Other workloads don't tend to run long enough to trigger it that I've seen.
And why is a quadcore magically "not to be overclocked"? Single threaded performance is still a major bottleneck.
Bitweasil said:
Stability Test will do it reliably. Other workloads don't tend to run long enough to trigger it that I've seen.
And why is a quadcore magically "not to be overclocked"? Single threaded performance is still a major bottleneck.
Hi Bitweasil,
I fooled around a little more with my horrid little threaded cpu-blaster code. Combined simultaneously with something gpu-intensive such as the OpenGL ES benchmark (which runs for 10-12 minutes), I observed peak temps (Tj) of about 83C with the stock kernel. That's a ridiculous load, though. I can go back and repeat the test, but from 40C it probably takes several minutes to get there. No complaints about anything in the kernel logs other than the EDP down-clocking, but that happens just as soon as the second cpu comes on line, irrespective of temperature. With either of the CPU-only or GPU-only stressors, the highest I saw was a little over 70C. (But, I don't live in the tropics!)
To your question - I don't think there is much risk of immediate hardware damage, so long as bugs don't creep into throttling code, or kernel bugs don't cause a flaw that prevents the throttling or down-clocking code from being serviced while the device is running in a "performance" condition. And long-term reliability problems will be no worse if the cumulative temperature excursions of the device are not higher than what they would be using stock configurations.
The reason that core voltages are stepped up at higher clock rates (& more cores online) is to preserve both logic and timing closure margins across *all possible paths* in the processor. More cores running means that the power rails inside the SoC package are noisier - so logic levels are a bit more uncertain, and faster clocking means there is less time available per clock for logic levels to stabilize before data gets latched.
Well, Nvidia has reasons for setting their envelope the way they do - not because of device damage considerations, but because they expect to have a pretty small fraction of devices that will experience timing faults *anywhere along millions of logic paths* under all reasonable operating conditions. Reducing the margin, whether by undervolting at high frequencies, or increasing max frequencies, or allowing more cores to run at peak frequencies, will certainly increase the fraction of devices that experience logic failures along at least one path (out of millions!). Whether or not OC'ing will work correctly on an individual device cannot be predicted in advance; the only thing that Nvidia can estimate is a statistical quantity - about what percent of devices will experience logic faults under a given operating condition.
Different users will have different tolerance for faults. A gamer might have very high tolerance for random reboots, lockups, file system corruption, et cetera. Different story if you are composing a long email to your boss under deadline and your unit suddenly turns upside down.
No doubt there (theoretically) exists an overclocking implementation where 50% of all devices would have a logic failure within (say) 1 day of operation. That kind of situation would be readily detected in a small number of forum reports. But what about if it were a 95%/5% situation? One out of twenty dudes report a problem, and it is dismissed with some crazy recommendation such as "have you tried re-flashing your ROM?". And fault probability accumulates with time, especially when the testing loads have very poor path coverage. 5% failure over one day will be higher over a 30 day period - potentially much higher.
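(To put rough numbers on that last point: assuming, purely for illustration, an independent 5% chance of hitting a fault on any given day, the cumulative odds stack up like this.)

public class FaultOdds {
    public static void main(String[] args) {
        double dailyFaultProbability = 0.05;   // hypothetical 1-in-20 chance per day
        for (int days : new int[] {1, 7, 30}) {
            double cumulative = 1.0 - Math.pow(1.0 - dailyFaultProbability, days);
            System.out.printf("%2d days: %.0f%% chance of at least one fault%n", days, cumulative * 100);
        }
        // Roughly 5% after 1 day, 30% after a week, 79% after a month,
        // under this crude independence assumption.
    }
}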
That's the crux of the matter. Processor companies spend as much as 50% of their per-device engineering budgets on test development. In some cases they actually design & build a second companion processor (that rivals the complexity of the first!) whose only function is to act as a test engine for the processor that will be shipped. Achieving decent test coverage is a non-trivial problem, and it is generally attacked with extremely disciplined testing over temperature/voltage/frequency with statistically significant numbers of devices - using test-vector sets (& internal test generators) that are known to provide a high level of path coverage. The data that comes from random ad-hoc reports on forums from dudes running random applications in an undisciplined way on their OC'ed units is simply not comparable. (Even "stressor" apps have very poor path coverage).
But, as I said, different folks have different tolerance for risk. Random data corruption is acceptable if the unit in question has nothing on it of value.
I poked my head in the M-kernel thread the other day; I thought I saw a reference to "two units fried" (possibly even one belonging to the dev?). I assume you are following that thread ... did I misinterpret that?
cheers
I don't disagree.
But, I'd argue that the stock speeds/voltages/etc are designed for the 120% case - they're supposed to work for about 120% of shipped chips. In other words, regardless of conditions, the stock clocks/voltages need to be reliable, with a nice margin on top.
Statistically, most of the chips will be much better than this, and that's the headroom overclocking plays in.
I totally agree that you eventually will get some logic errors, somewhere, at some point. But there's a lot of headroom in most devices/chips before you get to that point.
My use cases are heavily bursty. I'll do complex PDF rendering on the CPU for a second or two, then it goes back to sleep while I read the page. For this type of use, I'm quite comfortable with having pushed clocks hard. For sustained gaming, I'd run it lower, though I don't really game.

Will it be possible to have 2 cpus ?

Will it be possible to have 2 CPUs on the Ara? It would be a beast if it could... (P.S. sorry if I have mistakes!)
51r said:
Will it be possible to have 2 CPUs on the Ara? It would be a beast if it could... (P.S. sorry if I have mistakes!)
I highly doubt it; I reckon the device would heat up so much and consume so much battery. Plus, I think it would take a much longer time for two mobile CPUs to play nice with each other.
I doubt it. It'd be cool if you could though but I still see no point as to why.
Yes, you could. The problem is that Android is not written to really use those two processors (it's only recently getting support to use dual cores, much less quad), so it would just be a waste of energy and space.
good post
riahc3 said:
Yes, you could. The problem is that Android is not written to really use those two processors (it's only recently getting support to use dual cores, much less quad), so it would just be a waste of energy and space.
I was going to suggest dual core. You beat me to it. Your post is good info; just like not jumping on the 64-bit bandwagon before devices have 8 or more GB of RAM [not storage].
I'm sure it would be great to have two CPUs, but I feel like all that power would go to waste. I'm sure it could bring more development, but still, what are you going to do with two CPUs at the current clock speeds we have now? The newest Kindle Fire is more powerful than my computer; I'm sure quad cores are quite enough for phones. Can't believe they make octa-cores, it's a huge waste.
Dual processors in Project Ara devices.
Actually, from a functional standpoint, I see no reason to have dual CPUs. Android can't make use of a dual processor system, and if it could, what benefit would it provide in real time?
The system as it is is too inefficient at handling CPU commands to support the current demands of a dual-CPU device.
With a dual CPU device, you also need to design additional power control regulation and filtering, additional battery support and ASIC devices to control the processor when demands are not being called upon, this adds a lot to the base architecture, and not really a financial benefit for a healthy profit margin. When you have finite board real estate for each individual module, you can't simply 'design-in' additional power control circuitry and maintain the same, or similar board dimensions, something has to give.
If we had everything we desired in a single device, I guarantee that device would be dimensionally unusable; the form factor would grow, costs would multiply, and with every feature added as 'standard', you would need to drag around an automotive-sized battery to operate all the options and features.
Personally, I prefer a robust RF section, and then a modular antenna system that uses PIN diodes so I can select internal or external antennas if I desire. Next, I would like to have Bluetooth access to the entire phone system and file structure, so I can, in essence, 'clone' my phone's parameters in a lab environment for testing applications and RF system compatibility.
The RF module should come standard with ALL known and used modulations, bands and coding, such as CDMA, GSM, WCDMA2000, TDMA, CQPSK and even 450 bands for Euro networks. Heck, I'd even like to see P25 thrown in for good measure, along with LTR and EDACS and OpenSky! (I work with a LOT of RF radio networks, including trunked systems, so of course I would love to have them all at my fingertips.)
Off-network communications are always a desire when you are in areas not served by cell sites, and point-to-point comms are always useful.
Instead of sacrificing capabilities, how about increasing usefulness instead?
Dual, quad, octa or more CPU cores would fit in one module, I guess, and yes, Android can't make use of dual CPUs like servers can.
2 cpus 1 phone
Maybe utilize a 4.0 GHz overclocked x64 cpu?
Since Google just helped develop a new CPU for Ara this may be possible now
I could see 2 CPUs as an either/or situation: heavy load, use the one geared for performance; screen off or battery-saving mode, use a decent single core that's geared towards battery life.
The thing about Project Ara is the aim seems to be to bring smartphones to the level of customization that we see in PCs. We could very well see some manufacturers who get on board with Ara eventually make SoCs that support dual processors if they feel there is a demand for it. Another interesting thought is if there comes about a project where we could design our own SoCs. Technically it's already possible if you are a hardware developer. I looked into what it would take to do it once, and from what I found it looks like you have to be a hardware developer, own your own company, and form a partnership with a chipset maker (e.g. Intel).
Current apps don't even use all 4 cores properly, let alone a second CPU.
Perhaps software in the system settings could detect the second cpu and allow you to allocate more/less power to separate processes and assign different apps to different cpus.
I think that 2 cores is possible. 2 CPUs depends on whether Android can run them.
Since the Ara uses the Tegra X1, there's a great chance it has 2 cores.
Imagine how powerful this phone will get in 1-2 years... :thumbup::thumbup:

[Q] Why do Android Wear watches have such powerful SoCs?

Hi,
I'm pretty curious why all the current Android Wear devices seem to have such powerful hardware built in.
As far as I can tell, almost all the processing is done on the phone, so the SoC should not need to be so fast and power hungry.
Any ideas on why this is?
My Pebble has an 80MHz single-core processor (if I read that correctly) and can do many of the things the Android Wear devices can. Of course this is apples and oranges, but I think that even with the touchscreen and everything, the processing power is unnecessarily overpowered...
Thanks.
Well, Moto 360 has a less powerful CPU. I think the reason is because these companies don't have the ability to design their own custom chips, other than maybe Samsung (who maybe just hasn't had time yet), so they need to use off the shelf chips that already have the drivers and kernels to run Android.
Older processors (like what's in the moto 360) are larger and more power hungry. Newer SoCs like the Snapdragon 400 used in the G Watch and Gear Live have higher-clocked, more powerful cores, but are manufactured with a smaller 28nm process. Smaller means more performance-per-watt. They disable all of the cores except one, which decreases power draw even more. Underclocking the one remaining core then saves even more power, all the while still performing even better than the old chip.
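(If you're curious, you can watch that policy from a shell or an app by reading the usual Linux cpufreq nodes - a hedged sketch, since the exact paths and read permissions vary from device to device.)

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CoreStatus {
    public static void main(String[] args) {
        // Which logical CPUs are currently online (e.g. just "0" when the extra cores are unplugged)
        System.out.println("online cpus : " + read(Paths.get("/sys/devices/system/cpu/online")));
        // Current and maximum frequency of cpu0, in kHz, if the cpufreq nodes are readable
        System.out.println("cpu0 cur kHz: " + read(Paths.get("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")));
        System.out.println("cpu0 max kHz: " + read(Paths.get("/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq")));
    }

    private static String read(Path p) {
        try {
            return new String(Files.readAllBytes(p)).trim();
        } catch (IOException e) {
            return "(unreadable: " + e.getMessage() + ")";
        }
    }
}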
I seriously think Motorola just had a truck load of those TI processors sitting in a warehouse somewhere and was trying to figure out a way to make some money off them. Here's hoping they get rid of them all before the next hardware revision.
CommanderROR said:
Hi,
I'm pretty curious why all the current Android Wear devices seem to have such powerful hardware built in.
As far as I can tell, almost all the processing is done on the phone, so the SoC should not need to be so fast and power hungry.
Any ideas on why this is?
My Pebble has an 80MHz single-core processor (if I read that correctly) and can do many of the things the Android Wear devices can. Of course this is apples and oranges, but I think that even with the touchscreen and everything, the processing power is unnecessarily overpowered...
Thanks.
Running a lower frequency on a powerful CPU is more efficient than running a higher frequency on a less powerful CPU.
Like another member has posted, there is perhaps more access to the current stockpile of CPUs, which is cheaper than designing a new CPU or ordering an out-of-stock CPU (which is costlier).
Or we do need to consider the possibility that there is more room for app developers to play with, without having the CPU as a limiting factor.
gtg465x said:
Well, Moto 360 has a less powerful CPU. I think the reason is because these companies don't have the ability to design their own custom chips, other than maybe Samsung (who maybe just hasn't had time yet), so they need to use off the shelf chips that already have the drivers and kernels to run Android.
The processors in the other watches are not custom chips; Motorola just decided they would rather save $10 in building the Moto 360 than let users have a watch that is more responsive and better on battery life.
johnus said:
Older processors (like what's in the moto 360) are larger and more power hungry. Newer SoCs like the Snapdragon 400 used in the G Watch and Gear Live have higher-clocked, more powerful cores, but are manufactured with a smaller 28nm process. Smaller means more performance-per-watt. They disable all of the cores except one, which decreases power draw even more. Underclocking the one remaining core then saves even more power, all the while still performing even better than the old chip.
I seriously think Motorola just had a truck load of those TI processors sitting in a warehouse somewhere and was trying to figure out a way to make some money off them. Here's hoping they get rid of them all before the next hardware revision.
This is the best reply thus far. The only other thing I would add on is that using the older processor has already been proven to lower battery life on a SmartWatch. This article is a great example: http://arstechnica.com/gadgets/2014/09/moto-360-review-beautiful-outside-ugly-inside/2/
You'll see there that the Moto 360 has similar overall performance, and lower battery life in the standardized tests he was able to create. This also takes into account his "screen-off" tests with battery life, leading the reviewer to believe the SoC was the culprit.
Thanks.

CPU core assigning: modem, GPU, etc.

With the availability of rooting the SM-N900V along with flashing Custom ROMs and Kernels, has anyone attempted to assign specific hardware tasks to the CPU cores?
The Snapdragon processor uses asynchronous core processing to allow tasks to be run on any or all cores, which is great for general use. However, from viewing how battery-saving and performance modes control ramping, it is my viewpoint that having a core focus on a specific function (i.e. computation for the modem or GPU), instead of anything and everything possible, would increase RAM efficiency as well as increase stability. The cores, in theory, would ramp independently of each other, instead of having a single core maxed out before it triggers others to come online.
I know that Linux systems have the isolcpus boot parameter (together with tools like taskset) to assign processes this way, but as involved as Android is, as well as being a POSIX system, it may be ridiculous to try to assign each and every process this way. Perhaps even more so with user apps being added and removed frequently.
I would love to hear insight as to whether this idea has any benefit to implementing, as well as reasons why it might be worse than the methods currently in place.
