How does the Snap 808 work with Motorola's Mobile Computing System? - X Style (Pure) Q&A, Help & Troubleshooting

Like the title says, how do these two systems work together? Are the 6 Snapdragon 808 cores completely separate from the Natural Language and Contextual Computing Processors? Or, does Motorola use two of the S808 cores? I don't think they'd do it the latter way, but I'm curious nonetheless.

Related

[OFFTOPIC] iPad 2 Dual-Core CPU made by Samsung

Apple's A5 CPU in iPad 2 confirms manufacturing by Samsung
source: http://www.appleinsider.com/article...ipad_2_confirms_manufacturing_by_samsung.html
That was quite a funny thing to read over morning breakfast.
The iPad 2's dual-core CPU is made by Samsung.
In a way, we can expect really good CPUs from Samsung for our next phone upgrade.
I wouldn't be surprised if the CPU used in the upcoming SGS2 is the same dual-core CPU as the one found in the iPad 2.
The same was the case with the iPhone 4, the original iPad, and the Samsung Galaxy S series of phones.
I'm actually kind of curious what kind of agreements the two have now. The A4/Hummingbird chip was originally created by Intrinsity and Samsung; then Apple acquired Intrinsity. I think they probably had shared IP the whole time and are continuing the relationship to bring the same basic chip design to both Apple and Samsung. The chips aren't identical, but they are pretty close. The CPU is the same, I believe, but since it's an SoC, the GPUs and other components aren't necessarily the same.
Is there any detailed information? I wonder if the iPad 2 uses Exynos...
d3sm0nd said:
Is there any detailed information? I wonder if the iPad 2 uses Exynos...
I doubt it. Exynos is the name of the SoC. They are likely using a similar Cortex-A9 CPU, but the SoC is likely customized depending on the application. Apple would have had little reason to acquire Intrinsity if they were going to use Samsung's whole package. That's how the A4 and Hummingbird were.
To add a little further proof, Apple is said to be using the SGX543MP GPU in the A5, while we know that the Orion (Exynos 4210) SoC that the SGS2 will be using has the Mali-400 GPU.
I'm not sure what Apple's intentions are exactly. They may just be interested in customizing their packages to their specific needs but getting the major parts (CPU, GPU, etc.) built by someone else, or they may be in a learning process to completely design their own chips in the future. They certainly have the money to do something like that, but I don't know that they have the interest.
At least that's how I see it all. If anyone else has further insight please let us know.
The SGX543MP4 (used in the Sony NGP) is wayyyyyyy better than the Mali-400, but you get what you get
Now, the interesting part about the PowerVR is that it is a true MIMD [Multiple Instruction, Multiple Data: http://en.wikipedia.org/wiki/MIMD ] architecture. In their press releases, ImgTech is bragging about the capabilities of the "GP-GPU", but even if we look at the specifications with a cool head, a lot of surprises are in store. The multi-core design is available in dual-, quad-, octal- and sedec-core variants [SGX543MP2, SGX543MP4, SGX543MP8, SGX543MP16], and they're by no means slouches.
For instance, a quad-core SGX543MP4 at only 200 MHz delivers 133 million polygons per second and offers a fill rate of four billion pixels per second [4 GPixel/s], in the range of GeForce 8600 cards. For that matter, 4 GPixel/s runs the 40nm GeForce GT210 [2.5 GPixel/s] into the ground, given that the GT210 runs at 589 MHz for the core and 1.4 GHz for the shaders. Since the PowerVR SGX543 targets handheld devices, there is no saying where the performance plateau is.
An eight-core SGX543MP8 at 200 MHz delivers 266 million polygons and eight billion pixels per second, while a faster-clocked version at, for instance, 400 MHz would deliver 532 million polygons and 16 billion pixels per second. 16 billion pixels per second equals a GeForce GTX 260-216, for instance.
After analyzing the performance at hand, it is no wonder that Sony chose to go with PowerVR for the next-generation PlayStation Portable. While the exact details of the SoC are still in question, our take is that Sony could go with a quad-core setup at 400 MHz [8 GPixel/s], paired with a dual-core CPU based on the ARM Cortex architecture. This would put Sony directly in line against the Tegra-powered Nintendo DS2, Apple's PowerVR-based iPhone 4G, and the Palm Pre 2.
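Side note: the figures in that quote scale linearly with both clock and core count, so a few lines of Python can sanity-check them (a toy model of the quoted numbers only):

```python
# First-order scaling model for the quoted SGX543MP figures:
# throughput scales linearly with core count and clock.
BASE_CORES, BASE_MHZ = 4, 200        # SGX543MP4 reference point
BASE_MPOLY, BASE_GPIX = 133, 4.0     # 133 Mpoly/s and 4 GPixel/s at that point

def scale(cores, mhz):
    f = (cores / BASE_CORES) * (mhz / BASE_MHZ)
    return round(BASE_MPOLY * f), BASE_GPIX * f

print(scale(8, 200))   # (266, 8.0):  matches the quoted MP8 @ 200 MHz
print(scale(8, 400))   # (532, 16.0): matches the quoted MP8 @ 400 MHz
print(scale(4, 400))   # (266, 8.0):  the speculated Sony quad @ 400 MHz
```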
ryude said:
The SGX543MP4 (used in the Sony NGP) is wayyyyyyy better than the Mali-400, but you get what you get
The source of this information is what exactly...?
martino2k6 said:
The source of this information is what exactly...?
The Mali-400 specs and performance figures have already been revealed, as have the SGX543MP4's. Benchmarks also favor the PowerVR.
Strange, so I guess that this disproves the other articles that have stated that Apple has had the Taiwanese company TSMC develop the chips for them.
Sent from my Nexus S
Carne_Asada_Fries said:
Strange, so I guess that this disproves the other articles that have stated that Apple has had the Taiwanese company TSMC develop the chips for them.
Sent from my Nexus S
The proof is solid and indeed disproves those other articles.
d3sm0nd said:
Is there any detailed information? I wonder if the iPad 2 uses Exynos...
The GPU is different in the iPad 2; it has the PowerVR SGX543MP2 (I think MP2 means two cores), according to AnandTech.
http://www.anandtech.com/Show/Index...rmance-explored-powervr-sgx543mp2-benchmarked
ryude said:
The Mali-400 specs and performance figures have already been revealed, as have the SGX543MP4's. Benchmarks also favor the PowerVR.
iPad has the MP2 variant, which has two cores. The Mali-400 has 4 cores. I mean, this doesn't mean much but personally I think it's still in the air until someone does proper benchmarks with optimised drivers on a final release model.
martino2k6 said:
iPad has the MP2 variant, which has two cores. The Mali-400 has 4 cores. I mean, this doesn't mean much but personally I think it's still in the air until someone does proper benchmarks with optimised drivers on a final release model.
I'll definitely be interested, since I just got the iPad 2 and tentatively plan on getting the SGS2. The biggest thing about Android, though, is that it's so hard to get apps that actually utilize the GPU to its fullest extent. Apps don't get updated for one top-of-the-line phone when most phones can't handle it, so in that sense I think we'll see better performance out of the iPad 2. It'll be interesting to see if the Tegra games run on the SGS2 and if they are optimized enough to make good use of the GPU.
Wouldn't it be possible, with an iPad that is jailbroken, to allow dual-booting into Android, since the processor will match that of Samsung's mobiles? Generally, doesn't the Chooser/firmware discrepancy usually disallow this? If this gap is now filled, it would seem doable.
Sent from my SAMSUNG-SGH-I897 using XDA App
crossfire2500 said:
Wouldn't it be possible, with an iPad that is jailbroken, to allow dual-booting into Android, since the processor will match that of Samsung's mobiles? Generally, doesn't the Chooser/firmware discrepancy usually disallow this? If this gap is now filled, it would seem doable.
Sent from my SAMSUNG-SGH-I897 using XDA App
And why would you want to do that? People buy iDevices for the UX which iOS gives, mainly the multitude of apps and ease of use that it provides. Furthermore, Steve Jobs would chop your head off...
crossfire2500 said:
Wouldn't it be possible, with an iPad that is jailbroken, to allow dual-booting into Android, since the processor will match that of Samsung's mobiles? Generally, doesn't the Chooser/firmware discrepancy usually disallow this? If this gap is now filled, it would seem doable.
Sent from my SAMSUNG-SGH-I897 using XDA App
The CPU is probably the easiest part. As long as it's an ARM CPU, you can compile support for it. It's the drivers for every other piece of hardware that would be important.

Galaxy S III Processor Information

Disclaimer:
I make no assertion of fact on any statement I make except where repeated from one of the officially linked documents. If it's in this thread and you can't find it in an official document, feel free to post your corrections, complete with a relevant link, and the OP can be updated to reflect the most correct information. By no means am I a subject matter expert. I am simply a device nerd that loves to read and absorb information on such things and share it with you. The objective of this thread is to inform, not berate, discredit, or otherwise talk trash about someone else's choice. Take that to a PM or another thread, please.
There is a LOT of misconception in the community over which hardware is the more capable kit. They are not the same, and therefore comparing them can be difficult at best. The TI white paper speaks to the many aspects of attempting such a thing. It is no small undertaking, so I ask that you trust their data before my opinion. However, I felt it necessary to have something resembling a one-stop thread to go to when you are wondering how the hardware differs between the two devices.
One thing you won't see me doing is using pointless synthetic benchmarks to justify a purchase or position. I use my device heavily but not for games. Web browsing, multiple email, etc......I use LTE heavily. A device can get a bajillion points in <insert your choice of synthetic benchmark> but that doesn't make me any more productive. Those into gaming will probably be more comfortable with the EU Exynos Quad, I'll just say that up front....it has a stronger GPU, but this isn't about that. It would be nice to keep this thread about the technology, not synthetic benchmark scores.
Dictionary of Terms (within thread scope):
SGSIII: Samsung Galaxy S 3 smartphone, variant notwithstanding
Samsung: manufacturer, proprietor of the Galaxy S III smartphone. Also responsible for designing the Exynos CPU used in the international variant of the SGSIII.
ARM: processor intellectual property company; they essentially own the IP rights to the ARM architecture. The ARMv7 architecture is what many processors are based upon at the root; this includes the Exynos by Samsung and the Krait S4 by Qualcomm, as used in the SGSIII, as well as many others. It's like the basic foundation, with the A9 and A15 feature sets being "options" that Samsung and Qualcomm add on.
Qualcomm: like Samsung, a manufacturer of processors; their contribution here is the S4 Krait CPU used in the US/Canadian-market SGSIII smartphone.
CPU: processor, central processing unit; the number-crunching heart of your phone. We are interested in two here: Samsung's Exynos and Qualcomm's Krait.
As most everyone knows by now, the EU and US variants of the SGSIII come with two different CPUs. The EU has the Samsung Exynos, the US the Qualcomm S4 Krait. One major reason, if not the only reason I am aware of, is that the Exynos lacks an integrated LTE modem, whereas Qualcomm's S4 Krait has the radio built into the package. It's an all-in-one design, while the Exynos is a discrete CPU and has to depend on secondary hardware for network connectivity. Obviously there are power implications any time you add additional hardware, because of redundancy and typical losses.
However, the scope of this thread is to point out some differences between the two very different CPUs so that you, the consumer, can make an educated decision based on more than a popularity contest or the "moar corez is bettar!" stance.
Anyone who is fairly into computers knows that counting cores as a measure of performance is risky at best, just like the megahertz wars of the 1990s....hopefully by now you all know not every 2 GHz CPU is the same, and not every CPU core is the same. You cannot expect an Intel 2 GHz CPU to perform the same as an AMD 2 GHz CPU. It's all about architecture.
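To put toy numbers on that point: useful performance is roughly cores used × IPC × clock, and the IPC term is the architecture. A sketch with made-up figures (illustration only, not measurements of these chips):

```python
# Toy model: useful performance ~ cores_used * IPC * clock (GHz).
# The IPC numbers are invented purely to illustrate the point.
def perf(cores_used, ipc, ghz):
    return cores_used * ipc * ghz  # "billions of instructions per second"

# A wider, more aggressive core design at a similar clock...
wide_dual = perf(cores_used=2, ipc=3.0, ghz=1.5)    # 9.0
# ...beats a narrower design when an app only loads two cores, quad or not.
narrow_quad = perf(cores_used=2, ipc=2.0, ghz=1.4)  # 5.6
print(wide_dual, narrow_quad)
```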
Architecture for the purpose of this thread is limited to the ARMv7 architecture and more specifically the A9 and A15 subsets of the architecture. Each architecture supports certain features and instruction sets. Additionally the internal physical parts of the core vary from one architecture to the next.
A9 is older technology in general, while A15 is much newer. The Exynos is A9-based; the Krait S4 is A15-class (a custom Qualcomm design with a comparable feature set). Let's look at the differences.
When looking at the two, one must understand that some of the documentation available compares what was available at the time it was written. In most cases, A9 info is based on the 40nm manufacturing process. Samsung's Exynos is built using the newer 32nm HKMG manufacturing process. The Qualcomm S4 Krait is built on a different, smaller TSMC 28nm manufacturing process. Generally speaking, the smaller the process, the less heat and power loss. There is also power leakage, etc......not going to get into it because, frankly, I haven't read enough to speak to it much. But don't take my word for it.
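For the curious, the usual first-order model behind "smaller process, less power" is P_dynamic ≈ α·C·V²·f, with leakage as a separate static term (the one HKMG attacks). A sketch with placeholder values, not real chip data:

```python
# First-order CMOS power: P_dynamic ~ alpha * C * V^2 * f (leakage separate).
# All values below are placeholders chosen to show the scaling, not real data.
def dynamic_power(alpha, cap_f, volts, freq_hz):
    return alpha * cap_f * volts**2 * freq_hz

p_old_node = dynamic_power(alpha=0.2, cap_f=1.0e-9, volts=1.2, freq_hz=1.4e9)
# A shrink mainly lowers switched capacitance and allows a lower voltage;
# HKMG separately cuts gate leakage (the static term, not modeled here).
p_new_node = dynamic_power(alpha=0.2, cap_f=0.7e-9, volts=1.0, freq_hz=1.4e9)
print(f"~{p_new_node / p_old_node:.2f}x the dynamic power")  # ~0.49x
```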
There is a lot of information out there but here are a few links to good information.
Exynos 32nm Process Info
Qualcomm S4 Krait Architecture Explained
Ti A15 White Papers
ARM Cortex A9 Info
ARM Cortex A15 Info
Samsung Exynos 4412 Whitesheet
Exploring the Design of the A15 Processor
I could link you to all sorts of web benchmarks and such, but to be honest, none of them are really complete, and I have not yet found one that can give an unbiased, apples-to-apples comparison. As mentioned previously, most of them compare the S4 Krait development hardware to the older 40nm Samsung Exynos hardware......which really doesn't represent what is in the SGSIII smartphones.
Now, a few takeaways that stood out to me from my own research. If you are unable to read someone's opinion without getting upset, please don't read on from here.
The Exynos EU variant that does not support LTE is, on paper, going to use more power and create more heat, due to it simply needing to rely on additional hardware for its various functions, where the S4 Krait has the radio built in. This remains to be seen, but battery life would be the biggest implication here. Although Samsung's latest 32nm HKMG process certainly goes a long way towards leveling the playing field.
The Exynos variant is built on older A9 core technology and, comparing feature sets, does not support things such as virtualization. Do you need VT on your phone? Only if the devs create an application for it, but I believe the ability to dual-boot different OSes is much easier done with VT available.
In contrast, the S4 Krait core does support this feature. I would like to look into dual-booting Windows Phone 8 and Android, and I hope having the hardware support and additional RAM (the EU version has 1GB of RAM, the US 2GB) will help in this area. Actual VT implementation may be limited in usefulness, to be seen.
The S4 Krait/Adreno 225 package supports DirectX 9.3, a requirement for Windows RT/Windows Phone 8 (not sure if required for the Phone version). In contrast, the Exynos Quad/Mali-400 does not support DirectX 9.3 and may or may not be able to run Windows RT/Windows Phone 8 as a result. From what I understand Windows Phone 8 may be an option.
Code compiled for the A9-derived Exynos has been around for quite some time, as opposed to A15 feature code. I really don't know much about this myself, but I would expect that the A15-based solution is going to have much longer legs under it, since it supports everything the Exynos Quad does plus some. My expectation is that, with time, code will be better optimized for the newer A15 architecture, where the Exynos A9 is likely much more mature already. It could be we see a shift in performance as the code catches up to the hardware capabilities. Our wonderful devs will be a huge influence on this and where it goes. I know I would want to develop to take advantage of the A15 feature sets because they are in addition to the A9 feature sets that they both support.
My hope is that anyone who is trying to make a good purchasing decision is doing so with some intent. Going with an EU SGSIII when you want to take advantage of LTE data is going to cause you heartache. It cannot and will not work on your LTE network. Likewise, if you live somewhere LTE doesn't exist, or you simply don't care to have that ability, buying the US SGSIII may not be the best choice all things considered. So in some cases the CPU might not be the gating item that causes you to choose one way or another.
Today's smartphones are powerful devices. In today's wireless world, our hardware choice is often a two-year commitment, no small thing to some. If you have specific requirements for your handset, you should know you have options. But you should also be able to make an educated decision. The choice is yours; do with it what you will.
SlimJ87D said:
[attached AnTuTu benchmark screenshot]
One thing you won't see me doing is using pointless synthetic benchmarks to justify a purchase or position. I use my device heavily but not for games. Web browsing, multiple email, etc......I use LTE heavily. A device can get a bajillion points in <insert your choice of synthetic benchmark> but that doesn't make me any more productive. Those into gaming will probably be more comfortable with the EU Exynos Quad, I'll just say that up front....it has a stronger GPU, but this isn't about that. It would be nice to keep this thread about the technology, not synthetic benchmark scores.
This is not a benchmark comparison thread, as plainly put in the OP. Please create a synthetic benchmark thread for synthetic benchmark comparisons. Please read the OP before commenting. I was really hoping you were going to offer more technical information to contribute, as you seem to be up to date on things. I expected more than a cut-and-paste "me too" synthetic benchmark from you....congrats, you can now run AnTuTu faster....
Thanks for the info, but Qualcomm's architecture does not quite follow ARM's blueprint/guidelines. They made huge modifications on their first AP (Snapdragon S1) to push past 1 GHz, and it caused power-efficiency, application-compatibility, and heat issues compared to Samsung's legit 1 GHz Hummingbird. And for some reason Qualcomm decided to keep improving their malfunctioning architecture (Scorpion) instead of throwing it away, and that inferiority continued through all Scorpion chips regardless of generation. Their only selling point and benefit was the one-less-chip solution, and the LTE baseband chip nowadays.
Personally I don't think the S4 is based on the A15 architecture, and it is slower than the international Note's Exynos in many benchmarks/reviews.
The Exynos in the GS3 is made on a 32nm node, which is better than the 45nm one in the Note. I don't think Samsung has figured out the ideal scheduler for Android and its applications to use those four cores efficiently yet, but it will show a significant performance boost over coming updates, as happened in the GS2's case.
Sent from my SAMSUNG-SGH-I717 using xda premium
Radukk said:
Thanks for the info, but Qualcomm's architecture does not quite follow ARM's blueprint/guidelines. They made huge modifications on their first AP (Snapdragon S1) to push past 1 GHz, and it caused power-efficiency, application-compatibility, and heat issues compared to Samsung's legit 1 GHz Hummingbird. And for some reason Qualcomm decided to keep improving their malfunctioning architecture (Scorpion) instead of throwing it away, and that inferiority continued through all Scorpion chips regardless of generation. Their only selling point and benefit was the one-less-chip solution, and the LTE baseband chip nowadays.
Personally I don't think the S4 is based on the A15 architecture, and it is slower than the international Note's Exynos in many benchmarks/reviews.
The Exynos in the GS3 is made on a 32nm node, which is better than the 45nm one in the Note. I don't think Samsung has figured out the ideal scheduler for Android and its applications to use those four cores efficiently yet, but it will show a significant performance boost over coming updates, as happened in the GS2's case.
Sent from my SAMSUNG-SGH-I717 using xda premium
The fact that both CPUs are modified versions of their ARM-derived baselines is captured in the OP, as is the fact that most if not all comparisons reference the 40nm Exynos as opposed to the newer 32nm process.
Thanks
Why would the Windows environment even matter at this moment?
Isn't MS setting the hardware specs for the ARM versions of the devices?
As for LTE compatibility, it's supposedly getting released in the Korean market with LTE and 2GB of RAM, and this was the speculation from the beginning.
Specific discussion of the processors is different from general discussion on comparison.
Thread cleaned. Please keep to this topic.
jamesnmandy said:
When looking at the two, one must understand that some of the documentation available compares what was available at the time it was written. In most cases, A9 info is based on the 40nm manufacturing process. Samsung's Exynos is built using the newer 32nm HKMG manufacturing process. The Qualcomm S4 Krait is built on a newer, smaller 28nm manufacturing process. Generally speaking, the smaller the process, the less heat and power a CPU will generate because of the much denser transistor count. There is also power leakage, etc......not going to get into it because, frankly, I haven't read enough to speak to it much.
Software written for the A9-derived Exynos has been around for quite some time, as opposed to A15 feature code. I really don't know much about this myself, but I would expect that the A15-based solution is going to have much longer legs under it, since it supports everything the Exynos Quad does plus some. My expectation is that, with time, code will be better optimized for the newer A15 architecture, where the Exynos A9 is likely much more mature already. It could be we see a shift in performance as the code catches up to the hardware capabilities. Our wonderful devs will be a huge influence on this and where it goes. I know I would want to develop to take advantage of the A15 feature sets because they are in addition to the A9 feature sets that they both support.
First of all, Samsung's 32nm HKMG process is superior to and more power efficient than the TSMC 28nm that's being used in Krait right now, even if the feature size is slightly bigger. HKMG overall is an absolutely big step in reducing transistor leakage, and on top of that Samsung integrated a lot of low-level electrical engineering tricks to lower the power usage, such as load biasing. The quad-core with last-generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size, as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient in W/MHz, so it just proves how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for the A15 is absolute hogwash. You don't write software for the A15; the compiler translates it into instruction code, in which the A15 is equal to an A9, i.e. ARMv7-A. The only difference is the SIMD length, which is doubled/quadrupled, again something you don't really program for in most cases. The A15 brings mostly IPC and efficiency improvements, not a new instruction set that would warrant a difference in software, else nothing would be compatible.
DX9.3 compliance is senseless in that regard too; by the time we need any of that, the next generation of SoCs will be out, so I don't know why we're even bringing it up, as nobody's going to hack Windows 8 onto their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
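For what it's worth, the shared-ISA point above is visible from software: an A9 and an A15 device both report the same ARMv7 architecture. A quick check, assuming a standard Linux /proc layout and Python available on the device:

```python
# Both an A9 and an A15 report the same ARMv7 ISA to software.
# Quick look on any ARM Linux/Android device with a standard /proc layout:
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith(("Processor", "model name", "CPU architecture", "Features")):
            print(line.rstrip())
# Expect "CPU architecture: 7" on both designs; "Features" lists neon,
# vfpv3/vfpv4, etc. Apps built for armeabi-v7a run unchanged on either.
```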
AndreiLux said:
First of all, Samsung's 32nm HKMG process is superior to and more power efficient than the TSMC 28nm that's being used in Krait right now, even if the feature size is slightly bigger. HKMG overall is an absolutely big step in reducing transistor leakage, and on top of that Samsung integrated a lot of low-level electrical engineering tricks to lower the power usage, such as load biasing. The quad-core with last-generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size, as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient in W/MHz, so it just proves how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for the A15 is absolute hogwash. You don't write software for the A15; the compiler translates it into instruction code, in which the A15 is equal to an A9, i.e. ARMv7-A. The only difference is the SIMD length, which is doubled/quadrupled, again something you don't really program for in most cases. The A15 brings mostly IPC and efficiency improvements, not a new instruction set that would warrant a difference in software, else nothing would be compatible.
DX9.3 compliance is senseless in that regard too; by the time we need any of that, the next generation of SoCs will be out, so I don't know why we're even bringing it up, as nobody's going to hack Windows 8 onto their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
So, I am happy to make corrections when unbiased data is presented. I will look into some of your claims for myself and update accordingly, but as mentioned in the OP, if you would like to cite specific sources for anything, please include links. Thank you for your input. The entire point of the thread is to document the differences, because a lot of people seem to be looking at the choice as simply 4 cores vs 2, and in similar fashion they gravitate to the bigger number without understanding what they are buying into. Some of your statements claim "hogwash"; as mentioned, I am learning myself and hope to rid the post of any hogwash ASAP. I for one will be trying to get Windows Phone 8 to boot on it if possible; I tried to clarify in the OP that Windows Phone 8 may be an option, while Windows RT certainly looks to be a stretch. Thanks
Sent from my DROIDX using xda premium
AndreiLux said:
First of all, Samsung's 32nm HKMG process is superior to and more power efficient than the TSMC 28nm that's being used in Krait right now, even if the feature size is slightly bigger. HKMG overall is an absolutely big step in reducing transistor leakage, and on top of that Samsung integrated a lot of low-level electrical engineering tricks to lower the power usage, such as load biasing. The quad-core with last-generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size, as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient in W/MHz, so it just proves how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for the A15 is absolute hogwash. You don't write software for the A15; the compiler translates it into instruction code, in which the A15 is equal to an A9, i.e. ARMv7-A. The only difference is the SIMD length, which is doubled/quadrupled, again something you don't really program for in most cases. The A15 brings mostly IPC and efficiency improvements, not a new instruction set that would warrant a difference in software, else nothing would be compatible.
DX9.3 compliance is senseless in that regard too; by the time we need any of that, the next generation of SoCs will be out, so I don't know why we're even bringing it up, as nobody's going to hack Windows 8 onto their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
Which test in that review shows the 32nm 4412 being more efficient than the 28nm 8260 in the One S?
VT has nothing to do with dual-booting. That only requires a bootloader and, of course, operating systems that support sharing their space.
It might allow, with a lot of work, getting Android to run under WP8, or WP8 to run under Android, in a virtual machine.
The most interesting feature you could achieve with VT is to have two copies of the same operating system running, each with their own data, cache, and storage partitions. This would allow corporate BYOD to remain more-or-less secure and enforce corporate policies on Exchange clients without requiring the user's private part of the phone to be affected by these restrictions.
However, after years of development, 3D performance under virtualization on x86 (desktop) platforms is mediocre at best, with Microsoft Hyper-V being, IMHO, the current winner.
Additionally, you claim that WP8 will work on the Qualcomm chip.
This is simply not true: it MIGHT work if a build for that exact SoC is produced (somewhat unlikely).
Since WP8 is closed-source and the hardware needs proprietary binaries too, it won't be portable.
This is because ARM, unlike x86, is not a platform with extensible capability and plug-and-play support, but rather an embedded ecosystem where the software has to be developed for each device individually.
nativestranger said:
Which test in that review shows the 32nm 4412 being more efficient than the 28nm 8260 in the One S?
You have the full-load test and the temperature in the link that I posted. Normalize them for battery size, for example to the One S (or the Asus Padfone for that matter, they're similar in their results) at 3.7V*1650mAh ≈ 6.1Wh, and the S3 at 3.8V*2100mAh ≈ 7.98Wh >> a 30.8% increase. Normalize the S3's 196 minutes by that and you get 149 minutes. Take into account that the S3's screen is bigger and higher resolution, and the result will be even more skewed towards the S3. So basically a last-generation quad-core at full load on all four cores is arguably toe-to-toe in maximum power dissipation with a next-generation dual-core. The latter should have been the winner here by a large margin, but it is not. We know it's not due to architectural reasons, so the only thing left is manufacturing. HKMG brings enormous benefits in terms of leakage, and here you can see them.
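That normalization is just battery energy in watt-hours; in Python, with the figures as quoted:

```python
# Battery-size normalization from the post above (figures as quoted).
def watt_hours(volts, mah):
    return volts * mah / 1000.0

s3    = watt_hours(3.8, 2100)  # ~7.98 Wh
one_s = watt_hours(3.7, 1650)  # ~6.1 Wh
factor = s3 / one_s            # ~1.31: the S3 carries ~31% more energy
print(f"S3 battery is {factor:.2f}x the One S's")
print(f"S3's 196 min at full load normalizes to ~{196 / factor:.0f} min")  # ~150
```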
d4fseeker said:
VT has nothing to do with dual-booting. That only requires a bootloader and, of course, operating systems that support sharing their space.
It might allow, with a lot of work, getting Android to run under WP8, or WP8 to run under Android, in a virtual machine.
The most interesting feature you could achieve with VT is to have two copies of the same operating system running, each with their own data, cache, and storage partitions. This would allow corporate BYOD to remain more-or-less secure and enforce corporate policies on Exchange clients without requiring the user's private part of the phone to be affected by these restrictions.
However, after years of development, 3D performance under virtualization on x86 (desktop) platforms is mediocre at best, with Microsoft Hyper-V being, IMHO, the current winner.
Additionally, you claim that WP8 will work on the Qualcomm chip.
This is simply not true: it MIGHT work if a build for that exact SoC is produced (somewhat unlikely).
Since WP8 is closed-source and the hardware needs proprietary binaries too, it won't be portable.
This is because ARM, unlike x86, is not a platform with extensible capability and plug-and-play support, but rather an embedded ecosystem where the software has to be developed for each device individually.
Changed the text to read:
From what I understand Windows Phone 8 may be an option.
and
Actual VT implementation may be limited in usefulness, to be seen.
TSMC is struggling with their 28nm node and failed to bring yield rates up on a High-K Metal Gate process, so they announced they will keep 28nm SiON for now. The problem is that as a node becomes denser, the leakage rate increases geometrically. HKMG itself reduces that leakage roughly 100-fold, as stated by GlobalFoundries and Samsung. That is why 32nm HKMG is far superior to 28nm SiON. You can easily find related articles, PR data, and detailed charts about this.
Sent from my SAMSUNG-SGH-I717 using xda premium
Radukk said:
TSMC is struggling with their 28nm node and failed to bring yield rates up on a High-K Metal Gate process, so they announced they will keep 28nm SiON for now. The problem is that as a node becomes denser, the leakage rate increases geometrically. HKMG itself reduces that leakage roughly 100-fold, as stated by GlobalFoundries and Samsung. That is why 32nm HKMG is far superior to 28nm SiON. You can easily find related articles, PR data, and detailed charts about this.
Sent from my SAMSUNG-SGH-I717 using xda premium
Will update with linkage when I can; kinda busy with work. If you have links to share, please do.
Sent from my DROIDX using xda premium
jamesnmandy said:
Will update with linkage when I can; kinda busy with work. If you have links to share, please do.
Sent from my DROIDX using xda premium
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
Radukk said:
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
Interesting reading. Thanks! :thumbup:
Sent from my GT-I9300 using xda premium
Radukk said:
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
For future reference, I would never use Wikipedia as a source of fact. Thanks for the other link; will update as soon as I can.
Sent from my DROIDX using xda premium
jamesnmandy said:
For future reference, I would never use Wikipedia as a source of fact. Thanks for the other link; will update as soon as I can.
Sent from my DROIDX using xda premium
You know, there's always the sources at the bottom of every Wikipedia article...
AndreiLux said:
You know, there's always the sources at the bottom of every Wikipedia article...
You are of course correct, which is why I always drill down and link to the sources, not the article. Just personal preference, I suppose, but this isn't my idea; I think linking to Wikipedia as a source of fact is generally frowned upon.
no worries

Will it be possible to have 2 CPUs?

Will it be possible to have 2 CPUs on the Ara? It would be a beast if it could. (P.S. Sorry if I have mistakes!)
51r said:
Will it be possible to have 2 CPUs on the Ara? It would be a beast if it could. (P.S. Sorry if I have mistakes!)
I highly doubt it; I reckon the device would heat up so much and consume so much battery. Plus I think it will take a much longer time for two mobile CPUs to play nice with each other.
I doubt it. It'd be cool if you could, though, but I still see no point as to why.
Yes, you could. The problem is that Android is not written to really use those two processors (it's only recently getting support for dual cores, much less quad), so it would just be a waste of energy and space.
good post
riahc3 said:
Yes, you could. The problem is that Android is not written to really use those two processors (it's only recently getting support for dual cores, much less quad), so it would just be a waste of energy and space.
I was going to suggest dual-core. You beat me to it. Your post is good info; just like not jumping on the 64-bit bandwagon before devices have 8 or more GB of RAM [not storage].
I'm sure it would be great to have two CPUs, but I feel like all that power would go to waste. I'm sure it could bring more development, but still, what are you going to do with two CPUs at the current clock speeds we have now? The newest Kindle Fire is more powerful than my computer. I'm sure quad cores are quite enough for phones; I can't believe they make octa-cores, it's a huge waste.
Dual processors in Project Ara devices.
Actually, from a functional standpoint, I see no reason to have dual CPUs. Android can't make use of a dual-processor system, and if it could, what benefit would it provide in real-world use?
The system as it is is too inefficient at handling CPU commands to support the demands of a dual-CPU device.
With a dual-CPU device, you also need to design additional power-control regulation and filtering, additional battery support, and ASIC devices to control the processor when demand is not being called upon. This adds a lot to the base architecture, and is not really a financial benefit for a healthy profit margin. When you have finite board real estate for each individual module, you can't simply "design in" additional power-control circuitry and maintain the same, or similar, board dimensions; something has to give.
If we had everything we desired in a single device, I guarantee that device would be dimensionally unusable, the form factor would grow, costs would multiply, and with every feature added as "standard", you would need to drag around an automotive-sized battery to operate all the options and features.
Personally, I'd prefer a robust RF section, and then a modular antenna system that uses PIN diodes so I can select internal or external antennas if I desire. Next, I would like to have Bluetooth access to the entire phone system and file structure, so I can, in essence, "clone" my phone's parameters in a lab environment for testing application and RF system compatibility.
The RF module should come standard with ALL known and used modulations, bands, and codings, such as CDMA, GSM, WCDMA2000, TDMA, CQPSK, and even the 450 bands for Euro networks. Heck, I'd even like to see P25 thrown in for good measure, along with LTR and EDACS and OpenSky! (I work with a LOT of RF radio networks, including trunked systems, so of course I would love to have them all at my fingertips.)
Off-network communications are always a desire when you are in areas not served by cell sites, and point-to-point comms are always useful.
Instead of sacrificing capabilities, how about increasing usefulness instead?
Dual, quad, octa, or more CPU cores can fit in one module, I guess, and yes, Android can't make use of dual CPUs the way servers do.
2 CPUs, 1 phone
Sent from my Nexus 5 using Tapatalk
Maybe utilize a 4.0 GHz overclocked x64 CPU?
Since Google just helped develop a new CPU for Ara, this may be possible now.
I could see 2 CPUs as an either/or situation: under heavy load, use the one geared for performance; with the screen off or in battery-saving mode, use a decent single core that's geared towards battery life.
The thing about Project Ara is that the aim seems to be to bring smartphones to the level of customization that we see in PCs. We could very well see some manufacturers who get on board with Ara eventually make SoCs that support dual processors, if they feel there is demand for it. Another interesting thought is whether there could come about a project where we design our own SoCs. Technically it's already possible if you are a hardware developer. I looked into what it would take to do it once, and from what I found, it looks like you have to be a hardware developer, own your own company, and form a partnership with a chipset maker (e.g. Intel).
Current apps don't even use all 4 cores properly, let alone a second CPU.
Sent from my GT-N7100 using XDA Premium 4 mobile app
Perhaps software in the system settings could detect the second CPU and allow you to allocate more/less power to separate processes and assign different apps to different CPUs (a sketch of that idea follows below).
Sent from my GT-S7560M using XDA Free mobile app
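Per-process CPU assignment along those lines already exists in stock Linux (which Android runs on) as CPU affinity. A minimal sketch of the idea, assuming a Linux environment with Python, at least four cores, and purely illustrative core numbering:

```python
import os

# CPU affinity on Linux: restrict a process to a chosen set of cores.
# Hypothetical policy: demanding apps on "fast" cores, background work on
# "slow" ones. The core numbers below are purely illustrative.
FAST_CORES = {0, 1}
SLOW_CORES = {2, 3}

def assign(pid, demanding):
    # The kernel scheduler will only run `pid` on the given cores afterwards.
    os.sched_setaffinity(pid, FAST_CORES if demanding else SLOW_CORES)

assign(os.getpid(), demanding=False)      # park ourselves on the slow cores
print(os.sched_getaffinity(os.getpid()))  # -> {2, 3}
```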
I think that 2 cores is possible; 2 CPUs depends on whether Android can run them.
------------------------------------------------
Projectaratalk.com - a forum for google project ara users and developers
Since the Ara uses the Tegra X1, there's a great chance it has 2 CPUs.
Imagine how powerful this phone will get in 1-2 years .. :thumbup::thumbup:
Sent from my GT-I8730 using XDA Free mobile app

Snapdragon 400 Quad vs Snapdragon S4 Pro

Simple question: I purchased a Moto G GPE and it's on its way, but I'm curious how the Snapdragon 400 quad-core SoC compares to something like the S4 Pro that was in the Nexus 4, which ran great. I know it's a newer chip, but I haven't found any real evidence on whether the architecture is superior or inferior to the S4 Pro, since that was a flagship chip while the 400 is budget. Anyone have any insight on this? Thanks.
The best way to compare the two:
http://en.wikipedia.org/wiki/Snapdragon_(system_on_chip)
mike21pr said:
Simple question: I purchased a Moto G GPE and it's on its way, but I'm curious how the Snapdragon 400 quad-core SoC compares to something like the S4 Pro that was in the Nexus 4, which ran great. I know it's a newer chip, but I haven't found any real evidence on whether the architecture is superior or inferior to the S4 Pro, since that was a flagship chip while the 400 is budget. Anyone have any insight on this? Thanks.
The Snapdragon 400 is positioned more like the S4 Plus, not the Pro, in terms of marketing.
Speaking in technical terms, however, those two SoCs have completely different architectures. The S4 Plus always has Krait cores, while our S400 has Cortex-A7 cores, which are generally somewhat slower than Krait, partially because Krait is an out-of-order architecture while the Cortex-A7 is in-order (see the toy example below).
In terms of cores, the S4 Plus always had two cores (up to 1.5 GHz), while our S400 has four cores @ 1.2 GHz.
Now go figure what's faster. I'd personally prefer two Kraits.
Note: Qualcomm really messed up the naming in the S400 series, as the chips can contain either two Krait cores or four Cortex-A7 cores. Don't get me wrong, the Cortex-A7 is a great architecture, mostly because of its performance/power-consumption ratio, but the SoCs containing those cores should have been labeled Snapdragon 350 or something like that.
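A toy illustration of that in-order vs. out-of-order difference, using a made-up four-instruction program (a sketch, not a real pipeline model):

```python
# Made-up 4-instruction program: (name, depends_on, latency_in_cycles).
prog = [("a", None, 3), ("b", "a", 1), ("c", None, 1), ("d", None, 1)]

def in_order_cycles(prog):
    t = 0
    finish = {}
    for name, dep, lat in prog:
        start = max(t, finish.get(dep, 0))  # issue strictly in program order
        t = start + lat
        finish[name] = t
    return t

print(in_order_cycles(prog))  # 6 cycles: "c" and "d" sit behind the a->b chain
# An out-of-order core can issue "c" and "d" while "a"'s 3-cycle result is
# still pending, finishing in ~4 cycles: same ISA and clock, higher IPC.
```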

GS7 Edge Processor

So the Qualcomm Snapdragon 820 is quad-core. Why did Qualcomm decide to go with a quad-core over an octa-core or hexa-core? How would that affect the GS7/GS7 Edge if it were octa-core or hexa-core? How much of a difference is there between the Exynos and the 820?
Indeed, the Snapdragon 820 is a quad-core SoC, unlike most recent SoCs, which have featured 8 cores (2 clusters of 4). However, most big.LITTLE SoCs like the Exynos 8890 and 7420 use 4 low-power, slower cores and 4 high-performance but power-hungry cores, and the two clusters are completely different architectures. This means that while the Exynos 8890, for instance, has 8 cores, only 4 of them are really designed for performance; the other 4 are designed to save power.

The 820 is different. It's also a sort of big.LITTLE setup with 2 clusters of 2 cores, but both clusters are the identical architecture; one cluster is simply clocked lower and has a different L2 cache configuration in order to use less power. On top of that, the custom cores in the 820 are faster per core than the Exynos 8890's, so clock for clock the 820 would win against the high-power cluster of the Exynos. In heavily multithreaded situations, though, the Exynos can still tap into all 8 cores at the same time, which should give it an advantage in that scenario. For the rest of the time, I would imagine the 4 faster cores of the Snapdragon would be better suited to everyday stuff.

As for why they only went with 4: my guess is cost and power efficiency. Kryo is a brand-new architecture, while Krait went through many iterations. Kryo will probably see a noticeable reduction in its power envelope in the next iteration, which would make shoving more cores onto an SoC a more viable option.

As for the GPU, all signs point to the Snapdragon's Adreno beating the Mali in the Exynos at the moment. Development will also be better on the Snapdragon device, as Qualcomm releases the proprietary vendor binaries and Samsung does not. This means the likelihood of seeing CM or AOSP on an Exynos variant is slim. Hope this helps!
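To visualize the two layouts described above (core counts and clocks are the commonly reported figures; treat them as approximate):

```python
# Toy summary of the two GS7 SoC layouts (approximate public figures).
socs = {
    "Snapdragon 820": {"fast": (2, 2.15, "Kryo"), "slow": (2, 1.6, "Kryo")},
    "Exynos 8890":    {"fast": (4, 2.3, "Exynos M1"), "slow": (4, 1.6, "Cortex-A53")},
}

for name, clusters in socs.items():
    total = sum(count for count, _, _ in clusters.values())
    count, ghz, arch = clusters["fast"]
    print(f"{name}: {total} cores total; performance cluster = {count}x {arch} @ {ghz} GHz")
# The 8890 can throw 8 cores at parallel work; the 820's two clusters share
# one core design and differ mainly in clock and L2 cache configuration.
```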
Actually, the Kryo cores are (slightly) better at running single-threaded tasks, while the Exynos cores are (slightly) better at running multi-threaded tasks. I doubt everyday users will notice.
The Adreno is also more powerful than the Mali GPU, though everyday users will notice more of a performance improvement from applications using the Vulkan API versus regular applications than from anything between these two GPUs.
Finally, memory management seems much better on the Exynos 8890 for some reason (about twice as fast). Since the same memory chips are used, I wonder whether it's a software or a hardware implementation difference; both units are plenty fast, though.
The real difference between these SoCs will be seen in power management efficiency. In fact, both variants are overpowered in every aspect as far as regular usage goes, so there is little point in comparing which one's the fastest. Instead, you need to wonder which one is the most conservative with power consumption while achieving equivalent performance.
Both the GPUs on these SoCs support the Vulkan API. And, whilst the Adreno is faster in terms of pure benchmark numbers, I very much doubt there will be a noticeable difference in any game or application, Vulkan or otherwise, that will be released during the lifetime of these phones.
Yeah, that does help explain it. Thanks. I just hope there isn't a TSMC vs Samsung difference like with the iPhone 6S/6S Plus SoC.
I was thinking the 8-core Snapdragon 810 was overheating and thermal throttling, so they went with 4 cores instead on the Snapdragon 820.
Just my thoughts.
