Should Intel Be Worried? - Galaxy S6 General

I know, this is one of those silly little topics that gets thrown around every time a newer, faster ARM chip comes out, but this is the first time that I personally have ever seen an ARM chip as a threat to Intel. When I saw the Galaxy S6 scoring around 4800 in multi-core I stopped and thought to myself, "hey, that looks pretty darn close to my fancy i5." Sure enough, the i5-5200U only scores around 5280 in the Geekbench 64-bit multi-core benchmark. I understand that this is only possible because the Galaxy S6 has 8 cores, but it's still very impressive what ARM and Samsung were able to achieve using a fraction of the power Intel has on hand. Of course I don't think that this chip will take over the market, but if ARM's performance continues to increase at the same rate while maintaining the same low power draw, then Intel might have some real competition in the laptop space in the near future. Heck, maybe Microsoft will bring back RT but with full app support.
I also know that I didn't account for how much power the GPU was drawing, but I feel as if that wouldn't be the only factor after seeing the issues with Core M.
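Just to make the per-core picture explicit, here is a rough back-of-the-envelope check in Python using the approximate scores quoted above. Geekbench multi-core does not scale perfectly with core count, so treat this as illustrative only:
Code:
# Rough per-core view of the Geekbench figures quoted above (approximate scores).
exynos_7420 = {"score": 4800, "cores": 8}   # Galaxy S6
i5_5200u    = {"score": 5280, "cores": 2}   # Broadwell-U, 2 cores / 4 threads

for name, chip in (("Exynos 7420", exynos_7420), ("i5-5200U", i5_5200u)):
    print(f"{name}: ~{chip['score'] / chip['cores']:.0f} Geekbench points per core")

# ~600 vs ~2640 points per physical core: the multi-core totals are close,
# but each Broadwell core is still doing far more work on its own.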

I doubt they're worried. Intel CPUs are wicked fast. I have a 3-year-old i7 and it's faster than most of AMD's current-gen CPUs.
If Intel is able to apply the same methods/engineering they use on CPUs to the mobile platform, I bet it will smoke anything out there, kind of like how Intel CPUs beat basically anything AMD can put out.

tcb4 said:
I know, this is one of those silly little topics that gets thrown around every time a newer, faster ARM chip comes out, but this is the first time that I personally have ever seen an ARM chip as a threat to Intel. When I saw the Galaxy S6 scoring around 4800 in multi-core I stopped and thought to myself, "hey, that looks pretty darn close to my fancy i5." Sure enough, the i5-5200U only scores around 5280 in the Geekbench 64-bit multi-core benchmark. I understand that this is only possible because the Galaxy S6 has 8 cores, but it's still very impressive what ARM and Samsung were able to achieve using a fraction of the power Intel has on hand. Of course I don't think that this chip will take over the market, but if ARM's performance continues to increase at the same rate while maintaining the same low power draw, then Intel might have some real competition in the laptop space in the near future. Heck, maybe Microsoft will bring back RT but with full app support.
I also know that I didn't account for how much power the GPU was drawing, but I feel as if that wouldn't be the only factor after seeing the issues with Core M.
It is important to remember that ultimately the same constraints and limitations apply to both Intel and ARM CPUs. After all, ARM and x86 are just instruction set architectures. There is no evidence to suggest that ARM is somehow at a significant advantage over Intel in terms of increasing performance while keeping power low. It is generally accepted now that ISAs have a negligible impact on IPC and performance per watt. Many of the newer ARM SoCs, like the Snapdragon 810, are having overheating issues themselves. The higher performance Nvidia SoCs with impressive performance are using 10+ watt TDPs too.
Also, it is always a bit tricky to make cross-platform and cross-ISA CPU comparisons in benchmarks like GeekBench, and for whatever reason Intel CPUs tend to do relatively poorly in GeekBench compared to other benchmarks. You can try to compare other real world uses between the i5-5200U and the Exynos 7420, and I can assure you that the tiny Exynos will be absolutely no match for the much larger, wider and more complex Broadwell cores. Don't get me wrong, the Exynos 7420 is very impressive for its size and power consumption, but I don't think we can take that GeekBench comparison seriously.
The fastest low power core right now is without a doubt the Broadwell Core M, which is a 4.5 watt part. This is built on Intel's 14nm process, which is more advanced than Samsung's.
http://www.anandtech.com/show/9061/lenovo-yoga-3-pro-review/4
"Once again, in web use, the Core M processor is very similar to the outgoing Haswell U based Yoga 2 Pro. Just to put the numbers in a bit more context, I also ran the benchmarks on my Core i7-860 based Desktop (running Chrome, as were the Yogas) and it is pretty clear just how far we have come. The i7-860 is a four core, eight thread 45 nm processor with a 2.8 GHz base clock and 3.46 GHz boost, all in a 95 watt TDP. It was launched in late 2009. Five years later, we have higher performance in a 4.5 watt TDP for many tasks. It really is staggering."
"As a tablet, the Core M powered Yoga 3 Pro will run circles around other tablets when performing CPU tasks. The GPU is a bit behind, but it is ahead of the iPad Air already, so it is not a slouch. The CPU is miles ahead though, even when compared to the Apple A8X which is consistently the best ARM based tablet CPU.
"
tft said:
I doubt they're worried. Intel CPUs are wicked fast. I have a 3-year-old i7 and it's faster than most of AMD's current-gen CPUs.
If Intel is able to apply the same methods/engineering they use on CPUs to the mobile platform, I bet it will smoke anything out there, kind of like how Intel CPUs beat basically anything AMD can put out.
This.
All of the little Atom CPUs we see in mobile right now are much smaller, narrower and simpler cores than Intel Core chips. Once you see Intel big cores trickle down into mobile, it will get much more interesting.

Intel will catch up...quick too, just watch. They've been working on 64-bit for over a year now...and they're already onto 14nm. Qualcomm should be worried, I don't think they're ready for this competition. They talked trash about octa cores and 64-bit...now they're doing both and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems
Sent from my SM-G920T using XDA Free mobile app

rjayflo said:
Intel will catch up...quick too, just watch. They've been working on 64-bit for over a year now...and they're already onto 14nm. Qualcomm should be worried, I don't think they're ready for this competition. They talked trash about octa cores and 64-bit...now they're doing both and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems
Sent from my SM-G920T using XDA Free mobile app
Technically Intel and AMD have had 64-bit for well over a decade now with AMD64/EM64T, and many Intel mobile processors have had it for years, so the HW has supported it for a while, but 64-bit enabled tablets/phones didn't start shipping until very recently.
Indeed Intel has been shipping 14nm products since last year, and their 14nm process is more advanced than Samsung's. Note that there is no real science behind naming a process node, so terms like "14nm" and "20nm" have turned into purely marketing material. For example, TSMC's 16nm isn't actually any smaller than their 20nm process. Presumably Intel's 14nm also yields higher and allows for higher performance transistors than Samsung's 14nm.
It is likely that Samsung has the most advanced process outside of Intel, however. I do agree that Qualcomm is in a bit of trouble at the moment, with players like Intel really growing in the tablet space and Samsung coming out with the very formidable Exynos 7420 SoC in the smartphone space. The SD810 just isn't cutting it and has too many problems. Qualcomm should also be concerned that both Samsung and Intel have managed to come out with high-end LTE radios; this was something that Qualcomm pretty much had a monopoly on for years. Intel now has the 7360 LTE radio and Samsung has the Shannon 333 LTE.

rjayflo said:
Intel will catch up...quick too, just watch. They've been working on 64-bit for over a year now...and they're already onto 14nm. Qualcomm should be worried, I don't think they're ready for this competition. They talked trash about octa cores and 64-bit...now they're doing both and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems
I agree about Qualcomm; I actually mentioned that some time ago.
I think what happened to Nokia/BlackBerry will happen to Qualcomm: they got huge and stopped innovating and ended up being left in the dust. Perhaps Qualcomm thought they had a monopoly and that Samsung and other device makers would continue to buy their chips..
In the end, I think the only thing Qualcomm will have left is a bunch of patents..

I understand that Core M is a powerful part, but I'm not sure I believe their TDP figures. I am, however, more inclined to believe Samsung, as they are achieving this performance with an SoC that is inside a phone; in other words, they don't have the surface area to dissipate large quantities of heat. Nvidia has always skewed performance-per-watt numbers, and, as a result, they haven't been able to put an SoC in a phone for years. Now, the reason I doubt Intel's claims is because of battery life tests performed by reviewers and because of the low battery life claims made by manufacturers. For instance, the new MacBook and Yoga 3 Pro aren't showing large improvements in battery life when compared to their 15W counterparts.
I'm not sure how I feel about the iPad comparison though; I feel as if you just compounded the issue by showing us a benchmark that was not only cross platform, but also within different browsers.
Also, I think I understand what you mean about how an ISA will not directly impact performance per watt, but is it not possible that Samsung and ARM could just have a better design? I mean Intel and AMD both utilize the same instruction set, but Intel will run circles around AMD in terms of efficiency. I may be way off base here, so feel free to correct me.
I think that Qualcomm is busy working on a new Krait of their own, but right now they're in hot water. They got a little lazy milking 32-bit chips, but once Apple announced their 64-bit chip they panicked and went with an ARM design. We'll have to see if they can bring a 64-bit Krait chip to the table, but right now Samsung's 7420 appears to be the best thing on the market.

tcb4 said:
I understand that Core M is a powerful part, but I'm not sure I believe their TDP figures. I am, however, more inclined to believe Samsung, as they are achieving this performance with an SoC that is inside a phone; in other words, they don't have the surface area to dissipate large quantities of heat. Nvidia has always skewed performance-per-watt numbers, and, as a result, they haven't been able to put an SoC in a phone for years. Now, the reason I doubt Intel's claims is because of battery life tests performed by reviewers and because of the low battery life claims made by manufacturers. For instance, the new MacBook and Yoga 3 Pro aren't showing large improvements in battery life when compared to their 15W counterparts.
Technically the Core M will dissipate more than 4.5W for "bursty" workloads, but under longer steady workloads it will average to 4.5W. The ARM tablet and phone SoCs more or less do the same thing. In terms of actual battery life test results, yes, the battery life of most of these devices hasn't really changed since the last-generation Intel U series chips, but that isn't a real apples-to-apples comparison. As SoC power consumption continues to drop, it is becoming a smaller and smaller chunk of total system power consumption. Lenovo did a poor job IMO in implementing the first Core M device, but Apple will almost certainly do a much better job. The SoC is only one part of the system; it is the responsibility of the OEM to properly package up the device and do proper power management, provide an adequate battery, etc. Yes, the new MacBook doesn't get significantly longer battery life, but it also weighs only 2.0 lbs and has a ridiculously small battery. It also has a much higher resolution and more power hungry screen, and yet manages to keep battery life equal with the last generation. Benchmarks have also indicated that the newer 14nm Intel CPUs are much better at sustained performance compared to the older 22nm Haswells. This is something that phones and tablets are typically very poor at.
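To illustrate the bursty-versus-sustained point with numbers, here is a toy average-power calculation in Python. The burst power, idle power and durations are invented for illustration, not Intel's figures:
Code:
# Toy average-power calculation for a bursty workload (assumed numbers only).
burst_power_w, burst_s = 9.0, 2.0    # hypothetical short turbo burst
idle_power_w, idle_s   = 1.5, 10.0   # hypothetical low-load stretch afterwards

avg_w = (burst_power_w * burst_s + idle_power_w * idle_s) / (burst_s + idle_s)
print(f"Average power over the window: {avg_w:.2f} W")   # ~2.75 W

# A chip can exceed its rated TDP for short bursts and still average below it
# over a longer window; sustained all-core load is what the 4.5 W rating must hold against.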
tcb4 said:
I'm not sure how I feel about the iPad comparison though; I feel as if you just compounded the issue by showing us a benchmark that was not only cross platform, but also within different browsers.
A very fair point; browser benchmarks are especially notorious for being very misleading. I think in this case Chrome was used in all cases, which helps a little. My point in showing this is that we need to take those GeekBench results with a little grain of salt. Outside of that benchmark, I don't think you'll find the A8X or Exynos 7420 getting anywhere near a higher-specced Core M, let alone an i5-5200U, in any real world use or any other benchmark, browser based or not. Even other synthetic benchmarks like 3DMark Physics, etc. don't show the Intel CPUs nearly as low as GeekBench does.
tcb4 said:
Also, I think I understand what you mean about how an ISA will not directly impact performance per watt, but is it not possible that Samsung and ARM could just have a better design? I mean Intel and AMD both utilize the same instruction set, but Intel will run circles around AMD in terms of efficiency. I may be way off base here, so feel free to correct me.
This is correct.
It is certainly possible for Samsung to have a design that is more power efficient than Intel's when it comes to making a 2W phone SoC, but that won't be because Samsung uses the ARM ISA while Intel uses x86. At this point, ISA is mostly just coincidental and isn't going to greatly impact the characteristics of your CPU. The CPU design and the ISA that the CPU uses are different things. The notion of "better design" is also a little tricky, because a design that may be best for a low power SoC may not necessarily be the best for a higher wattage CPU. Intel absolutely rules the CPU landscape from 15W and up. Despite all of the hype around ARM-based servers, Intel has continued to dominate servers and has actually continued to increase its lead in that space, since Intel's performance per watt is completely unmatched in higher performance applications. Intel's big core design is just better for that application than any ARM-based CPU. It is important to remember that just because you have the best performance-per-watt 2 watt SoC doesn't mean you can simply scale that design into a beastly 90 watt CPU. If it were that easy, Intel would have probably easily downscaled their big core chips to dominate mobile SoCs.
You frequently find people trying to reason that at 1.2 GHz Apple's A8 SoC is very efficient and fast, and then they claim that if they could clock that SoC at 3+ GHz it should be able to match an Intel Haswell core, but there is no guarantee that the design will allow such high clocks. You have to consider that maybe Apple made design choices to provide great IPC, but that IPC came at the cost of limiting clock frequencies.
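To put rough numbers behind that, here is a toy model in Python. The IPC, voltage and frequency values are made up purely for illustration; the point is only that performance scales linearly with clock while dynamic power grows roughly with frequency times voltage squared, and higher clocks usually demand higher voltage:
Code:
# Toy model: why "just clock the mobile core at 3 GHz" is not free.
# All numbers are illustrative assumptions, not measured values for any real chip.

def perf(ipc, freq_ghz):
    return ipc * freq_ghz              # relative performance ~ IPC * clock

def dyn_power(volt, freq_ghz, cap=1.0):
    return cap * volt ** 2 * freq_ghz  # classic CMOS dynamic power estimate: P ~ C * V^2 * f

low  = {"freq": 1.4, "volt": 0.9}      # hypothetical wide core at a phone-like clock
high = {"freq": 3.0, "volt": 1.2}      # same core pushed to a desktop-like clock (needs more voltage)

for name, c in (("1.4 GHz", low), ("3.0 GHz", high)):
    print(f"{name}: relative perf {perf(3.0, c['freq']):.1f}, "
          f"relative dynamic power {dyn_power(c['volt'], c['freq']):.2f}")

# Performance rises ~2.1x while dynamic power rises ~3.8x in this toy case,
# and that is before leakage, or the design simply failing to close timing at 3 GHz.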

Related

Unlimited proof of quad core and battery usage.

Low-power processing grunt for phones, netbooks, laptops & tablets.
It seems ARM and Nvidia have big plans for the future.
Newer small chipsets like these will sport some serious multi core action.
Here's the basic road map.
Tegra 3 (Kal-El) series
Processor: Quad-core ARM Cortex-A9 MPCore, up to 1.5 GHz
12-Core Nvidia GPU with support for 3D stereo
Ultra Low Power GPU mode
40 nm process by TSMC
Video output up to 2560×1600
NEON vector instruction set
1080p MPEG-4 AVC/h.264 High Profile video decode
The Kal-El chip (CPU and GPU) is to be about 5 times faster than Tegra 2
Estimated release date is now to be Q4 2011 for tablets and Q1 2012 for smartphones, after being set back from Nvidia's prior estimated release dates of Q2 2011, then August 2011, then October 2011
The Tegra 3 is functionally a quad-core processor, but includes a fifth "companion" core. All cores are Cortex-A9's, but the companion core is manufactured with a special low power silicon process. This means it uses less power at low clock rates, but more at higher rates; hence it is limited to 500 MHz. There is also special logic to allow running state to be quickly transferred between the companion core and one of the normal cores. The goal is for a mobile phone or tablet to be able to power down all the normal cores and run on only the companion core — using comparatively little power — during "standby" mode or when otherwise using little CPU. According to Nvidia, this includes playing music or even video content.
Tegra (Wayne) series
Processor: Quad-core/Octa-core ARM Cortex-A15 MPCore (octa core already)
Improved 24 (for the quad version) and 32 to 64 (for the octa-core version) GPU cores with support for DirectX 11+, OpenGL 4.x, OpenCL 1.x, and PhysX
28 nm
About 10 times faster than Tegra 2
To be released in 2012
Tegra (Grey) series
Processor: ARM Cortex MPCore
28 nm
Integrated Icera 3G/4G baseband
To be released in 2012
Tegra (Logan) series
Processor: ARM ?
Improved GPU core
28 nm
About 50 times faster than Tegra 2
To be released in 2013
Tegra (Stark) series
Processor: ARM ?
Improved GPU core
About 75 times faster than Tegra 2
To be released in 2014
THE TEGRA 3'S SECRET CORE? What does it do?
There's a not-so-dirty little secret about NVIDIA's upcoming Tegra 3 platform (which will soon find a home in plenty of mobile devices): the quad-core processor contained within has a fifth core for less intensive tasks.
In a paper published by NVIDIA, they provided in-depth details about their Variable Symmetric Multiprocessing (vSMP). Simply put, vSMP as implemented in Kal-El not only optimizes CPU multi-threading and multi-tasking for max performance and power efficiency at a moment's notice, but it offloads background tasks and less intensive CPU activities, such as background syncing/updating, music playback and video playback, to the fifth core, which runs at a considerably slower 500 MHz and therefore consumes considerably less power. Bottom line: battery life!
All five CPU cores are identical ARM Cortex A9 CPUs, and are individually enabled and disabled (via aggressive power gating) based on the workload. The "Companion" core is OS transparent, unlike current asynchronous SMP architectures, meaning the OS and applications are not aware of this core but automatically take advantage of it. This strategy saves significant software effort and avoids new coding requirements.
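To picture what that hand-off roughly looks like, here is a toy software model in Python. The load threshold, clocks and core counts are assumptions for illustration; the real switching is done by the chip's own logic below the OS, not by code like this:
Code:
# Toy model of vSMP-style core selection; not NVIDIA's actual algorithm.
COMPANION_MAX_GHZ = 0.5   # companion core clock cap
MAIN_MAX_GHZ = 1.5        # main core clock
LOW_LOAD = 0.20           # hypothetical: below 20% total load the companion core suffices

def select_cores(total_load, runnable_threads):
    """Return (active_main_cores, on_companion) for one load snapshot."""
    if total_load < LOW_LOAD and runnable_threads <= 1:
        return 0, True                                    # music, background sync, idle: park the big cores
    return min(4, max(1, runnable_threads)), False        # wake only as many main cores as needed

for load, threads in [(0.05, 1), (0.35, 1), (0.90, 4)]:
    mains, companion = select_cores(load, threads)
    if companion:
        print(f"load={load:.2f}, threads={threads} -> companion core @ {COMPANION_MAX_GHZ} GHz")
    else:
        print(f"load={load:.2f}, threads={threads} -> {mains} main core(s) @ up to {MAIN_MAX_GHZ} GHz")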
The Tegra 3 logic controller also has the power to dynamically enable and disable cores depending on the workload at any given time, making sure not to waste any power from unused cores. So what's the quantitative payoff?
NVIDIA ran the CoreMark benchmark on the Tegra 3 and pitted it against current industry chipsets, such as the TI OMAP4 and the Qualcomm 8x60 (take these with a grain of salt, obviously). They found that when handling the same workload, Tegra 3 consumed 2-3x less power than the competition. When running max performance tests, the Tegra 3 was twice as fast while still using less power.
Compared to the Tegra 2 chipset, the Tegra 3 CPU uses 61% less power while video playback is happening, and 34% less power during gaming activities. While most of the work is done by the GPUs in mobile devices, previous chipsets lacked the ability to ramp down the energy output of unused cores like Tegra 3 is purportedly able to do.
What I'm trying to say is that you should be excited for this mobile quad-core processor to arrive and not scared for your battery life.
5 cores already?! Here I am sitting with an aluminum foil ghetto heatsink between my X10 and battery to dissipate heat from 1.2GHz of increasingly mediocre, temp-bound performance, wondering when to jump ship to dual core. And now 5-core is just around the corner...
Nonetheless... thanks for the info omegaRED
This is going to be huge...
I bet you're going to see a lot of uneducated posts in this thread about more cores and battery usage, etc...
Lol
*waits for the **** storm*
scoobysnacks said:
This is going to be huge...
I bet you're going to see a lot of uneducated posts in this thread about more cores and battery usage, etc...
Lol
*waits for the **** storm*
here comes scooby again.lol/come back for everything
josephnero said:
here comes scooby again.lol/come back for everything
This is contributing to the thread how?
Let me explain this thing in layman's terms.
It has 5 cores.. 1 runs at 500MHz at all times and handles most background processes.
Video
Media
etc...
Due to its 500MHz speed, its battery usage is very low.
The other 4 CPUs running at 1.5GHz can handle all the good stuff..
games
PhysX
media processing
and they can be put into an off-like deep sleep (taken offline) when not required for the task.
The 500MHz CPU and ultra-low-power Nvidia GPU keep everything going when all 4 cores go offline.
Also the 28nm silicon wafer technology.
The power of this design can compete with any console on the market while keeping your device going for much, much longer than current chipsets.
OmegaRED^ said:
Let me explain this thing in layman's terms.
It has 5 cores.. 1 runs at 500MHz at all times and handles most background processes.
Video
Media
etc...
Due to its 500MHz speed, its battery usage is very low.
The other 4 CPUs running at 1.5GHz can handle all the good stuff..
games
PhysX
media processing
and they can be put into an off-like deep sleep (taken offline) when not required for the task.
The 500MHz CPU and ultra-low-power Nvidia GPU keep everything going when all 4 cores go offline.
Also the 28nm silicon wafer technology.
The power of this design can compete with any console on the market while keeping your device going for much, much longer than current chipsets.
Ooh I agree completely and already understand this stuff.
I'm in the industry..
this will definitely help those who argue that more cores equals more battery drain, and who don't understand power distribution.
Compared to a Core 2 Duo:
a hypothetical Tegra 3 @ 1.5GHz would have scored 17,028 points, again beating the Core 2 Duo using the same compiler settings. If we extend the projections to a hypothetical 2.5GHz Cortex-A9 chip, we arrive at 28,380 CoreMarks, which is the very least we should expect from Qualcomm's recently announced Cortex-A15 based chip at 2.5GHz.
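A quick sanity check of those projections in Python, assuming simple linear scaling with clock from the implied per-GHz figure (which real silicon rarely achieves):
Code:
# Linear-scaling check of the CoreMark projections quoted above.
coremarks_per_ghz = 11352   # implied baseline: 17,028 / 1.5

for ghz in (1.5, 2.5):
    print(f"{ghz} GHz -> {coremarks_per_ghz * ghz:,.0f} projected CoreMarks")

# 1.5 GHz -> 17,028 (the hypothetical Tegra 3 figure)
# 2.5 GHz -> 28,380 (the projected 2.5 GHz Cortex-A9/A15-class figure)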
There's always a bigger fish.
2.5Ghz.. O_O
The quad core is just a quad core.. but usage is 25% less.
Anyone that takes this over the Tegra3..
Enjoy the 30 second battery life.
It's a massive stride forward!!
But it's still too hungry.
I wish phone developers would shove their battery usage predictions and add 1 to 2 Amps to the projected figure.
sweet, I'll definitely look into new SE products, great hardware
Love the name of the processor, Kal-El
The future of smartphones is bright. Looking forward to purchasing a new phone in oh.... 2 years. When that time comes I'll make sure to investigate what's the best on the market rather than going in blind

[Q] Would u like Quad or Dual in your next note?

Since the launch of the SGS3 is around the corner and the next Note will probably come within the next few months, I thought of starting this thread to find out how many users prefer having the quad Exynos 4 (similar to the SGS3, based on the A9 architecture with a Mali-400 GPU and built on a 32nm manufacturing process) or the dual Exynos 5 (A15 architecture with a Mali-T604 GPU, probably built on a 28nm manufacturing process) in our next Note...
Cast your votes in the poll
You should put up a POLL, it would get more people interested. But for me, I'd rather get the A15 with the Mali T604, since the A15 is supposed to be 40% faster than the A9 and the Mali T604 will blow the Mali 400 away.
Definitely the dual A15 with Mali 604. No doubt.
Sent from my superior GT-N7000 using Tapatalk
I don't see any benefit to having a quad core CPU. Most apps don't even use the dual core.
Can't fault my Note at all. So just the new dual will do, with less battery drain
Sent from my GT-N7000 using xda premium
Quad! I don't care if I don't use it, and I don't care if I don't need it.
It just feels good to have that much power in the palm of your hand.
I'll benefit from that much power since I play games and I look forward to more capable emulators in the future.
I don't give a CRAP about the number of cores!
I want the most speed that's possible; if that would be with the dual I'd take that, if it's with the quad, then that's my way to go...
Can't vote in the poll because I want speed, and since it's not sure which one is faster I can't vote!
PS
I think the Exynos 5 will be released @ the end of this year, and the Exynos 4 tomorrow
If that's correct I'll go with the Exynos 4, I hate waiting
What the Note lacks is a decent GPU; the current GPU can't efficiently handle the 1280x800 pixels. However, what I want more than anything is 1. a non-PenTile screen that is FLAWLESS and 2. a bigger battery, ~3000 mAh like the RAZR Maxx. I would gladly sacrifice a few mm for a larger battery. I find it stupid how HTC decided to go with a slim and NON-REMOVABLE battery and storage to save a few mm. Seriously? This is why HTC is falling into a deep pit.
Exynos 5 dual, it has more power and is more efficient
Sent from my GT-N7000 using XDA
EASILY the A15 with the T-604! Come to papa!
The fastest clock speed and the best GPU are all that matters. A 2.2GHz dual core with a fab GPU will blow away a 10-core 1.0GHz with a bad GPU every day, every way.
How about the beast Quad Core A15 Exynos 5450 with Mali T-658? Ok, ok, I know technically it hasn't been built yet and will probably be for tablets, but wouldn't mind seeing it in the Note since it is a tab/phone hybrid.
But as for the current SoC's available now, I would take the A15 dual Exynos 5250 with Mali T-604.
More likely, I think Samsung's road map would be to release the flagship Galaxy S lines (in this case the GS 3) with the latest SoC's, then the next Note (Note 2 in this case) would get a slight spec bump based on the Galaxy S 3 with a faster clocked CPU/GPU combo of the Galaxy S 3 line 6 months later, then the GS4 would get next Gen SoCs with the Note 3 getting a spec bump of the GS 4 SoCs, etc.....
I am sorry.. but this amounts to techie circle jerking..
Quad core processors came out for the PC when not a single application could even use two cores, much less four.. Even today, several years later, for the very, very vast majority of applications it is hard to get a PC to run more than one and a bit processors.. My i7 snoozes, and even cranking up real-time low-latency audio (a stressful activity) it runs 2 processors at 30% and one at 5%
Therefore I frankly do not care if they put a hamster and a wheel inside the device...as long as the results in operation of the device meets my needs.
So, given my customer needs are for smoother, faster and more reliable operation with better battery life and an enhanced user experience, Samsung can put whatever they want into the device...
In saying that, decisions by the majority of folks are driven by what they think the specifications mean, rather than the impact or result of those specifications in real life usage, so while I am sure it's not necessary, the next Note will for sure have a quad core.
With a single core, my Galaxy S with ICS is snappier than my Note. Finally it's the software, I guess.
Sent from my GT-I9000 using XDA
Mystic38 said:
I am sorry.. but this amounts to techie circle jerking..
Quad core processors came out for the PC when not a single application could even use two cores, much less four.. Even today, several years later, for the very, very vast majority of applications it is hard to get a PC to run more than one and a bit processors.. My i7 snoozes, and even cranking up real-time low-latency audio (a stressful activity) it runs 2 processors at 30% and one at 5%
Therefore I frankly do not care if they put a hamster and a wheel inside the device...as long as the results in operation of the device meets my needs.
So, given my customer needs are for smoother, faster and more reliable operation with better battery life and an enhanced user experience, Samsung can put whatever they want into the device...
In saying that, decisions by the majority of folks are driven by what they think the specifications mean, rather than the impact or result of those specifications in real life usage, so while I am sure it's not necessary, the next Note will for sure have a quad core.
I agree. Android multitasking would need to be vastly different than what it is today, and on top of this the RAM specs need a major bump to even begin to show advantages in multi-core processing.
Also, like you said, it has not mattered for desktops and laptops what the real-world benefits are, just what the consumer feels about the value of their purchase. Nowadays it seems people are more concerned with the number of cores as opposed to the clock speed.
I do like the approach that TI has taken with the OMAP in dedicating low-power cores to low-power functions, and feel that it really has potential in mobile devices, but they seem to be a step behind when it comes to the bigger tasks of mobile processing. Intel being on the cusp of Haswell has me excited to see what they can do in this territory.
Dual Exynos 5 for me at the moment.
It'll be interesting to see how they market this dual core A15 processor, because Joe Public will always think more cores is better. I do feel though that the Note 2 might not have the same internals as the S3, like our Notes had the same as the S2. For the Note they seemed to put in all the best tech they had on offer at the time, so if the A15 is ready to go by November time then I think they'll defo use it unless something better is available.
Dual core with speed.
Quad cores mean squat if they slow the primary usage down.
I'd rather get a dual than a quad, even if it's on the same generation and process, so long as it is clocked higher. Give me a smaller process, newer-gen chip and better GPU? There is no choice.
I'd go for the i7-3960X and GTX 690 if they can squeeze that into the next Note, but I think I won't get a choice and will just end up with whatever Samsung puts into the Note 2.
Mystic38 said:
I am sorry.. but this amounts to techie circle jerking..
Quad core processors came out for the PC when not a single application could even use two cores, much less four.. Even today, several years later, for the very, very vast majority of applications it is hard to get a PC to run more than one and a bit processors.. My i7 snoozes, and even cranking up real-time low-latency audio (a stressful activity) it runs 2 processors at 30% and one at 5%
Therefore I frankly do not care if they put a hamster and a wheel inside the device...as long as the results in operation of the device meets my needs.
So, given my customer needs are for smoother, faster and more reliable operation with better battery life and an enhanced user experience, Samsung can put whatever they want into the device...
In saying that, decisions by the majority of folks are driven by what they think the specifications mean, rather than the impact or result of those specifications in real life usage, so while I am sure it's not necessary, the next Note will for sure have a quad core.
I agree with you.... The main reason I created this thread is that I wanted to know how many members actually understand how the system architecture and the manufacturing process affect the day-to-day performance of the device, battery consumption, etc. It was never about the software, but I know everything comes down to the OS, how deeply it is integrated with the hardware and how effectively it co-ordinates with it... this is why Apple's devices are snappier than Android ones... the problem here is that Samsung is more concerned about bringing more devices out than focusing on deep system integration... so the thread is only about the hardware... but discussion about the embedded systems is also welcome....
adelmundo said:
How about the beast Quad Core A15 Exynos 5450 with Mali T-658? Ok, ok, I know technically it hasn't been built yet and will probably be for tablets, but wouldn't mind seeing it in the Note since it is a tab/phone hybrid.
But as for the current SoC's available now, I would take the A15 dual Exynos 5250 with Mali T-604.
More likely, I think Samsung's road map would be to release the flagship Galaxy S lines (in this case the GS 3) with the latest SoC's, then the next Note (Note 2 in this case) would get a slight spec bump based on the Galaxy S 3 with a faster clocked CPU/GPU combo of the Galaxy S 3 line 6 months later, then the GS4 would get next Gen SoCs with the Note 3 getting a spec bump of the GS 4 SoCs, etc.....
I heard that the Note 10.1 tablet is being delayed because Samsung wanted the device with a quad rather than a dual... so there is a little chance that the next hybrid Note will come with some other spec....

Galaxy S III Processor Information

Disclaimer:
I make no assertion of fact on any statement I make except where repeated from one of the official linked to documents. If it's in this thread and you can't find it in an official document, feel free to post your corrections complete with relevant link and the OP can be updated to reflect the most correct information. By no means am I the subject matter expert. I am simply a device nerd that loves to read and absorb information on such things and share them with you. The objective of this thread is to inform, not berate, dis-credit, or otherwise talk trash about someone else's choice. Take that to a PM or another thread please.
There is a LOT of misconception in the community over what hardware is the more capable kit. They are not the same. Therefore comparing them in such a way can be difficult at best. The Ti White Sheet speaks to the many aspects of attempting to do such a thing. It is no small undertaking. Therefore I ask you trust their data before my opinion. However, I felt it necessary to have something resembling a one-stop thread to go to when you are wondering about how the hardware differs between the two devices.
One thing you won't see me doing is using pointless synthetic benchmarks to justify a purchase or position. I use my device heavily but not for games. Web browsing, multiple email, etc......I use LTE heavily. A device can get a bajillion points in <insert your choice of synthetic benchmark> but that doesn't make me any more productive. Those into gaming will probably be more comfortable with the EU Exynos Quad, I'll just say that up front....it has a stronger GPU, but this isn't about that. It would be nice to keep this thread about the technology, not synthetic benchmark scores.
Dictionary of Terms (within thread scope):
SGSIII: Samsung Galaxy S 3 smartphone, variant notwithstanding
Samsung: manufacturer, proprietor of the Galaxy S III smartphone. Also responsible for designing the Exynos cpu used in the International variant of the SGSIII.
ARM: Processor Intellectual Property Company, they essentially own the IP rights to the ARM architecture. The ARMv7 architecture is what many processors are based upon at the root, this includes the Exynos by Samsung and the Krait S4 by Qualcomm, as used in the SGSIII as well as many others. It's like the basic foundation with the A9 and A15 feature sets being "options" that Samsung and Qualcomm add on.
Qualcomm: Like Samsung, they are a manufacturer of processors, their contribution here is the S4 Krait cpu used in the US/Canadian market SGSIII smartphone.
CPU: processor, central processing unit, it's the number crunching heart of your phone, we are interested in two here, Samsung's Exynos and Qualcomm's Krait.
As most everyone knows by now, the EU and US variants of the SGSIII come with two different CPUs in them. The EU has the Samsung Exynos, the US the Qualcomm S4 Krait. One major reason, if not the only reason I am aware of, is the inability of Exynos to be compatible with LTE radio hardware. Qualcomm's S4 Krait, however, has the radio built into the package. It's an all-in-one design, where Exynos is a discrete CPU and has to depend on secondary hardware for network connectivity. Obviously there are power implications any time you add additional hardware, because of redundancy and typical losses.
However the scope of this thread is to point out some differences between the two very different cpu's so that you, the consumer, can make an educated decision based on more than a popularity contest or the "moar corez is bettar!" stance.
Anyone who is into computers fairly well knows that "core counting" as a determination of performance is risky at best. Just as with the megahertz wars of the 1990s....hopefully by now you all know not every 2GHz CPU is the same, and not every CPU core is the same. You cannot expect an Intel 2GHz CPU to perform the same as an AMD 2GHz CPU. It's all about architecture.
Architecture for the purpose of this thread is limited to the ARMv7 architecture and more specifically the A9 and A15 subsets of the architecture. Each architecture supports certain features and instruction sets. Additionally the internal physical parts of the core vary from one architecture to the next.
A9 is older technology in general while A15 is much newer. Exynos is A9 based, Krait S4 is A15 based. Let's look at the differences.
When looking at the two, one must understand that some of the documentation available is comparing what was available at the time it was written. In most cases the A9 info is based on the 40nm manufacturing process. Samsung's Exynos is built using the newer 32nm HKMG manufacturing process. Qualcomm's S4 Krait is built on a different, smaller TSMC 28nm manufacturing process. Generally speaking, the smaller the process, the less heat and power loss. There is also power leakage, etc......not going to get into it because frankly, I haven't read enough to speak to it much. But don't take my word for it.
There is a lot of information out there but here are a few links to good information.
Exynos 32nm Process Info
Qualcomm S4 Krait Architecture Explained
Ti A15 White Papers
ARM Cortex A9 Info
ARM Cortex A15 Info
Samsung Exynos 4412 Whitesheet
Exploring the Design of the A15 Processor
I could link you to all sorts of web benchmarks and such, but to be honest, none of them are really complete and I have not yet found one that can really give a unbiased and apples to apples comparison. As mentioned previously most of them will compare the S4 Krait development hardware to the older 40nm Samsung Exynos hardware......which really doesn't represent what is in the SGSIII smartphones.
Now a few take aways that to me stood out from my own research. If you are unable to read someone's opinion without getting upset please don't read on from here.
The Exynos EU variant that does not support LTE is on paper going to use more power and create more heat due to it simply needing to rely on additional hardware for its various functions, where the S4 Krait has the radio built in. This remains to be seen, but battery life would be the biggest implication here. Although Samsung's latest 32nm HKMG process certainly goes a long way towards leveling the playing field.
The Exynos variant is built on older A9 core technology and, when comparing feature sets, does not support things such as virtualization. Do you need VT for your phone? Only if the devs create an application for it, but I believe the ability to dual boot different OSes is much easier done with VT available.
In contrast, the S4 Krait core does support this feature. I would like to see about dual booting Windows Phone 8 and Android, and I hope having the hardware support and additional RAM (the EU version has 1GB of RAM, the US has 2GB) will help in this area. Actual VT implementation may be limited in usefulness, to be seen.
The S4 Krait/Adreno 225 package supports DirectX 9.3, a requirement for Windows RT/Windows Phone 8 (not sure if required for the Phone version). In contrast, the Exynos Quad/Mali-400 does not support DirectX 9.3 and may or may not be able to run Windows RT/Windows Phone 8 as a result. From what I understand Windows Phone 8 may be an option.
Code compiled for the A9 derived Exynos has been around for quite some time as opposed to A15 feature code. I really don't know much about this myself, but I would expect that the A15 based solution is going to have much longer legs under it since it supports everything the Exynos Quad does plus some. My expectation is that with time the code will be optimized for the newer A15 architecture better where the Exynos A9 is likely much more mature already. It could be we see a shift in performance as the code catches up to the hardware capabilities. Our wonderful devs will be a huge influence on this and where it goes. I know I would want to develop to take advantage of the A15 feature sets because they are in-addition-to the A9 feature sets that they both support.
My hope is that anyone who is trying to make a good purchasing decision is doing so with some intent. Going with an EU SGSIII when you want to take advantage of LTE data is going to cause you heartache. It cannot and will not work on your LTE network. Likewise, if you live somewhere where LTE doesn't exist or you simply don't care to have that ability, buying the US SGSIII may not be the best choice all things considered. So in some cases you see the CPU might not be the gating item that causes you to choose one way or another.
Today's smartphones are powerful devices. In today's wireless world many times our hardware choice is a two-year-long commitment, no small thing to some. If you have specific requirements for your handset, you should know you have options. But you should also be able to make an educated decision. The choice is yours, do with it what you will.
SlimJ87D said:
One thing you won't see me doing is using pointless synthetic benchmarks to justify a purchase or position. I use my device heavily but not for games. Web browsing, multiple email, etc......I use LTE heavily. A device can get a bajillion points in <insert your choice of synthetic benchmark> but that doesn't make me any more productive. Those into gaming will probably be more comfortable with the EU Exynos Quad, I'll just say that up front....it has a stronger GPU, but this isn't about that. It would be nice to keep this thread about the technology, not synthetic benchmark scores.
This is not a benchmark comparison thread, as simply put in the OP. Please create a synthetic benchmark thread for synthetic benchmark comparisons. Please read the OP before commenting. I was really hoping you were going to offer more technical information to contribute as you seem to be up to date on things. I expected more than a cut and paste "me too" synthetic benchmark from you....congrats, you can now run Antutu faster....
Thanks for the info, but Qualcomm's architecture does not quite follow ARM's blueprint/guideline. They did a huge modification on their first AP (Snapdragon 1) to push over 1GHz, and it caused low power efficiency, application compatibility and heat issues compared to Sammy's legit 1GHz Hummingbird. And for some reason Qualcomm decided to keep improving their malfunctioning architecture (Scorpion) instead of throwing it away, and that inferiority continues through all Scorpion chips regardless of generation. Their only selling point and benefit was the one-less-chip solution, and the LTE baseband chip nowadays.
Personally I don't think the S4 is based on the A15 architecture, and it is slower than the international Note's Exynos in many comparison benchmarks/reviews.
The Exynos in the GS3 is made on a 32nm node, which is better than the 45nm one in the Note. I don't think Sammy has figured out the ideal scheduler yet for the Android system and applications to use those four cores efficiently, but it will show a significant performance boost over coming updates, as happened in the GS2's case.
Sent from my SAMSUNG-SGH-I717 using xda premium
Radukk said:
Thanks for the info, but Qualcomm's architecture does not quite follow ARM's blueprint/guideline. They did a huge modification on their first AP (Snapdragon 1) to push over 1GHz, and it caused low power efficiency, application compatibility and heat issues compared to Sammy's legit 1GHz Hummingbird. And for some reason Qualcomm decided to keep improving their malfunctioning architecture (Scorpion) instead of throwing it away, and that inferiority continues through all Scorpion chips regardless of generation. Their only selling point and benefit was the one-less-chip solution, and the LTE baseband chip nowadays.
Personally I don't think the S4 is based on the A15 architecture, and it is slower than the international Note's Exynos in many comparison benchmarks/reviews.
The Exynos in the GS3 is made on a 32nm node, which is better than the 45nm one in the Note. I don't think Sammy has figured out the ideal scheduler yet for the Android system and applications to use those four cores efficiently, but it will show a significant performance boost over coming updates, as happened in the GS2's case.
Sent from my SAMSUNG-SGH-I717 using xda premium
The fact that both CPUs are modified versions of their ARM-derived variants is captured in the OP, as is the fact that most if not all comparisons reference the 40nm Exynos as opposed to the newer 32nm process, also mentioned in the OP.
Thanks
Why would the Windows environment even matter at this moment?
Isn't MS setting the hardware specs for the ARM version of the devices?
As for LTE compatibility, it's supposedly getting released in the Korean market with LTE and 2GB of RAM, and this was the speculation from the beginning.
Specific discussion of the processors is different to general discussion on comparison.
Thread cleaned. Please keep to this topic.
jamesnmandy said:
When looking at the two, one must understand that some of the documentation available is comparing what was available at the time they were written. In most cases A9 info is based on the 40nm manufacturing process. Samsung's Exynos is built using newer 32nm HKMG manufacturing processes. Qualcomm S4 Krait is built on a newer smaller 28nm manufacturing process. Generally speaking, the smaller the process, the less heat and power a cpu will generate because of the much denser transistor count. There is also power leakage, etc......not going to get into it because frankly, I haven't read enough to speak to it much.
Software written for the A9 derived Exynos has been around for quite some time as opposed to A15 feature code. I really don't know much about this myself, but I would expect that the A15 based solution is going to have much longer legs under it since it supports everything the Exynos Quad does plus some. My expectation is that with time the code will be optimized for the newer A15 architecture better where the Exynos A9 is likely much more mature already. It could be we see a shift in performance as the code catches up to the hardware capabilities. Our wonderful devs will be a huge influence on this and where it goes. I know I would want to develop to take advantage of the A15 feature sets because they are in-addition-to the A9 feature sets that they both support.
First of all, Samsung's 32nm HKMG process is superior and more power efficient than TSMC's 28nm that's being used in Krait right now, even if the feature size is slightly bigger. HKMG overall is an absolutely big step in reducing transistor leakage, and on top of that Samsung integrated a lot of low-level electrical engineering tricks to lower the power usage, such as load biasing. The quad-core with last-generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size, as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient in W/MHz, so it just proves how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for A15 is absolute hogwash. You don't write software for the A15; the compiler translates it into instruction code, for which the A15 is equal to an A9, i.e. ARMv7-A. The only difference is the SIMD length, which is being doubled/quadrupled, again something you don't really program for in most cases. A15 is mostly IPC improvements and efficiency improvements, not a new instruction set that would warrant a difference in software, else nothing would be compatible.
DX9.3 compliance is senseless in that regard too, as by the time we'll need any of that the new generation of SoCs will be out, so I don't know why we're even bringing this up, as nobody's going to hack Windows 8 onto their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
AndreiLux said:
First of all, Samsung's 32nm HKMG process is superior and more power efficient than TSMC's 28nm that's being used in Krait right now, even if the feature size is slightly bigger. HKMG overall is an absolutely big step in reducing transistor leakage, and on top of that Samsung integrated a lot of low-level electrical engineering tricks to lower the power usage, such as load biasing. The quad-core with last-generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size, as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient in W/MHz, so it just proves how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for A15 is absolute hogwash. You don't write software for the A15; the compiler translates it into instruction code, for which the A15 is equal to an A9, i.e. ARMv7-A. The only difference is the SIMD length, which is being doubled/quadrupled, again something you don't really program for in most cases. A15 is mostly IPC improvements and efficiency improvements, not a new instruction set that would warrant a difference in software, else nothing would be compatible.
DX9.3 compliance is senseless in that regard too, as by the time we'll need any of that the new generation of SoCs will be out, so I don't know why we're even bringing this up, as nobody's going to hack Windows 8 onto their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
So I am happy to make corrections when unbiased data is presented. I will look into some of your claims for myself and update accordingly, but as mentioned in the OP, if you would like to cite specific sources for anything, please include links. Thank you for your input. The entire point of the thread is to document the differences, because a lot of people seem to be looking at the choice as simply 4 or 2 cores, and in similar fashion they gravitate to the bigger number without understanding what they are buying into. Some of your statements claim "hogwash"; as mentioned, I am learning myself and hope to rid the post of any hogwash ASAP. I for one will be trying to get Windows Phone 8 to boot on it if possible; I tried to clarify in the OP that Windows Phone 8 may be possible while Windows RT certainly looks to be a stretch. Thanks
Sent from my DROIDX using xda premium
AndreiLux said:
First of all, Samsung's 32nm HKMG process is superior and more power efficient than TSMC's 28nm that's being used in Krait right now, even if the feature size is slightly bigger. HKMG overall is an absolutely big step in reducing transistor leakage, and on top of that Samsung integrated a lot of low-level electrical engineering tricks to lower the power usage, such as load biasing. The quad-core with last-generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size, as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient in W/MHz, so it just proves how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for A15 is absolute hogwash. You don't write software for the A15; the compiler translates it into instruction code, for which the A15 is equal to an A9, i.e. ARMv7-A. The only difference is the SIMD length, which is being doubled/quadrupled, again something you don't really program for in most cases. A15 is mostly IPC improvements and efficiency improvements, not a new instruction set that would warrant a difference in software, else nothing would be compatible.
DX9.3 compliance is senseless in that regard too, as by the time we'll need any of that the new generation of SoCs will be out, so I don't know why we're even bringing this up, as nobody's going to hack Windows 8 onto their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
Which test in that review shows the 32nm 4412 being more efficient than the 28nm 8260 in the One S?
VT has nothing to do with dualbooting. That only requires a bootloader and of course the operating systems supporting sharing their space.
It might allow with a lot of work to get Android to run under WP8 or WP8 to run under Android in a virtual machine.
The most interesting feature you could achieve with VT is to have two copies of the same operating system running with their own data, cache and storage partitions each. This would allow corporate BYOD to remain more-or-less secure and enforce corporate policies on Exchange clients without requiring the user's private part of the phone to be affected by these restrictions.
However, after years of development, 3D performance on x86 (desktop) platforms is mediocre at best, with, imho, Microsoft Hyper-V being the current winner.
Additionally you claim that WP8 will work on the Qualcomm chip.
This is simply not true: it MIGHT work if a build for that exact SoC is produced (somewhat unlikely).
Since WP8 is closed-source and the hardware is proprietary binary too, it won't be portable.
This is due to ARM not being, like x86, a platform with extensible capability and plug&play support, but rather an embedded system where the software has to be developed for each device individually.
nativestranger said:
Which test in that review shows the 32nm 4412 being more efficient than the 28nm 8260 in the One S?
You have the full load test and the temperature in the link that I posted. Normalize them for battery size, for example to the Asus PadFone (or the One S for that matter, they are similar in their result) at 3.7V*1520mAh = 6.1Wh and the S3 at 3.8V*2100mAh = 7.98Wh >> a 30.8% increase. Normalize the S3's 196 minutes by that and you get 149 minutes. Take into account that the S3's screen is bigger and higher resolution and the result will be more skewed towards the S3. So basically a four-core last-generation chip at full load on all four cores is arguably toe-to-toe in maximum power dissipation with a next-generation dual core. The latter should have been the winner here by a large margin, but it is not. We know it's not due to architectural reasons, so the only thing left is manufacturing. HKMG brings enormous benefits in terms of leakage and here you can see them.
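For anyone who wants to follow that arithmetic, here it is spelled out in Python, using the Wh figures and the 196-minute full-load runtime from the post; everything else is just ratios:
Code:
# Normalizing the S3's full-load runtime by battery energy, per the figures above.
s3_wh, other_wh = 7.98, 6.1    # battery energy: Galaxy S III vs the comparison phone
s3_runtime_min = 196           # S3 full-load runtime from the linked Swedroid test

energy_ratio = s3_wh / other_wh                 # ~1.31, i.e. ~31% more energy on board
normalized_min = s3_runtime_min / energy_ratio  # what the S3 would manage on the smaller battery

print(f"Energy ratio: {energy_ratio:.2f}x")
print(f"Normalized S3 runtime: {normalized_min:.0f} minutes")  # ~150, i.e. the ~149 figure above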
d4fseeker said:
VT has nothing to do with dual-booting. That only requires a bootloader and, of course, operating systems that support sharing their space.
It might, with a lot of work, allow Android to run under WP8, or WP8 to run under Android, in a virtual machine.
The most interesting feature you could achieve with VT is having two copies of the same operating system running, each with its own data, cache, and storage partitions. This would allow corporate BYOD to remain more or less secure and let corporate policies be enforced on Exchange clients without those restrictions affecting the private part of the user's phone.
However, even after years of development, 3D performance under virtualization on x86 (desktop) platforms is mediocre at best, with Microsoft Hyper-V being, imho, the current winner.
Additionally, you claim that WP8 will work on the Qualcomm chip.
This is simply not true: it MIGHT work if a build for that exact SoC is produced (somewhat unlikely).
Since WP8 is closed source and the hardware support is proprietary binary too, it won't be portable.
This is because ARM, unlike x86, is not a platform with extensible capabilities and plug-and-play support, but rather an embedded ecosystem where the software has to be built for each device individually.
Click to expand...
Click to collapse
Changed the text to read
From what I understand Windows Phone 8 may be an option.
Click to expand...
Click to collapse
and
Actual VT implementation may be limited in usefulness, to be seen.
Click to expand...
Click to collapse
TSMC is struggling with their 28nm node and has failed to bring yields up on a High-K Metal Gate process, so they announced they will stick with 28nm SiON for now. The problem is that as nodes get denser, gate leakage increases sharply. HKMG reduces that leakage roughly 100-fold, as stated by GlobalFoundries and Samsung. That is why 32nm HKMG is far superior to 28nm SiON. You can easily find related articles, PR data, and detailed charts about this.
Sent from my SAMSUNG-SGH-I717 using xda premium
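Just to make that leakage point concrete, here is a toy back-of-the-envelope calculation in Python. The absolute wattages are invented purely for illustration; only the roughly 100x reduction factor comes from the claim above:

# Toy illustration of why a ~100x cut in gate leakage matters.
# The absolute numbers here are made up; only the 100x factor is from the post.

dynamic_w = 1.0                        # hypothetical switching power under load, in watts
leakage_sion_w = 0.5                   # hypothetical static leakage on a SiON process, in watts
leakage_hkmg_w = leakage_sion_w / 100  # ~100x lower leakage claimed for HKMG

total_sion = dynamic_w + leakage_sion_w
total_hkmg = dynamic_w + leakage_hkmg_w
saving = total_sion - total_hkmg

print("SiON total: %.2f W, HKMG total: %.2f W" % (total_sion, total_hkmg))
print("Saved by the leakage cut alone: %.2f W (%.0f%%)" % (saving, 100 * saving / total_sion))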
Radukk said:
TSMC is struggling with their 28nm node and has failed to bring yields up on a High-K Metal Gate process, so they announced they will stick with 28nm SiON for now. The problem is that as nodes get denser, gate leakage increases sharply. HKMG reduces that leakage roughly 100-fold, as stated by GlobalFoundries and Samsung. That is why 32nm HKMG is far superior to 28nm SiON. You can easily find related articles, PR data, and detailed charts about this.
Sent from my SAMSUNG-SGH-I717 using xda premium
Click to expand...
Click to collapse
Will update with links when I can; kinda busy with work. If you have links to share, please do.
Sent from my DROIDX using xda premium
jamesnmandy said:
Will update with links when I can; kinda busy with work. If you have links to share, please do.
Sent from my DROIDX using xda premium
Click to expand...
Click to collapse
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
Radukk said:
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
Click to expand...
Click to collapse
Interesting reading. Thanks! :thumbup:
Sent from my GT-I9300 using xda premium
Radukk said:
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
Click to expand...
Click to collapse
For future reference, I would never use Wikipedia as a source of fact. Thanks for the other link; I will update as soon as I can.
Sent from my DROIDX using xda premium
jamesnmandy said:
For future reference, I would never use Wikipedia as a source of fact. Thanks for the other link; I will update as soon as I can.
Sent from my DROIDX using xda premium
Click to expand...
Click to collapse
You know, there's always the sources at the bottom of every Wikipedia article...
AndreiLux said:
You know, there's always the sources at the bottom of every Wikipedia article...
Click to expand...
Click to collapse
You are of course correct, which is why I always drill down and link to the sources, not the article. Just personal preference, I suppose, but this isn't just my idea; I think linking to Wikipedia as a source of fact is generally frowned upon.
no worries

What's inside the iPhone 5?!?

I for one can't wait to hear more about Apple's new A6 chip.
AnandTech originally reported that it was an A15-based dual-core, which would have been a major design win for Apple, since that would be what... 6 months before you'd see any Android phone with an A15 SoC out in any substantial numbers!
But now AnandTech is reporting that Apple made their own CPU very closely modeled on the A15. Sort of a Krait on steroids, if you will.
That choice was apparently the only way they could get close to twice the performance without sacrificing battery life.
The GPU is either the same quad-core SGX543 from the new iPad or a three-core version of that chip. Either way, this means serious GPU muscle for the iPhone 5. I for one sure was blown away by the graphics in the Real Racing 3 demo they showed!
It's really exciting, 'cause it'll push development on all platforms forward. It's getting boring to always see the same 2-3 SoC combinations in Android phones (all the top phones have the same CPU/GPU inside them these days), and it means Android handsets again have their work cut out for them in terms of catching up to Apple. Three or four GPU cores in a phone is crazy powerful!
What do people think powers the iPhone 5/A6? A higher-clocked dual-core A9? A quad-core A9? Apple's own custom CPU?
vszulc said:
I for one can't wait to hear more about Apple's new A6 chip.
AnandTech originally reported that it was an A15-based dual-core, which would have been a major design win for Apple, since that would be what... 6 months before you'd see any Android phone with an A15 SoC out in any substantial numbers!
But now AnandTech is reporting that Apple made their own CPU very closely modeled on the A15. Sort of a Krait on steroids, if you will.
That choice was apparently the only way they could get close to twice the performance without sacrificing battery life.
The GPU is either the same quad-core SGX543 from the new iPad or a three-core version of that chip. Either way, this means serious GPU muscle for the iPhone 5. I for one sure was blown away by the graphics in the Real Racing 3 demo they showed!
It's really exciting, 'cause it'll push development on all platforms forward. It's getting boring to always see the same 2-3 SoC combinations in Android phones (all the top phones have the same CPU/GPU inside them these days), and it means Android handsets again have their work cut out for them in terms of catching up to Apple. Three or four GPU cores in a phone is crazy powerful!
What do people think powers the iPhone 5/A6? A higher-clocked dual-core A9? A quad-core A9? Apple's own custom CPU?
Click to expand...
Click to collapse
Dual-core A15 at 1GHz... but the real answer is TOTAL CRAP.
Please use our sister site for discussing Apple products.
https://www.iphone-developers.com

NVIDIA Tegra 4 vs Nexus 10 processor

They've unveiled it today
http://www.engadget.com/2013/01/06/nvidia-tegra-4-official/
and apparently it's much more powerful and faster than the Exynos in the Nexus 10. I don't know that much about this kind of tech, and I'm probably finally going to buy the Nexus 10 this week if Samsung doesn't unveil a more powerful tablet, so I was wondering if this Tegra 4 processor is worth waiting for until it shows up in a tablet.
May TEGRA 3 Rest in Peace ...
Sent from my GT-I9100 using Tapatalk 2
Yes, that thing is packing heat. Best case, the first device with a Tegra 4 will come out next Christmas, unless they've been hiding something.
cuguy said:
Yes, that thing is packing heat. Best case, the first device with a Tegra 4 will come out next Christmas, unless they've been hiding something.
Click to expand...
Click to collapse
It will be out somewhere between June and August, maybe...
It will not take that long ...
Sent from my GT-I9100 using Tapatalk 2
I think March... mark my words.
Their browser test has the Nexus 10 running Chrome while the Tegra runs the AOSP browser. In my eyes that makes it a 100% unfair comparison.
Between bad experiences with Tegra 2 and 3 (Atrix/TF700) and their insistence that Tegra-optimized games not run on other vendors' SoCs for no real reason other than because they can, I can't even consider a mobile Nvidia device. All they're good for is keeping the more reputable chip makers on their toes.
yes it's nice
It would be interesting to see this with both devices running the AOSP browser! In my experience it is much faster than the current Chrome version (which is still version 18 on Android, compared to 23 on desktop). Maybe the Tegra 4 would be faster as well, but not by that much.
Everything on my N10 is extremely fast and fluid, so I wouldn't wait around for whenever the first Tegra 4 devices become available. Plus it's a Nexus, so you know what you are buying!
Jotokun said:
Their browser test has the Nexus 10 running Chrome while the Tegra runs the AOSP browser. In my eyes that makes it a 100% unfair comparison.
Between bad experiences with Tegra 2 and 3 (Atrix/TF700) and their insistence that Tegra-optimized games not run on other vendors' SoCs for no real reason other than because they can, I can't even consider a mobile Nvidia device. All they're good for is keeping the more reputable chip makers on their toes.
Click to expand...
Click to collapse
Agreed, they're making an apples-and-pears comparison that was undoubtedly set up to show the new processor in a good light. It's only to be expected; it is a sales pitch, after all. It will no doubt be a faster chip, though.
Sent from my Nexus 10 using XDA Premium HD app
I would much rather see a couple of benchmark runs myself. Timing a web browser is no way to test the power of a new chipset.
Still, I would expect Tegra 4 to be WAY, WAY more powerful than the Exynos 5250. Both use the A15 architecture, and Tegra 4 has twice as many CPU cores as we have; that alone is already a big boost in multithreaded apps. Then look at the GPU, where you can't compare the two at all except by end-result numbers. They are just far too different: we have 4 GPU cores, Tegra 4 has 72, but those cores are designed very differently and are nowhere near as powerful per core. It all comes down to each company's definition of what a GPU "core" is. And then you have a smaller process node as well, which by itself already promises to use less power than the larger node the Exynos 5250 uses.
I would honestly expect the Tegra 4 chipset to completely destroy our tablet in terms of performance, but a much fairer comparison would be Tegra 4 against the Exynos 5 Quad. Those two are actually designed to compete with each other.
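On the "twice as many CPU cores" point, how much that actually buys you depends entirely on how parallel the workload is. Here is a minimal Amdahl's-law sketch in Python; the parallel fractions are assumptions chosen for illustration, not measurements of any real app:

# Amdahl's law: the speedup from extra cores is capped by the serial fraction.
# The parallel fractions below are assumed purely for illustration.

def amdahl_speedup(parallel_fraction, cores):
    # Ideal speedup when parallel_fraction of the work scales across the cores
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.5, 0.8, 0.95):             # assumed parallel fractions
    two_core = amdahl_speedup(p, 2)    # dual-core A15 (Exynos 5250-style)
    four_core = amdahl_speedup(p, 4)   # quad-core A15 (Tegra 4-style)
    print("parallel %.0f%%: 2 cores -> %.2fx, 4 cores -> %.2fx" % (p * 100, two_core, four_core))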
If you want to compare Exynos and Tegra 4, then wait for the Exynos 5450 (quad A15), which should ship with the Galaxy S4. The number of cores makes a difference here since the T4 is a quad, but early GL benchmarks show that the A6X and the Exynos 5250 have a better GPU.
First Tegra 4 tablet running stock Android 4.2:
http://www.theverge.com/2013/1/7/3845608/vizio-10-inch-tablet-combines-tegra-4-android-thin-body
Let's hope and pray Vizio managed to add microSD capability to it. Free of Nexus restraints it should be doable, but since it is running stock JB, the storage would have to be mounted as USB mass storage (?). So far this Vizio 10" was the most exciting Android development out of CES. We have a few more press events scheduled in Vegas and then it will all be over.
rashid11 said:
Let's hope and pray Vizio managed to add microSD capability to it. Free of Nexus restraints it should be doable, but since it is running stock JB, the storage would have to be mounted as USB mass storage (?). So far this Vizio 10" was the most exciting Android development out of CES. We have a few more press events scheduled in Vegas and then it will all be over.
Click to expand...
Click to collapse
Don't expect the Nexus advantage of up-to-date software or timely updates.
EniGmA1987 said:
I would much rather see a couple of benchmark runs myself. Timing a web browser is no way to test the power of a new chipset.
Still, I would expect Tegra 4 to be WAY, WAY more powerful than the Exynos 5250. Both use the A15 architecture, and Tegra 4 has twice as many CPU cores as we have; that alone is already a big boost in multithreaded apps. Then look at the GPU, where you can't compare the two at all except by end-result numbers. They are just far too different: we have 4 GPU cores, Tegra 4 has 72, but those cores are designed very differently and are nowhere near as powerful per core. It all comes down to each company's definition of what a GPU "core" is. And then you have a smaller process node as well, which by itself already promises to use less power than the larger node the Exynos 5250 uses.
I would honestly expect the Tegra 4 chipset to completely destroy our tablet in terms of performance, but a much fairer comparison would be Tegra 4 against the Exynos 5 Quad. Those two are actually designed to compete with each other.
Click to expand...
Click to collapse
Look at the new iPhone: only 2 cores (different architecture) beating the higher-clocked dual-core Galaxy S3 in some disciplines...
This presentation is scientifically SO irrelevant, especially because they use different software... I LOL SO HARD at people thinking this is anywhere near comparable.
schnip said:
Look at the new iPhone: only 2 cores (different architecture) beating the higher-clocked dual-core Galaxy S3 in some disciplines...
This presentation is scientifically SO irrelevant, especially because they use different software... I LOL SO HARD at people thinking this is anywhere near comparable.
Click to expand...
Click to collapse
The new iPhone 5 doesn't use the same ARM core as the S3, though; it is a custom design. So the two can be compared against each other just fine to see which architecture is better, and if a lower-clocked CPU gets better scores then we know it is a superior design. This is the basis of all benchmarking. If we were only allowed to compare identical architectures, we wouldn't learn anything.
Tegra 4 uses a (probably slightly modified) A15 core, and the Exynos 5xxx uses a fairly stock A15 core. So a higher-clocked A15 should beat a lower-clocked A15 in a direct comparison no matter what, and when you throw 2 additional cores on top it should always win in multithreaded benchmarks too. Seems pretty common sense to me.
The main difference will be on the graphics side, where Nvidia has its own GPU design compared to Samsung's use of Mali GPUs.
You can still compare them just fine; it just needs both of them running the same browser if a browser comparison is being done. In this PR release Nvidia skewed the results, like all companies do, so we can't really see the difference between the two from those pictures and need to wait for third-party review sites to do proper testing. Yet we can still estimate performance reasonably well, since we already have a baseline for the architecture with this tablet.
"Tegra 4 more powerful than Nexus 10"... well duh! It's a new chip just unveiled by nvidia that won't show up in any on sale devices for at least a couple of months. Tablet and smartphone tech is moving very quickly at the moment, nvidia will hold the android performance crown for a couple of months and then someone (probably samsung or qualcomm) will come along with something even more powerful. Such is the nature of the tablet/smartphone market. People that hold off on buying because there is something better on the horizon will be waiting forever because there will always be a better device just a few months down the line!
EniGmA1987 said:
The new iPhone 5 doesn't use the same ARM core as the S3, though; it is a custom design. So the two can be compared against each other just fine to see which architecture is better, and if a lower-clocked CPU gets better scores then we know it is a superior design. This is the basis of all benchmarking. If we were only allowed to compare identical architectures, we wouldn't learn anything.
Tegra 4 uses a (probably slightly modified) A15 core, and the Exynos 5xxx uses a fairly stock A15 core. So a higher-clocked A15 should beat a lower-clocked A15 in a direct comparison no matter what, and when you throw 2 additional cores on top it should always win in multithreaded benchmarks too. Seems pretty common sense to me.
The main difference will be on the graphics side, where Nvidia has its own GPU design compared to Samsung's use of Mali GPUs.
You can still compare them just fine; it just needs both of them running the same browser if a browser comparison is being done. In this PR release Nvidia skewed the results, like all companies do, so we can't really see the difference between the two from those pictures and need to wait for third-party review sites to do proper testing. Yet we can still estimate performance reasonably well, since we already have a baseline for the architecture with this tablet.
Click to expand...
Click to collapse
That was kind of his point.
I don't think anyone is denying that the Tegra will be faster; what's being disputed is just how much faster. Personally, I don't think it'll be enough to notice in everyday use. Twice the cores does not automatically make a faster CPU: you need software that can properly take advantage of them, and even then it's not a huge plus in everyday tasks. Also, in the past Nvidia has made pretty crappy chips due to compromises, a good example being how the Tegra 2 lacked NEON support. The only concrete advantages I see are more cores and a higher clock rate.
Based on the hype-to-performance ratio of both Tegra 2 and 3, I wouldn't have high hopes until I see legit benchmark results.
What does seem promising, though, is that they are making more significant changes than from T2 to T3, such as dual-channel memory (finally, after 1-2 years of every other SoC having it -.-), and the GPU cores are different too.
Still, the GPU has always been Tegra's weakest point, so I still don't think it can beat an overclocked Mali-T604 by much, even though this time around they will not be the first to debut a next-gen SoC. Given the A15 architecture they can't really screw up the CPU even if they wanted to, so that should be significantly faster than the Exynos 5 Dual.
I've also just read the AnandTech article on power consumption: the SoC in the Nexus 10 consumes several times as much power as other tablet chipsets, which makes me wonder how Nvidia plans to solve the battery-life issue with twice as many cores and a (seemingly) beefier GPU, not to mention phone implementations...
freshlysqueezed said:
First Tegra 4 tablet running stock Android 4.2:
http://www.theverge.com/2013/1/7/3845608/vizio-10-inch-tablet-combines-tegra-4-android-thin-body
Click to expand...
Click to collapse
Apparently this is the tablet that's coming to take the Nexus 10's spot: a Vizio 10-inch tablet with Tegra 4, a 2560 x 1600 resolution, 32GB of storage, and Android 4.2. It should be coming out in Q1 2013. This one makes me want to wait and hear some more about it before I buy the Nexus 10, although to be honest the brand is a bit of a letdown for me.
Edit: for the 10-inch model, key specs (aside from Tegra 4) include a 2,560 x 1,600 display, 32GB of on-board storage, NFC, and dual 5MP / 1.3MP cameras.
http://www.engadget.com/2013/01/07/vizio-10-inch-tegra-4-tablet-hands-on/
