[TEST] Tegra 2 OC vs Tegra 3 - Eee Pad Transformer General

Hello guys! Browsing around the net I found some charts benchmarking the stock Tegra 2 against the Tegra 3 and various other processors. I have always wondered whether, now that the processor can be overclocked to 1.7GHz (70% more clock), the gap to the upcoming quad-core has narrowed (which, as Nvidia has said, is supposed to be 5 times faster than the current SoC).
Could some developers make this comparison? It would be interesting to see the results.
"when looking at the compiler versions and settings used to compile the CoreMark benchmark, the Core 2 Duo numbers were produced via GCC 3.4 and only the standard set of optimizations (-O2), while the Tegra 3 numbers were run on a more recent GCC 4.4 with aggressive optimizations (-O3). Il Sistemista website took a Core 2 Duo T7200 and re-ran the benchmark compiled with GCC 4.4 and the same optimization settings. The results were no longer in favor of NVIDIA, as the Core 2 chip scored about 15,200 points, compared to the Tegra's 11,352."
CoreMark benchmark comparing the NVIDIA Tegra 3 @ 1GHz to various real and hypothetical products.
The CoreMark/MHz index shows how many CoreMarks a particular chip can extract at a given frequency.
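The compiler point in the quote above is easy to demonstrate for yourself. Below is a minimal sketch (a toy loop, not CoreMark itself; the workload is invented for illustration) that you can build twice at different optimization levels and watch the "score" move, which is exactly why EEMBC's run rules ask for compiler version and flags to be reported alongside results:
Code:
/* flagdemo.c - toy throughput loop, purely illustrative (not CoreMark).
 * Build and run it twice, then compare:
 *   gcc -O2 flagdemo.c -o demo_o2 && ./demo_o2
 *   gcc -O3 flagdemo.c -o demo_o3 && ./demo_o3
 * The reported iterations/sec can shift noticeably with the flags alone,
 * on the same silicon, just like the Core 2 Duo numbers in the quote.
 */
#include <stdio.h>
#include <time.h>

#define N 200000000UL

int main(void) {
    unsigned acc = 0;
    clock_t t0 = clock();
    for (unsigned long i = 0; i < N; i++)
        acc += (unsigned)(i ^ (i >> 3));   /* cheap mixing work */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    /* printing acc keeps the loop from being optimized away entirely */
    printf("acc=%u  %.0f iterations/sec\n", acc, N / secs);
    return 0;
}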

WoW.... more powerful than a Core 2 Duo O_O!!!
Now THAT'S something to get excited about!!!

Ahmed_PDA2K said:
WoW.... more powerful than a Core 2 Duo O_O!!!
Now THAT'S something to get excited about!!!
If you read the bolded part: once they fixed the compiler optimizations, the Tegra did not hold up.

At this point it would be really interesting to know how much the gap has closed with the super-overclock of the Tegra to 1.7GHz (is that really the maximum achievable?). I don't know how to operate CoreMark 1.0, so I'm asking someone in the community to check whether the values are underestimated or substantially better than those offered by Nvidia, and above all to see whether Tegra 3 is really worth it.
I recently read that Nvidia is pushing its projects hard and development of the Tegra 4 has already popped up. The specifications call for the Tegra 4 chip to be manufactured at 28nm. IMHO it would make more sense to wait for that before replacing Tegra 2 hardware (so these benchmarks could be a way to understand this in context).

I highly doubt many people can get their Tegra 2 up to 1.7GHz. I can only get mine stable at 1.4, probably 1.5, and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7GHz produces a lot of heat, almost in the danger zone, but I could be exaggerating a little.
Sent from my ADR6300 using Tapatalk

In any case it would be nice to have a comparison at all frequencies against the values Nvidia published, to give device owners an idea of the hardware's potential. But I think it's a matter of development and optimization, as we are seeing here on the forum these days... the tablet is slowly improving on all fronts.

I'm sure that once we have access to the OS code the tf will run like a beast! I had an OG Droid and the difference between stock and modded was mind blowing.
Sent from my ADR6300 using Tapatalk

brando56894 said:
I highly doubt many people can get their Tegra 2 up to 1.7GHz. I can only get mine stable at 1.4, probably 1.5, and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7GHz produces a lot of heat, almost in the danger zone, but I could be exaggerating a little.
Sent from my ADR6300 using Tapatalk
I'm one of the few with one that does 1.7GHz no problem.

It's not only the CPU speed of the Tegra 3 that will pwn the Transformer 1: a 12-core GPU, support for dynamic lighting, high-profile HD decoding and much more. No way a Tegra 2 will outperform the Tegra 3.
Just face it: the Transformer 1 will be pwned big time by the Transformer 2, launching Oct/Nov 2011....
Not to mention the new TI CPUs coming Q1 2012.... they will pwn even more than the Tegra 3...
Sent from my Desire HD using XDA App

Tempie007 said:
It's not only the CPU speed of the Tegra 3 that will pwn the Transformer 1: a 12-core GPU, support for dynamic lighting, high-profile HD decoding and much more. No way a Tegra 2 will outperform the Tegra 3.
Just face it: the Transformer 1 will be pwned big time by the Transformer 2, launching Oct/Nov 2011....
Not to mention the new TI CPUs coming Q1 2012.... they will pwn even more than the Tegra 3...
Sent from my Desire HD using XDA App
Yeah, it's true that it is stronger, but my thesis is that Tegra 3 is only a technology shift. Although it has 12 compute cores capable of dynamically processing lighting, the CPU produces the values above, and it would be interesting to relate them to the values of an overclocked Tegra 2 to see whether Tegra 3 can really be a worthwhile replacement. We are looking at two different SoCs, the first a dual-core and the second a quad-core, but common sense tells me that, if well exploited, this device could give the next model a long, hard time (remember the story of the HTC HD2, released back in 2009 and still one of the longest-lived phones thanks to constant software rework). I'd argue these devices don't need raw power so much as computational finesse, since batteries aren't keeping pace, and adding capacity probably won't let them make the devices any slimmer in the near future.
Does someone know how to use CoreMark to update these values?

Of course the Tegra 3 will beat the crap out of the Tegra 2; that's like comparing a Core 2 Duo to a Core i7, lol.
chatch15117 said:
I'm one of the few with one that does 1.7GHz no problem.
Lucky bastard! :-D how many mV are you running it at?

Never tried 1.7GHz but have done 1.5-1.6 on 3 tabs without messing with voltages. Actually, right now I'm running 1.2GHz and messing with LOWERING the voltages.

Coremark download
If anyone can compile CoreMark to run under Android, here is the software to download:
Download the Coremark Software readme file
Download the CoreMark Software documentation
This documentation answers all questions about porting, running and score reporting
Download the Coremark Software
Download the CoreMark Software MD5 Checksum file to verify download
Use this to verify the downloads: >md5sum -c coremark_<version>.md5
Download the CoreMark Platform Ports
These CoreMark ports are intended to be used as examples for your porting efforts. They have not been tested by EEMBC, nor do we guarantee that they will work without modifications.
Here is the results table:
http://www.coremark.org/benchmark/index.php
If a dev can compile it and run some tests to compare the results at the new frequencies, that would be great!
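For anyone attempting the port, most of the work lives in CoreMark's porting files (core_portme.c/h). Here is a minimal sketch of the timing hooks using POSIX clock_gettime, which should work under an Android/Linux cross-toolchain; treat the exact names and types as assumptions and check them against core_portme.h in the official download before relying on this:
Code:
/* core_portme.c (fragment) - sketch of CoreMark timing hooks for a
 * Linux/Android port, assuming the documented porting interface.
 * Verify names/types against core_portme.h from the official package.
 */
#include <time.h>

typedef unsigned long long CORE_TICKS;  /* normally defined in core_portme.h */

static struct timespec start_ts, stop_ts;

void start_time(void) { clock_gettime(CLOCK_MONOTONIC, &start_ts); }
void stop_time(void)  { clock_gettime(CLOCK_MONOTONIC, &stop_ts); }

CORE_TICKS get_time(void) {
    /* elapsed nanoseconds between stop_time() and start_time() */
    long long ns = (long long)(stop_ts.tv_sec - start_ts.tv_sec) * 1000000000LL
                 + (stop_ts.tv_nsec - start_ts.tv_nsec);
    return (CORE_TICKS)ns;
}

double time_in_secs(CORE_TICKS ticks) { return (double)ticks / 1e9; }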

brando56894 said:
I highly doubt many people can get their Tegra 2 up to 1.7GHz. I can only get mine stable at 1.4, probably 1.5, and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7GHz produces a lot of heat, almost in the danger zone, but I could be exaggerating a little.
Sent from my ADR6300 using Tapatalk
There is no danger zone... at least not on my kernel. I haven't posted the one for 3.1 (only 3.01), but there is a built-in downclock once the chip reaches temps beyond 47degC... This only happens if the tablet (in laptop form) is left in a bag with the screen on (by accident; sometimes the hinge says it's open when it's closed). So if you can run 1.7GHz stable, there's no reason not to, as you'll be protected from heat by the built-in thermal throttling.
Default thermal throttling had some seriously bizarre ranges... 80-120degC.
I wish nvidia's boards were produced like this:
http://hardkernel.com/renewal_2011/main.php
http://www.hardkernel.com/renewal_2011/products/prdt_info.php?g_code=G129705564426
Something modular would be a nice change in the mobile tech segment.. Swapping SoCs would require some serious low-level work tho.. Unless writing to the bootloader and boot partitions were made easy via external interface.. something like that.
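What Blades describes is easy to picture in code. The sketch below is not his kernel patch; it is a userspace illustration built on the standard Linux sysfs thermal and cpufreq interfaces (zone and policy paths vary by device, writing scaling_max_freq needs root, and the 47degC trip point is just the figure from his post):
Code:
/* throttle.c - illustrative temperature-triggered downclock, in the spirit
 * of the kernel behaviour described above (not the actual kernel code).
 * Reads the standard sysfs thermal zone and caps the cpufreq max clock.
 */
#include <stdio.h>

#define TRIP_MILLI_C 47000      /* 47 degC trip point from the post */
#define FULL_KHZ 1700000        /* 1.7 GHz overclock                */
#define SAFE_KHZ 1000000        /* fallback clock while hot         */

int main(void) {
    int milli_c = 0;
    FILE *f = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
    if (!f || fscanf(f, "%d", &milli_c) != 1) { perror("thermal"); return 1; }
    fclose(f);

    int cap = (milli_c > TRIP_MILLI_C) ? SAFE_KHZ : FULL_KHZ;
    f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq", "w");
    if (!f) { perror("cpufreq"); return 1; }
    fprintf(f, "%d\n", cap);    /* cap above the trip point, restore below */
    fclose(f);

    printf("temp=%d mC -> max freq %d kHz\n", milli_c, cap);
    return 0;
}

Run something like this in a loop (or a kernel timer, in the real implementation) and you get exactly the behaviour described: free to run 1.7GHz while cool, automatically capped when the tablet cooks in a bag.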

Blades said:
There is no danger zone... at least not on my kernel. I haven't posted the one for 3.1 (only 3.01), but there is a built-in downclock once the chip reaches temps beyond 47degC... This only happens if the tablet (in laptop form) is left in a bag with the screen on (by accident; sometimes the hinge says it's open when it's closed). So if you can run 1.7GHz stable, there's no reason not to, as you'll be protected from heat by the built-in thermal throttling.
Default thermal throttling had some seriously bizarre ranges... 80-120degC.
I wish nvidia's boards were produced like this:
http://hardkernel.com/renewal_2011/main.php
http://www.hardkernel.com/renewal_2011/products/prdt_info.php?g_code=G129705564426
Something modular would be a nice change in the mobile tech segment.. Swapping SoCs would require some serious low-level work tho.. Unless writing to the bootloader and boot partitions were made easy via external interface.. something like that.
And so? Are you able to use CoreMark to run some benchmarks on your device and compare them with the Tegra 3 results? Because I would test that here.

devilpera64 said:
"when looking at the compiler versions and settings used to compile the CoreMark benchmark, the Core 2 Duo numbers were produced via GCC 3.4 and only the standard set of optimizations (-O2), while the Tegra 3 numbers were run on a more recent GCC 4.4 with aggressive optimizations (-O3).
That statement there tells why benchmarks are extremely useless and misleading.

I'm one of the few that can reach 1.7GHz no problem too. Never ran it long enough to get hot, though; maybe 30 mins just to run benchmarks. Never had FCs or had my system crash.


Is HummingBird Really Slower than Snapdragon Gen 2? [Independent of JIT]

I know this topic has been debated over time, but I noticed that most people attribute the differences in performance to the firmware difference (2.1 vs. 2.2).
Today an article was released about the G2 overclocked to 1.42GHz. Along with the article I noticed a "Native Benchmark" using SetCPU, which doesn't use JIT.
Lower is Better.
G2 Result:
[screenshot: G2 native benchmark scores]
Now My Vibrant at 1.2 Ghz:
C: 702.89
Neon: 283.15
The difference between the two phones is so great that I doubt it is due to the 200 MHz difference alone.
As a comparison, my score at regular 1 GHz is:
C: 839.21
Neon: 334.51
There is about a 130 ms decrease for the 200 MHz overclock, which means a Vibrant at 1.4 GHz would put the two CPUs really close to each other, with the G2 keeping a slight edge. Remember, this test is supposed to be JIT-independent, running native code. But since the Vibrant can only be stably overclocked to 1.3 GHz (with what is available, anyway), the newer generation of Snapdragon may just be more efficient than the Hummingbird, whatever we Galaxy owners would like to believe.
Another thing to keep in mind, though, is that Snapdragons supposedly have an edge in the NEON instruction set, so I didn't read too much into that score.
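Spelling that arithmetic out from the two data points given above (lower is better, and assuming roughly linear scaling with clock, which is a simplification):
Code:
% Per-step gain: (839.21 - 702.89) ms / 0.2 GHz ~= 136 ms per 200 MHz
\[
t(1.3\,\mathrm{GHz}) \approx 702.89 - 68 \approx 635\ \mathrm{ms}, \qquad
t(1.4\,\mathrm{GHz}) \approx 702.89 - 136 \approx 567\ \mathrm{ms}
\]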
It appears to be true.
It appears the Hummingbird is not only slower than the new-generation Scorpions; it also seems unable to fully capture the CPU performance gain of the Dalvik JIT compiler in Froyo 2.2.
http://www.youtube.com/watch?v=yAZYSVr2Bhc
Dunno, Something Is Not Right About This 2.2
The Thing That Really Bugs Me Is 2.2 Is Supposed To Allow The Full Functionality Of Our 512MB Of RAM... But It Doesn't
Erickomen27 said:
Dunno, Something Is Not Right About This 2.2
The Thing That Really Bugs Me Is 2.2 Is Supposed To Allow The Full Functionality Of Our 512MB Of RAM... But It Doesn't
It's not 2.2, it's Samsung.
SamsungVibrant said:
It's not 2.2, it's Samsung.
I agree, they should use ext4 on their phones.
I don't see why they stick to their old RFS.
SamsungVibrant said:
It appears the Hummingbird is not only slower than the new-generation Scorpions; it also seems unable to fully capture the CPU performance gain of the Dalvik JIT compiler in Froyo 2.2.
http://www.youtube.com/watch?v=yAZYSVr2Bhc
I'm sorry, but could you explain what your youtube link has to do with the topic? I'm curious, as I wasn't any wiser on the question at hand when I watched it.
NEON is an architecture extension for the ARM Cortex™-A series processors.*
Is Snapdragon an ARM Cortex™-A series processor? NO!
Remember the SSE instruction set on Intel, and the AMD vs. Intel war?
Welcome back, LOL
*The source for NEON: http://www.arm.com/products/processors/technologies/neon.php
Probably is, but does it really matter?
Sent from my SGS Vibrant.
Scorpion/Snapdragon has faster FPU performance due to a 128-bit SIMD FPU datapath, compared to the Cortex-A8's 64-bit implementation. Both FPUs process the same SIMD-style instructions; the Scorpion/Snapdragon just happens to be able to do twice as much per pass.
http://www.insidedsp.com/Articles/t...ualcomm-Reveals-Details-on-Scorpion-Core.aspx
2.2 isn't going to magically give the Galaxy S similarly high Scorpion/Snapdragon scores. Just look at the Droid X and other Cortex-A8 phones that already have official 2.2 ROMs; they average Linpack scores of 15-20.
This doesn't make the Hummingbird a bad CPU at all, LOL. These are stupid benchmarks IMHO that won't show in real-world use... maybe when the OS matures and becomes more complex, but not now, and even by then we will have dual-core CPUs. It's a gimmick for HTC to have the "Fastest CPU".
IMO in real-world use they are pretty much on par, but when you look at GPU performance it's quite obvious the Galaxy S pulls ahead thanks to the 90 MT/s PowerVR SGX540.
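To make the datapath point concrete: NEON's 128-bit ("q") instructions operate on four 32-bit floats at once. A minimal sketch in C, assuming an ARM toolchain with NEON enabled; the A8-vs-Scorpion timing claim in the comment is just the post's point restated:
Code:
/* neon_add.c - one NEON q-form instruction handles four floats at once.
 * Build for ARM with NEON, e.g.: gcc -O2 -mfpu=neon neon_add.c
 * A Cortex-A8's 64-bit NEON datapath works through a 128-bit op in two
 * halves; Scorpion's 128-bit datapath does it in one pass, which is the
 * "twice as much" from the post above.
 */
#include <arm_neon.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    float32x4_t va = vld1q_f32(a);    /* load 4 lanes          */
    float32x4_t vb = vld1q_f32(b);
    vst1q_f32(r, vaddq_f32(va, vb));  /* 4 adds, 1 instruction */

    printf("%.1f %.1f %.1f %.1f\n", r[0], r[1], r[2], r[3]);
    return 0;
}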
demo23019 said:
Scorpion/Snapdragon has faster FPU performance due to a 128-bit SIMD FPU datapath, compared to the Cortex-A8's 64-bit implementation. Both FPUs process the same SIMD-style instructions; the Scorpion/Snapdragon just happens to be able to do twice as much per pass.
http://www.insidedsp.com/Articles/t...ualcomm-Reveals-Details-on-Scorpion-Core.aspx
2.2 isn't going to magically give the Galaxy S similarly high Scorpion/Snapdragon scores. Just look at the Droid X and other Cortex-A8 phones that already have official 2.2 ROMs; they average Linpack scores of 15-20.
This doesn't make the Hummingbird a bad CPU at all, LOL. These are stupid benchmarks IMHO that won't show in real-world use... maybe when the OS matures and becomes more complex, but not now, and even by then we will have dual-core CPUs. It's a gimmick for HTC to have the "Fastest CPU".
IMO in real-world use they are pretty much on par, but when you look at GPU performance it's quite obvious the Galaxy S pulls ahead thanks to the 90 MT/s PowerVR SGX540.
Once again quoting ARM HQ website:
NEON technology is cleanly architected and works seamlessly with its own independent pipeline and register file.
NEON technology is a 128 bit SIMD (Single Instruction, Multiple Data) architecture extension for the ARM Cortex™-A series processors, designed to provide flexible and powerful acceleration for consumer multimedia applications, delivering a significantly enhanced user experience. It has 32 registers, 64-bits wide (dual view as 16 registers, 128-bits wide).
Scorpion is not an ARM Cortex™-A series processor.
Fuskand said:
I'm sorry, but could you explain what your youtube link has to do with the topic? I'm curious, as I wasn't any wiser on the question at hand when I watched it.
I provided the link because the first part of it talks about the JIT compiler, which increases CPU performance. I put that there in case someone had never heard of it before; thus, when I mentioned that the Hummingbird cannot take full advantage of the JIT compiler, people would know what I'm talking about.
demo23019 said:
Scorpion/Snapdragon has faster FPU performance due to a 128-bit SIMD FPU datapath, compared to the Cortex-A8's 64-bit implementation. Both FPUs process the same SIMD-style instructions; the Scorpion/Snapdragon just happens to be able to do twice as much per pass.
http://www.insidedsp.com/Articles/t...ualcomm-Reveals-Details-on-Scorpion-Core.aspx
2.2 isn't going to magically give the Galaxy S similarly high Scorpion/Snapdragon scores. Just look at the Droid X and other Cortex-A8 phones that already have official 2.2 ROMs; they average Linpack scores of 15-20.
This doesn't make the Hummingbird a bad CPU at all, LOL. These are stupid benchmarks IMHO that won't show in real-world use... maybe when the OS matures and becomes more complex, but not now, and even by then we will have dual-core CPUs. It's a gimmick for HTC to have the "Fastest CPU".
IMO in real-world use they are pretty much on par, but when you look at GPU performance it's quite obvious the Galaxy S pulls ahead thanks to the 90 MT/s PowerVR SGX540.
Search the net; people have made real-world videos of the Galaxy S running 2.2 compared to the G2. The G2 is faster in the real world at things like launching apps.
lqaddict said:
Once again quoting ARM HQ website:
Scorpion is not an ARM Cortex™-A series processor.
LOL, I never said the Scorpion is an ARM Cortex™-A.
Try reading my post again.
SamsungVibrant said:
Search the net; people have made real-world videos of the Galaxy S running 2.2 compared to the G2. The G2 is faster in the real world at things like launching apps.
LOL, if it is faster it might be by at most 1-2 seconds if it's lucky.
Sorry, it's going to take a lot more than that to impress me... again, it's a phone, not a high-end PC.
SamsungVibrant said:
Search the net; people have made real-world videos of the Galaxy S running 2.2 compared to the G2. The G2 is faster in the real world at things like launching apps.
Largely due to the different filesystem implementations; once there is a workable hack to convert the entire filesystem on the Galaxy S to a real filesystem, you can make the comparison on things like launching apps.
Demo, I didn't mean to come off as a ****, I was just pointing out the flaw in the OP's benchmark: the NEON instruction-set comparison is flawed. The G2 processor is ARMv7, which is the base of the Cortex-A8; the Cortex-A8 adds instructions specifically targeted at applications like multimedia, and that's where NEON comes into place.
lqaddict said:
Largely due to the different filesystem implementations; once there is a workable hack to convert the entire filesystem on the Galaxy S to a real filesystem, you can make the comparison on things like launching apps.
Agreed. +10 char
lqaddict said:
Demo, I didn't mean to come off as a ****, I was just pointing out the flaw in the OP's benchmark: the NEON instruction-set comparison is flawed. The G2 processor is ARMv7, which is the base of the Cortex-A8; the Cortex-A8 adds instructions specifically targeted at applications like multimedia, and that's where NEON comes into place.
No problem, I didn't really take it that way.
Also, I noticed I overlooked a lot of things in the OP... blame the ADD.
What difference does it make? In real world use the difference is negligible. And in three months our phones will be mid-tier anyway. At least we don't have hinges that will fall apart in two months.
How is the Hummingbird not able to fully take advantage of JIT?
Well, there is a fix for our phones now. And from what I can tell, there's no way the G2 can open apps faster than my Vibrant with the z4mod. It's smoking fast, by far the fastest I've ever seen this phone. No delays whatsoever. Can't wait till I get Froyo with OC/UV; this will be unreal. I feel like this phone is a high-end PC running Android or something. When I say instant, it's instant, lol.
Kubernetes said:
What difference does it make? In real world use the difference is negligible. And in three months our phones will be mid-tier anyway. At least we don't have hinges that will fall apart in two months.
Exactly.
People seem to forget that the SGS line is like 6 months old now; we should be glad it's still doing as well as it is.
Then there's the fact that there aren't many other phones that come with 16GB internal. To me, having 16GB and being able to upgrade to 48 [minus the 2GB that Samsung steals] total is worth way more than starting with 4GB [1.5GB usable] and upgrading to a max of 36 [minus what HTC steals from internal].
But, if you don't like your phone, SELL IT while it's still worth something!

People are saying the Nexus 7's processor is faster than the Nexus 10's. Is this true?

Thx for any feedback
Sent from my SGH-I747 using xda app-developers app
Who's saying that? Lol. Must be misinformed.
Sent from my EVO using Tapatalk 2
Nexus 7: 1.2GHz quad core, A9 based
Nexus 10: 1.7GHz dual core, A15 based
It's probably quite difficult to find a use case on a tablet where more than two cores provides any tangible performance increase, but the much higher clock rate and newer architecture should definitely make a difference.
Sent from my HTC One X using xda app-developers app
Benchmarks show that the new dual-core is much faster than the N7's quad. The 2GB of RAM helps, too.
I suspect it's mostly because people don't understand how 2 cores can be faster than 4 cores.
Yes, they can, and yes, they are. A15 CPUs have a new and vastly superior architecture.
It's top notch. Although core counts are great for marketing, don't get too worried. After people get their hands on it, we will have a better idea of performance. I remain confident I'm about to purchase the most powerful tablet available.
Sent from my HTC One S using Tapatalk 2
Biohazard0289 said:
It's top notch. Although core counts are great for marketing, don't get too worried. After people get their hands on it, we will have a better idea of performance. I remain confident I'm about to purchase the most powerful tablet available.
Sent from my HTC One S using Tapatalk 2
Technically that's not true; the latest iPad, I believe, vastly outperforms this thing. That said, how is it going to make use of all that power other than playing games, which, let's face it, is all iPad buyers really do...
Not like it'll be cracking aircrack dumps, eh.
Sent from my GT-I9300 using Tapatalk 2
I saw a benchmark that placed the Nexus 10 under the Transformer TF300 (AnTuTu). Would this be accurate? I would've thought that the Nexus 10 would outperform the Transformer, which has a quad-core Tegra 3 processor.
Sent from my ASUS Transformer Pad TF300T using xda app-developers app
UndisputedGuy said:
I saw a benchmark that placed the Nexus 10 under the Transformer TF300 (AnTuTu). Would this be accurate? I would've thought that the Nexus 10 would outperform the Transformer, which has a quad-core Tegra 3 processor.
Sent from my ASUS Transformer Pad TF300T using xda app-developers app
Saw that too, as well as benchmarks in which the Note 10.1 performs better than the Nexus 10.
http://www.engadget.com/2012/11/02/nexus-10-review/
Funnily enough, other sites really do say the Nexus 10 is the fastest they have seen so far.
4z01235 said:
Nexus 7: 1.2GHz quad core, A9 based
Nexus 10: 1.7GHz dual core, A15 based
4 * 1.2 GHz = 4.8 GHz
2 * 1.7 GHz = 3.4 GHz
Clearly, the Nexus 7 is much faster.
That is the level of discussion behind the conclusion stated in the topic. Even in benchmarks, which all reviewers agree don't mirror the performance they experience in real-world use, the Nexus 10 comes out way higher than the Nexus 7 while pushing 4 times the pixels! The Nexus 10 runs circles around its little brother...
The differences are so vast, it is almost incomprehensible to some. I'll use the same reference I use with desktop processors. When quad-cores really started making big appearances, software and games weren't exactly capable of utilizing their abilities. So an E8400 3.0GHz dual-core would outperform its bigger quad-core brother running at the same frequency and architecture in a lot of applications. Back in 2008, gaming rigs almost always sported dual-core processors. The simple fact is that those processors use cache that is on the die: 6MB for two cores is better than 8MB for four cores.
So, until software developers really have a need for a quad-core, the dual-cores will keep up with the quad-cores just fine. Benchmarks, however, almost never show real-world performance. I'm sure this dual-core is going to surprise even the more skeptical guys and gals.
Sent from the Blue Kuban on my Epic 4G Touch
Maxey said:
Saw that too, as well as benchmarks in which the Note 10.1 performs better than the Nexus 10.
http://www.engadget.com/2012/11/02/nexus-10-review/
Funnily enough, other sites really do say the Nexus 10 is the fastest they have seen so far.
The benchmarks are selling the N10 very, very short. There is a great review linked in this post where the author says exactly that.
Here's the post that stamps its feet and declares the absolute truth:
Cores aren't everything.
And you can think that we're just a bunch of fanboys trying to justify the dual-core in the Nexus 10, but even non-fanboys who know how these processors work will tell you a dual-core A15 is far more powerful than a quad-core A9.
Now, everything below this post is purely observation-based (I'm not an engineer, nor have I really studied up on computer parts), but I think it gives a casual user a good idea of how a CPU works.
So, before you go out screaming "wtf. why are there only 2 cores in the nexus 10. lol so noob," ask yourself this: what do extra cores even do? If you can't answer that, then don't complain, because you clearly don't know what you're talking about. It's not a simple multiplication of the frequency (for example, 4 x 1.2GHz = 4.8GHz). A 4.8GHz phone would be able to run Crysis without a problem (with a dedicated GPU of course, but that's beside the point). That's obviously not the case.
The major difference between the A15 and the A9 is their microarchitecture. You can think of it this way. A mouse has to go through a course. There are two courses:
[image: a winding maze]
and
[image: a straight path]
We call the maze A9 and we call the straight route A15.
Now, we will pretend "GHz" is a measure of intelligence (higher is better). A mouse with an intuition level of 2.0GHz has to go through maze A9. Complicated, right? It'd take a while, even if the mouse was kind of smart. But now a mouse with an intuition level of 1.7GHz gets route A15. Easy. The route is a straight line; who wouldn't be able to find the end? So in the end we ask ourselves, "Who gets the job done faster?" Obviously the mouse on the A15 route does, even though it's not as smart. In this way, the A15 is simply more efficient. So clock speeds (things like 1.0GHz, 2.0GHz, 1.5GHz, 1.8GHz) are only a part of the story. You need to factor in the microarchitecture (or in this example, the way the course is laid out).
Now, we go into cores. We can think of cores as splitting the work. Consider this scenario: we want to know how much time it takes to complete the course four times (regardless of how many mice we use; we just want it completed 4x). A quad-core CPU containing the 2.0GHz mice can be represented by 4 mazes; the dual-core CPU containing the 1.7GHz mice can be represented by 2 straight routes.
With the quad-core CPU, we finish the task in one go: put a mouse in each maze, and once they're done, we've completed the course four times. With the dual-core CPU, we finish the task in two goes; each mouse has to run its route twice to complete it 4x. However, look at the "microarchitecture," i.e. the shape of the route. Even though the dual-core CPU (A15) needs two goes, it still finishes a lot faster, because its route is far easier to run. This makes the A15 more powerful: it completes tasks quickly.
So when judging the "power" of SoCs, you need to keep three things in mind: cores, clock speed, and microarchitecture.
Clock speed = frequency, such as 1.5GHz
Cores = dual-core, quad-core
Microarchitecture = A9, A15
Sometimes the microarchitecture won't be a big enough improvement to justify a seriously lopsided clock speed. For example, a 4.5GHz Intel Sandy Bridge CPU will be tons faster than a 3.2GHz Intel Ivy Bridge CPU even though Ivy Bridge is the newer microarchitecture. But a Sandy Bridge CPU clocked at 4.5GHz will be slower than an Ivy Bridge CPU clocked at 4.4GHz, because the microarchitecture is slightly better.
Anyway, I hope this clears things up. I know the information here is probably not 100% accurate, but I'm hoping this is easier to understand than pure numbers and technical talk.
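The maze post boils down to a three-factor model. Here it is in code form; the per-clock "work" factors are invented for illustration (A15 is usually cited as roughly 1.5-2x A9 per clock, but the exact number depends on the workload):
Code:
/* toymodel.c - cores x clock x work-per-clock, restating the maze analogy.
 * The work-per-clock factors are illustrative assumptions, not measurements.
 */
#include <stdio.h>

int main(void) {
    double a9_per_clock  = 1.0;  /* baseline: Cortex-A9             */
    double a15_per_clock = 1.8;  /* assumed A15 advantage per clock */

    /* Perfectly parallel work, all cores busy (rare on a tablet): */
    printf("parallel: A9 quad %.2f  vs  A15 dual %.2f\n",
           4 * 1.2 * a9_per_clock, 2 * 1.7 * a15_per_clock);

    /* Typical interactive work, one thread; idle cores add nothing: */
    printf("1-thread: A9 %.2f  vs  A15 %.2f\n",
           1.2 * a9_per_clock, 1.7 * a15_per_clock);
    return 0;
}

Even with every core loaded, the dual A15 comes out ahead under these assumptions (6.12 vs 4.80), and on single-threaded work it isn't close, which is the maze analogy in one sentence.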
Where are the idiots that keep saying this? Honestly, I'm growing tired.
Think about it: do you really think the quad-core in your tablet is faster than, say, a dual-core Core i5 in a laptop?
NO
Maybe this explains how a new processor architecture (A15), even as a dual-core, can be faster than a quad-core on the old architecture.
Good explanation, @404!
What I'm worried about is whether the processor can handle that screen. I wonder if it's possible to overclock it?

NVIDIA Tegra 4 vs Nexus 10 processor

They've unveiled it today
http://www.engadget.com/2013/01/06/nvidia-tegra-4-official/
and apparently it's much more powerful and faster than the Exynos in the Nexus 10, but I don't know much about this kind of tech. I'm probably finally going to buy the Nexus 10 this week if Samsung doesn't unveil a more powerful tablet, so I was wondering if this Tegra 4 processor is worth waiting for until it's implemented in a tablet.
May TEGRA 3 Rest in Peace ...
Sent from my GT-I9100 using Tapatalk 2
Yes, that thing is packing heat. Best case, the first device with a Tegra 4 will come out next Christmas. Unless they've been hiding something.
cuguy said:
Yes, that thing is packing heat. Best case, the first device with a Tegra 4 will come out next Christmas. Unless they've been hiding something.
It will be out somewhere between June and August, maybe...
It will not take that long...
Sent from my GT-I9100 using Tapatalk 2
I think March... mark my words.
Their browser test has the Nexus 10 running Chrome while the Tegra runs the AOSP browser. In my eyes that makes it a 100% unfair comparison.
Between bad experiences with Tegra 2 and 3 (Atrix/TF700) and their requirement that Tegra-optimized games not run on other vendors' SoCs for no real reason other than because they can, I can't even consider a mobile Nvidia device. All they're good for is keeping the more reputable chip makers on their toes.
yes it's nice
It would be interesting to see this with both devices running the AOSP browser! From my experience it is much faster than the current Chrome version (which is still version 18 on Android, compared to 23 on desktop). Maybe the Tegra 4 would be faster as well, but not by that much.
Everything on my N10 is extremely fast and fluid, so I wouldn't wait around for the first Tegra 4 devices. Plus it's a Nexus, so you know what you are buying!
Jotokun said:
Their browser test has the Nexus 10 running Chrome while the Tegra runs the AOSP browser. In my eyes that makes it a 100% unfair comparison.
Between bad experiences with Tegra 2 and 3 (Atrix/TF700) and their requirement that Tegra-optimized games not run on other vendors' SoCs for no real reason other than because they can, I can't even consider a mobile Nvidia device. All they're good for is keeping the more reputable chip makers on their toes.
Agreed, they're making an apples-and-pears comparison that was undoubtedly set up to show the new processor in a good light. It's only to be expected; it is a sales pitch, after all. It will no doubt be a faster chip, though.
Sent from my Nexus 10 using XDA Premium HD app
I would much rather see a couple of benchmark runs myself. A time comparison of a web browser is no way to test the power of a new chipset.
Still, I would expect Tegra 4 to be WAY WAY WAY more powerful than the Exynos 5250. Both devices use the A15 architecture, and Tegra 4 has twice as many CPU cores as we have; that alone is already a big boost in multithreaded apps. Then look at the GPU, where you can't compare the two at all except by end-result numbers: they are just far too different. We have 4 GPU cores; Tegra 4 has 72 GPU cores. But those cores are designed very differently and are not nearly as powerful per core; it all comes down to each company's definition of what a GPU "core" is. And then you have a smaller process node as well, which by itself already promises to use less power than the larger node the Exynos 5250 uses.
I would honestly expect the Tegra 4 chipset to completely destroy our tablet in terms of performance, but a much fairer comparison would be Tegra 4 against the Exynos 5 Quad; those two are actually designed to compete with each other.
If you want to compare Exynos and Tegra 4, then wait for the Exynos 5450 (quad A15), which should come with the Galaxy S4. The number of cores makes a difference here: the T4 is a quad, but early GL benchmarks show that the A6X and the Exynos 5250 have a better GPU.
First Tegra 4 Tablet running stock android 4.2:
http://www.theverge.com/2013/1/7/3845608/vizio-10-inch-tablet-combines-tegra-4-android-thin-body
Let's hope and pray Vizio managed to add microSD capability to it. Free of Nexus restraints, it should be doable, but since it is running stock JB, the storage would have to be mounted as USB (?). So far this Vizio 10" was the most exciting Android development out of CES. We have a few more press events scheduled in Vegas, and then it will be all over.
rashid11 said:
Let's hope and pray Vizio managed to add microSD capability to it. Free of Nexus restraints, it should be doable, but since it is running stock JB, the storage would have to be mounted as USB (?). So far this Vizio 10" was the most exciting Android development out of CES. We have a few more press events scheduled in Vegas, and then it will be all over.
Don't expect the Nexus advantage of up-to-date software or timely updates.
EniGmA1987 said:
I would much rather see a couple of benchmark runs myself. A time comparison of a web browser is no way to test the power of a new chipset.
Still, I would expect Tegra 4 to be WAY WAY WAY more powerful than the Exynos 5250. Both devices use the A15 architecture, and Tegra 4 has twice as many CPU cores as we have; that alone is already a big boost in multithreaded apps. Then look at the GPU, where you can't compare the two at all except by end-result numbers: they are just far too different. We have 4 GPU cores; Tegra 4 has 72 GPU cores. But those cores are designed very differently and are not nearly as powerful per core; it all comes down to each company's definition of what a GPU "core" is. And then you have a smaller process node as well, which by itself already promises to use less power than the larger node the Exynos 5250 uses.
I would honestly expect the Tegra 4 chipset to completely destroy our tablet in terms of performance, but a much fairer comparison would be Tegra 4 against the Exynos 5 Quad; those two are actually designed to compete with each other.
Look at the new iPhone: only 2 cores (on a different architecture), beating the higher-clocked dual-core Galaxy S3 in some disciplines.
This presentation is scientifically SO SO irrelevant, especially because they use different software. I LOL SO HARD at people thinking this is anywhere near comparable.
schnip said:
Look at the new iPhone: only 2 cores (on a different architecture), beating the higher-clocked dual-core Galaxy S3 in some disciplines.
This presentation is scientifically SO SO irrelevant, especially because they use different software. I LOL SO HARD at people thinking this is anywhere near comparable.
The new iPhone 5 doesn't use the same ARM architecture as the S3, though; it is a custom design. So those two can be compared against each other just fine to see which architecture is better, and if a lower-clocked CPU gets better scores, we know it is a superior design. This is the basis of all benchmarking: if we were only allowed to compare identical architectures, we wouldn't learn anything.
Tegra 4 uses a (probably slightly modified) A15 core, and the Exynos 5xxx uses a fairly stock A15 core. So a higher-clocked A15 should beat a lower-clocked A15 in a direct comparison no matter what, and when you throw 2 additional cores on top, it should always win multithreaded benchmarks too. Seems like common sense to me.
The main difference will be on the graphics side, where Nvidia has its own GPU design, compared to Samsung's use of Mali GPUs.
You can still compare them just fine; a browser comparison just needs both devices on the same browser. In this PR release, Nvidia skewed the results like all companies do, so we can't really see the difference between the two from those pictures and will have to wait for third-party review sites to do proper testing. Still, we can estimate performance plenty well, since this tablet already gives us a baseline for the architecture.
"Tegra 4 more powerful than Nexus 10"... well duh! It's a new chip just unveiled by nvidia that won't show up in any on sale devices for at least a couple of months. Tablet and smartphone tech is moving very quickly at the moment, nvidia will hold the android performance crown for a couple of months and then someone (probably samsung or qualcomm) will come along with something even more powerful. Such is the nature of the tablet/smartphone market. People that hold off on buying because there is something better on the horizon will be waiting forever because there will always be a better device just a few months down the line!
EniGmA1987 said:
The new iPhone 5 doesn't use the same ARM architecture as the S3, though; it is a custom design. So those two can be compared against each other just fine to see which architecture is better, and if a lower-clocked CPU gets better scores, we know it is a superior design. This is the basis of all benchmarking: if we were only allowed to compare identical architectures, we wouldn't learn anything.
Tegra 4 uses a (probably slightly modified) A15 core, and the Exynos 5xxx uses a fairly stock A15 core. So a higher-clocked A15 should beat a lower-clocked A15 in a direct comparison no matter what, and when you throw 2 additional cores on top, it should always win multithreaded benchmarks too. Seems like common sense to me.
The main difference will be on the graphics side, where Nvidia has its own GPU design, compared to Samsung's use of Mali GPUs.
You can still compare them just fine; a browser comparison just needs both devices on the same browser. In this PR release, Nvidia skewed the results like all companies do, so we can't really see the difference between the two from those pictures and will have to wait for third-party review sites to do proper testing. Still, we can estimate performance plenty well, since this tablet already gives us a baseline for the architecture.
That was kind of his point.
I don't think anyone is denying that the Tegra will be faster. What's being disputed here is just how much faster. Personally, I don't think it'll be enough to notice in everyday use. Twice the cores does not automatically make a faster CPU; you need software that can properly take advantage of them, and even then it is not a huge plus in everyday tasks. Also, in the past Nvidia has made pretty crappy chips due to compromise, a good example being how the Tegra 2 lacked NEON support. The only concrete advantages I see are more cores and a higher clock rate.
Based on the hype-to-performance ratio of both Tegra 2 and 3, I wouldn't have high hopes until I see legit benchmark results.
What does seem promising, though, is that they are making more significant changes than from T2 to T3, such as dual-channel memory (finally, after 1-2 years of every other SoC having it) and different GPU cores.
Still, the GPU has always been Tegra's weakest point, so I still don't think it can beat an overclocked T-604 by much, even though this time around they will not be the first to debut a next-gen SoC. Given the A15 architecture, they can't really screw up the CPU even if they wanted to, so that should be significantly faster than the Exynos 5 Dual.
I've also just read an article on AnandTech about power consumption: the SoC in the Nexus 10 consumes several times as much power as other tablet chipsets, which makes me wonder how Nvidia plans to solve the battery-life issue with twice as many cores and a (seemingly) beefier GPU, not even mentioning implementation in phones...
freshlysqueezed said:
First Tegra 4 Tablet running stock android 4.2:
http://www.theverge.com/2013/1/7/3845608/vizio-10-inch-tablet-combines-tegra-4-android-thin-body
Apparently this is the tablet that comes to take the Nexus 10's spot: a Vizio 10-inch tablet with Tegra 4, a 2560 x 1600 resolution, 32GB of storage and Android 4.2. It should be coming out in Q1 2013. I think this one makes me want to wait to hear some more about it before I buy the Nexus 10, although to be honest the brand is a bit of a letdown for me.
Edit: for the 10-inch model, key specs (aside from Tegra 4) include a 2,560 x 1,600 display, 32GB of on-board memory, NFC and dual 5MP / 1.3MP cameras.
http://www.engadget.com/2013/01/07/vizio-10-inch-tegra-4-tablet-hands-on/

GLBenchmark HTC ONE only 34 FPS @ Egypt HD 1080P Offscreen

The HTC One only scores 34 FPS in GLBenchmark's Egypt HD 1080p offscreen test, much lower than the Samsung SHV-E300S, which scores 41.3 FPS in the same test, both using the Snapdragon 600. IIRC the HTC One uses LPDDR2 RAM, so are we seeing a lack of bandwidth compared to the Samsung, which may use the LPDDR3 that the S600 supports?
http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro25&D=HTC+One
How do you know the HTC One uses LPDDR2 memory?
kultus said:
How do you know the HTC One uses LPDDR2 memory?
http://www.htc.com/uk/smartphones/htc-one/#specs
http://www.anandtech.com/show/6754/hands-on-with-the-htc-one-formerly-m7/2
Turbotab said:
The HTC One only scores 34 FPS in GLBenchmark's Egypt HD 1080p offscreen test, much lower than the Samsung SHV-E300S, which scores 41.3 FPS in the same test, both using the Snapdragon 600. IIRC the HTC One uses LPDDR2 RAM, so are we seeing a lack of bandwidth compared to the Samsung, which may use the LPDDR3 that the S600 supports?
http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro25&D=HTC+One
My first question would be: how did they even get a benchmark of the SHV-E300?
Xistance said:
My first question would be: how did they even get a benchmark of the SHV-E300?
How do any results appear on GLBenchmark?
I believe with GLBenchmark, if you don't register/log in before running the test, it automatically uploads the result to their server for public viewing; so maybe it was done intentionally, or somebody forgot to log in?
fp581 said:
He is spamming all around the HTC One forum; just look at his posts. Please ban him from posting in any HTC forum ever again.
He probably works for Sony, Nokia or Samsung.
Who are you talking about?
Sorry, wrong person; I'll delete that last one.
But I would love pics of that benchmark for proof.
fp581 said:
Sorry, wrong person; I'll delete that last one.
But I would love pics of that benchmark for proof.
Dude, I was going to go atomic; I admit it, I have a terrible temper.
I believe the benchmark was run by a German Android site called Android Next. There is a video on YouTube; the GLBenchmark starts at 2:22.
http://www.youtube.com/watch?v=Wl1dmNhhcXs&list=UUan0vBtcwISsThTNo2uZxSQ&index=1
Thanks, Turbo, for advancing my knowledge... what a shame they didn't choose LPDDR3, but I don't think it's a big issue these days.
Just to temper this news, we must remember that the HTC One is running at 1.7GHz, while the Samsung device is running at 1.9.
Although 200MHz does not seem like much, it could possibly account for the 7 FPS difference when you factor in the difference in UI.
If the Samsung device really does have DDR3 RAM, and the difference (after accounting for clock speed) is 2-3 FPS, I can understand why HTC opted not to include it. It was most likely not worth the extra cost.
Maedhros said:
Just to temper this news, we must remember that the HTC One is running at 1.7GHz, while the Samsung device is running at 1.9.
Although 200MHz does not seem like much, it could possibly account for the 7 FPS difference when you factor in the difference in UI.
If the Samsung device really does have DDR3 RAM, and the difference (after accounting for clock speed) is 2-3 FPS, I can understand why HTC opted not to include it. It was most likely not worth the extra cost.
GLBenchmark is a test of GPU performance and isn't really changed by CPU clock speed, but it is affected by bandwidth.
As a test, I downclocked my Nexus 7 from an overclocked 1.6GHz to just 1.15GHz and ran GLBench: 10 FPS. I then ran the test again with the CPU at 1.6GHz; the result, 10 FPS again.
I've benched the N7 with both CPU and GPU overclocked to the same level as the Transformer Infinity, which gets 13 FPS, but I always get 10 FPS. The reason: my N7 has lower memory bandwidth than the Transformer Infinity because it uses slower RAM. That is a difference of 30% in FPS, just from lower bandwidth.
I read that LPDDR3 starts at 800MHz, or 12.8 GB/s in a dual-channel configuration, whereas LPDDR2 maxes out at 533MHz, or 8.5 GB/s in a dual-channel configuration.
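Those two figures check out under the usual peak-bandwidth formula (a sketch; assumes a 32-bit channel, double data rate, and two channels):
Code:
% peak BW = transfers/s x bytes per transfer x channels
% LPDDR2 @ 533 MHz clock -> 1066 MT/s; LPDDR3 @ 800 MHz clock -> 1600 MT/s
\[
\mathrm{BW_{LPDDR2}} = 1066\times10^{6} \times 4\,\mathrm{B} \times 2 \approx 8.5\ \mathrm{GB/s},
\qquad
\mathrm{BW_{LPDDR3}} = 1600\times10^{6} \times 4\,\mathrm{B} \times 2 \approx 12.8\ \mathrm{GB/s}
\]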
Turbotab said:
GLBenchmark is a test of GPU performance and isn't really changed by CPU clock speed, but it is affected by bandwidth.
As a test, I downclocked my Nexus 7 from an overclocked 1.6GHz to just 1.15GHz and ran GLBench: 10 FPS. I then ran the test again with the CPU at 1.6GHz; the result, 10 FPS again.
I've benched the N7 with both CPU and GPU overclocked to the same level as the Transformer Infinity, which gets 13 FPS, but I always get 10 FPS. The reason: my N7 has lower memory bandwidth than the Transformer Infinity because it uses slower RAM. That is a difference of 30% in FPS, just from lower bandwidth.
I read that LPDDR3 starts at 800MHz, or 12.8 GB/s in a dual-channel configuration, whereas LPDDR2 maxes out at 533MHz, or 8.5 GB/s in a dual-channel configuration.
In that case the results are quite disappointing.
All these fantastic new phones, and so much disappointment.
Sent from my GT-I9300 using xda premium
Tomatoes8 said:
They could have used faster memory for the same price if they didn't cut off Samsung as a supplier. Makes you wonder where their priorities lie. Making the best products possible or just going with the motions.
No one is going to take anything you say here seriously, as you've managed to have 2 threads closed in the last 30 minutes. One of those inane posts involved you saying that HTC is going to be paying, according to your genius calculation, 20% of its profits to Apple (I forget what insanely unintelligent reason you gave). Yeah, because being able to completely migrate data from one completely different phone to another is such a bad idea for a company that wants to push its product.
So, what is the per-unit cost of the RAM HTC is paying for now vs. what they could have gotten from Samsung? Exactly: you have no idea. I also didn't hear anything about HTC "cutting off" Samsung as a supplier, but maybe I missed it, so I googled "htc cut off samsung supplier" and found 2 links...
http://tech2.in.com/news/smartphones/following-apple-htc-cuts-component-orders-from-samsung/505402
http://www.digitimes.com/news/a20121009PD213.html
I'm not sure if you have the capability of reading or not, but I'll spoon feed you this information, ok hunny? I've taken the info from the 1st link, since there is more there.
After Apple Inc slashed its orders for memory chips for its new iPhone from major supplier and competitor, Samsung Electronics Co Ltd, HTC too has reportedly cut down on its smartphone component orders from the South Korean company.
So, Apple cut down on memory orders. You know, Apple: they're the ones who make the iPhone? Have a logo of an apple on their products? Steve Jobs was the CEO before he died. Anyway, I'll continue...
According to a report by DigiTimes, HTC has reduced its orders from Samsung, and instead opted to order CMOS image sensors from OmniVision and Sony. The company has also chosen to move part of its AMOLED panel orders to AU Optronics, DigiTimes reported citing ‘sources’.
Notice it said that HTC reduced its orders from Samsung, specifically for image sensors (that's for the camera, if you didn't know) and screens. You know, the thing on the front of your phone that you touch to make it do things? You know what I mean, right? I encourage you to read this link (or possibly have someone read it to you)...
http://dictionary.reference.com/browse/reduce
The point is that reducing isn't the same as cutting off. Cutting off would require HTC not ordering ANYTHING from Samsung. Guess what? The One doesn't use an OmniVision CMOS sensor (don't forget, that's what the camera uses) or an AMOLED screen (the bright part of your phone that shows you info).
Also, this is a far better designed phone, especially in regards to hardware, than anything Samsung has ever produced. I went back to my EVO 4G LTE, mainly because I couldn't stand the terrible build quality of the Note 2; it just feels like a cheap toy. And, IMO, Sense is far better than TouchWiz. Samsung may have the market right now because of the Galaxy line, but that doesn't mean HTC is out of the game by any means.
Seriously, attempt to use just a bit of intelligence before opening your mouth and spewing diarrhea throughout the One forums. As the saying goes: "it's better to keep your mouth shut and have people think you're an idiot than to open your mouth and prove it". Unfortunately for you, it's too late.
I really think Turbo was too hasty in opening a new thread for this, as we've been discussing it in the mega thread.
http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro25&D=HTC+One
It scores 34 FPS in Egypt HD 1080p offscreen, while the leaked Samsung S600 device scores 41 FPS, which is perfectly in line with Qualcomm's promised speed (3x Adreno 225).
Here is a video of what I suspect is the source of the benchmark, because we had no benchmark before it:
http://www.youtube.com/watch?v=Wl1dmNhhcXs
Notice how the battery is almost at its end (the HTC bar at this stage means it's in the last 25%); also notice the activity in the notification area.
More importantly, the poster ran more than a few full benchmarks, like Quadrant, before running GLBenchmark. That alone is enough to lower the score, especially since the Adreno 320 was known to throttle in the Nexus 4.
I think benchmark scores should not be relied on at such events, especially with hundreds of hands messing with the device. We learned from the One X launch, where videos popped up showing horrible performance from the One X that eventually turned out to be very far from the final device in your hands.
Finally, the One X and Nexus 7 run at the same GPU clock, but the first is DDR2 and the second is DDR3, and they score the same in GLBenchmark.
In other words it's worrying, but it's best to wait for proper testers like Anand.
Thread cleaned
...from some serious trolling. There should be no trace of him for some time.
But...
I just wonder why a Samsung phone uses high-end parts from Qualcomm instead of Samsung's own processors. But I am not into Samsung devices so far, so I won't judge this.
Gz
Eddi
Here's a second video also showing the Egypt offscreen bench at 34 FPS.
https://www.youtube.com/watch?v=wijp79uCwFg
Skip to 3:30
Maedhros said:
Just to temper this news, we must remember that the HTC One is running at 1.7GHz, while the Samsung device is running at 1.9.
Although 200MHz does not seem like much, it could possibly account for the 7 FPS difference when you factor in the difference in UI.
If the Samsung device really does have DDR3 RAM, and the difference (after accounting for clock speed) is 2-3 FPS, I can understand why HTC opted not to include it. It was most likely not worth the extra cost.
So you're saying that 200MHz on the CPU can account for 7 FPS in a GPU test?
Following what you said, the Nexus 4 should have scored 27 FPS, since it has 200MHz less...
But no, it scored 33.7... only 0.3 FPS less than the One!
And you know why? First, both use the same GPU (and that's what counts in a graphics test), and second, HTC phones are always slower due to Sense!
So stop *****ing and realize that the One is no god phone.
The Samsung device is running 4.2.1.

Octacore?! Only 4 cores work... why?

Hi everyone!
I don't know if anyone has noticed yet, but someone using the Honor 4X (which uses the same chipset) has asked this question.
Our chipset is indeed an octa-core, but no matter what app you use or what benchmark you run, only 4 cores work. Why is this?
Then someone says they only activate for heavy stuff that demands a lot of CPU work... that's a lie: when you run a benchmark or a heavy-load app that should stress the CPU to its limit, it does indeed show 8 cores, but 4 of them are offline/asleep and don't get a speed reading. I know it probably doesn't make much of a difference, but we paid for an 8-core and only get 4 of them to work.
Does anyone know how to get them to work?
Thanks in advance
siul_lasc said:
Hi everyone!
I don't know if anyone has noticed yet, but someone using the Honor 4X (which uses the same chipset) asked this question.
Our chipset is indeed octa-core, but no matter what app you use, whatever benchmark you run, whatever you do, only 4 cores work. Why is this?
Then someone says they only activate under heavy loads that demand a lot of work from the CPU... that's a lie... when you run any kind of benchmark or heavy-load app that should push the CPU to its limit, it does show 8 cores, but 4 of them are offline/asleep and don't report a clock speed. I know it probably doesn't make much of a difference, but we paid for 8 cores and only get 4 of them to work.
Does anyone know how to get them to work?
Thanks in advance
Click to expand...
Click to collapse
>P8 Lite General
>>Honor 4X
Facepalm, but OK.
The other cores sleep to save power (a stock EMUI feature) and wake up only when necessary, such as while multitasking. Any other questions?
Rerikh is right; however, the sleeping cores should show up in benchmarking apps as offline.
Also, there are differences between versions of the phone.
There's one called the ALE-L04, there's our ALE-L21, and I've heard about an ALE-L23 too.
The ALE-L04 has a different chipset, a Snapdragon 615. That chipset pairs a quad-core Cortex-A53 cluster clocked at 1.5 GHz with a second quad-core Cortex-A53 cluster clocked at only 1.0 GHz. This model also has a different GPU, the Adreno 405.
The ALE-L21/L23 has the HiSilicon Kirin 620 chipset, which comes with an octa-core Cortex-A53 clocked at 1.2 GHz and a Mali-450MP4 GPU.
You can see the comparison between the two models at GSMArena.
The kernel disables/enables the whole secondary CPU cluster depending on the load of the system.
I'm using the PACPerformance application, and it shows all 8 cores correctly for me; however, I own an ALE-L21 model.
With this app you should also be able to disable cores independently, which allows greater battery life in exchange for slightly decreased performance, but that isn't working, because the built-in governor keeps turning the offline cores back on.
I used this program on my previous S3 Mini, and it worked great.
If you're interested, you can get the app from here: https://goo.gl/LCpxrA (the program is stored in my Google Drive inside a ZIP archive)
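For anyone who'd rather not trust one particular monitoring app: a minimal sketch (assuming Python is available on the device, e.g. via adb or Termux, and that the standard Linux hotplug/cpufreq sysfs nodes are readable; some firmwares restrict them without root) that lists which cores are online and the current clock of each online core.

```python
# Minimal core-status sketch using the standard Linux hotplug/cpufreq
# sysfs nodes (assumes Python on the device, e.g. via adb or Termux;
# some firmwares restrict these nodes unless you are root).
import os

CPU_ROOT = "/sys/devices/system/cpu"

def read_node(path):
    """Return a sysfs node's contents, or None if it can't be read."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

for name in sorted(os.listdir(CPU_ROOT)):
    if not (name.startswith("cpu") and name[3:].isdigit()):
        continue  # skip cpufreq/, cpuidle/, possible, etc.
    base = os.path.join(CPU_ROOT, name)
    # cpu0 usually has no 'online' node because it can't be hotplugged,
    # so treat a missing node as online.
    online = read_node(os.path.join(base, "online")) or "1"
    if online == "1":
        khz = read_node(os.path.join(base, "cpufreq", "scaling_cur_freq"))
        print(f"{name}: online at {khz} kHz" if khz else f"{name}: online")
    else:
        # Hotplugged-out cores report no frequency at all, which is
        # exactly the "offline/asleep, no speed reading" people see.
        print(f"{name}: offline")
```

If the stock EMUI governor works as Rerikh described, cores should flip from offline to online in this listing when you put the phone under load.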
Thanks for the explanation, daviddosa...
As for the other guy, just keep quiet if you don't want to answer my question. What's the problem with referring to the Honor 4X? I don't give a f&#? what you say...
Has anyone tried compiling the kernel from source? I'm trying, but I'm stuck on a weird error. I think the Huawei kernel sources for our phone are missing vendor files that are present in the P8 kernel sources.
About the error: at some point the build can't find init/mounts.o. I don't know if this is related to the missing vendor files.
Cheers
arviit said:
Has anyone tried compiling the kernel from source? I'm trying, but I'm stuck on a weird error. I think the Huawei kernel sources for our phone are missing vendor files that are present in the P8 kernel sources.
About the error: at some point the build can't find init/mounts.o. I don't know if this is related to the missing vendor files.
Cheers
Click to expand...
Click to collapse
I'd love to be doing that, but unfortunately I don't know anything about code... I'd love to learn at this point... Huawei is the future... In less than a year Huawei will be the next Apple... believe me...
daviddosa said:
Rerikh is right; however, the sleeping cores should show up in benchmarking apps as offline.
Also, there are differences between versions of the phone.
There's one called the ALE-L04, there's our ALE-L21, and I've heard about an ALE-L23 too.
The ALE-L04 has a different chipset, a Snapdragon 615. That chipset pairs a quad-core Cortex-A53 cluster clocked at 1.5 GHz with a second quad-core Cortex-A53 cluster clocked at only 1.0 GHz. This model also has a different GPU, the Adreno 405.
The ALE-L21/L23 has the HiSilicon Kirin 620 chipset, which comes with an octa-core Cortex-A53 clocked at 1.2 GHz and a Mali-450MP4 GPU.
You can see the comparison between the two models at GSMArena.
The kernel disables/enables the whole secondary CPU cluster depending on the load of the system.
I'm using the PACPerformance application, and it shows all 8 cores correctly for me; however, I own an ALE-L21 model.
With this app you should also be able to disable cores independently, which allows greater battery life in exchange for slightly decreased performance, but that isn't working, because the built-in governor keeps turning the offline cores back on.
I used this program on my previous S3 Mini, and it worked great.
If you're interested, you can get the app from here: https://goo.gl/LCpxrA (the program is stored in my Google Drive inside a ZIP archive)
Click to expand...
Click to collapse
The ALE-L23 is the one that got shipped to Mexico; I got mine with Telcel. I don't know if it's the same for the rest of Latin America.
Well, this phone is running with only 4 cores... every CPU program on the market says so. With 100% load for more than 10 minutes, the other cores never turn on. That's it.
reindjura said:
Well, this phone is running with only 4 cores... every CPU program on the market says so. With 100% load for more than 10 minutes, the other cores never turn on. That's it.
Click to expand...
Click to collapse
You have to find a CPU app that supports octa-core chips, CPU Stats for example.
http://forum.xda-developers.com/p8lite/help/cores-doesn-t-t3273248
It's mainly a problem of the apps not being able to see all 8 cores!
siul_lasc said:
Thanks for the explanation, daviddosa...
As for the other guy, just keep quiet if you don't want to answer my question. What's the problem with referring to the Honor 4X? I don't give a f&#? what you say...
Click to expand...
Click to collapse
Hahah XD, you're so funny. Why do you think the forum is divided into topics? By the same logic I could ask about a Samsung in a Nexus topic. Not the same, huh? Please follow the rules next time.
Time_Bandit said:
http://forum.xda-developers.com/p8lite/help/cores-doesn-t-t3273248
It's mainly a problem of the apps not being able to see all 8 cores!
Click to expand...
Click to collapse
Thanks, good app, thank you.
Download CPU Stats from the Play Store and you will see 8 cores.
Hello, like you I have only seen 4 cores working. It's like this because cores 4 to 7 don't have governors and other system files the way cores 0 to 3 do.
Because of this our devices have 8 cores but only 4 can run. It's really bad.
So it's really only 4 usable cores.
dkonect said:
Hello, like you I have only seen 4 cores working. It's like this because cores 4 to 7 don't have governors and other system files the way cores 0 to 3 do.
Because of this our devices have 8 cores but only 4 can run. It's really bad.
So it's really only 4 usable cores.
Click to expand...
Click to collapse
No, just use an appropriate app to see them.
I saw all 8 cores working when testing CPU Stats + Asphalt 8.
dkonect said:
Hello, like you I have only seen 4 cores working. It's like this because cores 4 to 7 don't have governors and other system files the way cores 0 to 3 do.
Because of this our devices have 8 cores but only 4 can run. It's really bad.
So it's really only 4 usable cores.
Click to expand...
Click to collapse
All 8 cores running
[attached screenshot]
Octa cores
As it happens, one good guy is working on a custom kernel for this exotic chipset.
It's in the testing stages now.
To enable all the cores all the time, you only need to edit one entry in build.prop.
dkonect said:
To enable all the cores all the time
Click to expand...
Click to collapse
Why should all 8 cores run when they aren't needed at all?! That's an absolute waste of energy!
When it comes to high load, all 8 cores are activated!
I got this phone a week ago. As far as I can tell, whether cores 4-8 run depends on the battery settings:
normal is 8-core
intelligent is 4-core
I checked with a video + CPU Stats; it lagged with 4 cores.
All cores working here
Sent from my ALE-UL00 using Tapatalk
