[Review WIP] Testing Threadripper 1950X & Intel Core i9-7900X - PC Hardware Portal Article Discussion

As mentioned in the "Behind the Scenes" update, we now have CPU samples from both AMD and Intel for some high-end desktop testing.
If there is something you want tested, please add it here. If we don't include it in the review, we'll note that information here instead.
My plan of attack on both is as follows:
1) Stock CPU behavior, with RAM at its XMP settings. This needs to work reliably, since it is our baseline.
2) Overclocking behavior. The Threadripper 1950X will be tested with a different all-in-one cooler due to the size of the CPU.
I have a cooler that already fits the 7900X, though it is a 240 mm unit only.
CPU tests will use the Phoronix Test Suite; Android build times are always part of our testing.
We're also adding building Android Studio from source, but I will do that last.
I'm still struggling to get temperatures monitored while these tests run; I may research this further.

Is this review still coming? I'm interested in AOSP build times.

overclocking

Hi, is there any way to overclock the processor on the Xperia?
By overclocking, do you mean taking the Xperia beyond its theoretical headline limit of 528 MHz?
I have looked at this, but the new processor does not seem to be supported by many of the commercial solutions available.
However, SpeedBooster, posted in the Turbo Speed X1 thread, allows you to set a higher processor priority for the programs you choose, giving them a larger slice of the processor pie. It's not overclocking as such, but it does make the processor work faster for the programs you choose.
I have used it for a few days with no serious problems. However, having the processor run faster will drain the battery faster; I have noticed this, but it is within acceptable limits for my needs.
I prefer the extra performance over the extra mileage.
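(Side note, not from the original post: priority boosters like this generally just ask the OS scheduler to favour the chosen process rather than changing the clock speed. On a Unix-like system the idea looks roughly like the sketch below; the Xperia itself runs a different OS, so treat this purely as an illustration of the concept.)
Code:
/* Hypothetical illustration: give the current process a higher scheduling
   priority so the scheduler grants it a larger share of CPU time.
   POSIX-style C; negative nice values usually require elevated privileges. */
#include <stdio.h>
#include <sys/resource.h>   /* setpriority(), PRIO_PROCESS */
#include <unistd.h>         /* getpid() */

int main(void)
{
    /* Lower nice value = higher scheduling priority. */
    if (setpriority(PRIO_PROCESS, getpid(), -5) != 0) {
        perror("setpriority");
        return 1;
    }
    printf("Process %d boosted to nice -5\n", (int)getpid());
    return 0;
}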
Can you do a benchmark and post the results?
Would help a lot to start making a comparison between devices.
Is this what you're looking for:

A Working JIT compiler for the Epic is needed very badly

Hello,
The JIT compiler for the Epic on Froyo is just plain horrible. It is not optimized for the Samsung Epic at all. That is why Quadrant scores were so low on 2.2: not because the benchmark was biased, but because the Hummingbird processor is unable to take advantage of the JIT compiler on Froyo. If Quadrant really was biased, then how did the Epic get the highest Quadrant scores on 2.1?
When I had the EVO, I was able to play Flash videos easily and without low frame rates. When I play them on the Epic, I get very low frame rates compared to the EVO, even though the hardware on the Epic is better. I hope this problem can be fixed soon.
Thanks
Flash videos work fine for me. Quadrant scores can be purposely inflated through different methods; that's why it's unreliable.
Sent from my Epic 4g.
ZOMG WITH THE BENCHMARKS!
Snapdragons have a cheat sheet of floating-point optimizations that we don't, and it makes them "LOOK" a lot faster in a benchmark.
Our 2.2 just came out less than 23 hours ago. Chill, dude. JIT works for us; it's just not as large a boost as it is for other chipsets.
It depends on the benchmark. The Epic does better than the EVO in Quadrant and Neocore/3D graphics benchmarks because OpenGL already offers direct access to the phone's GPU (hardware). Apparently the NDK allows ARMv7-A optimizations for floating-point instructions, at which the Snapdragon obviously excels (source from Google below). This has been discussed many times, and I don't know much about the technicality of any of it beyond what I have read other people write.
Anyway, I don't know how Flash works on Android, but one would hope it was written with the NDK, which lets it work closer to the hardware. Who knows, though; Adobe isn't known for making Flash highly efficient.
The NDK provides:
•A set of tools and build files used to generate native code libraries from C and C++ sources
•A way to embed the corresponding native libraries into an application package file (.apk) that can be deployed on Android devices
•A set of native system headers and libraries that will be supported in all future versions of the Android platform, starting from Android 1.5. Applications that use native activities must be run on Android 2.3 or later.
•Documentation, samples, and tutorials
The latest release of the NDK supports these ARM instruction sets:
•ARMv5TE (including Thumb-1 instructions)
•ARMv7-A (including Thumb-2 and VFPv3-D16 instructions, with optional support for NEON/VFPv3-D32 instructions)
Future releases of the NDK will also support:
•x86 instructions (see CPU-ARCH-ABIS.HTML for more information)
ARMv5TE machine code will run on all ARM-based Android devices. ARMv7-A will run only on devices such as the Verizon Droid or Google Nexus One that have a compatible CPU. The main difference between the two instruction sets is that ARMv7-A supports hardware FPU, Thumb-2, and NEON instructions. You can target either or both of the instruction sets — ARMv5TE is the default, but switching to ARMv7-A is as easy as adding a single line to the application's Application.mk file, without needing to change anything else in the file. You can also build for both architectures at the same time and have everything stored in the final .apk. Complete information is provided in the CPU-ARCH-ABIS.HTML in the NDK package.
^from android.com
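(For reference, and as a sketch of my own rather than part of the quoted docs: the "single line" in question is the APP_ABI setting in Application.mk.)
Code:
# Application.mk (hypothetical minimal example)
# Default is armeabi (ARMv5TE) when APP_ABI is omitted.
APP_ABI := armeabi-v7a
# Or build both instruction sets and ship both in the .apk:
# APP_ABI := armeabi armeabi-v7a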

Dual-core processor processing

Hi,
I was wondering: are the two CPUs working together simultaneously, or is it just one? I'm using FLEXREAPER X10 ICS 4.0.3. Sometimes I get screen glitches when my tab is trying to sleep and I touch the screen. Also, when I run a benchmark it only reports CPU1's processing speed, and so on. And when I'm browsing the Play Store the screen animation lags a bit. Can someone enlighten me, or is there an app that can force both CPUs (cores) to work together all the time?
Yes, both cores are enabled at all times. But no, you cannot make an application use both cores unless the application was designed to do so.
FLEXREAPER X10 ICS 4.0.3 is based on a leaked ICS ROM, not a stable ROM, so it has some problems.
Your benchmark is correct.
There are NOT two CPUs. There is only one CPU, with two cores. It doesn't process two applications at once, but it CAN process two threads of the same application at the same time. Think of it like this: two CPUs would be two people writing on different pieces of paper. A single CPU with two cores would be one person writing with both hands at the same time. He can only write on the same piece of paper, but it's faster than it would be if he were writing with only one hand.
Note: this is not related to multitasking. Multitasking works by processing a little bit of each app at a time, so although it may seem that both are running at the same time, they are not.
Most apps are not designed to work with threads, though, so there's your (actually, our) problem. But this is not an A500 problem; it applies to any multi-core-processor-based device out there (including desktops).
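(To make the "apps must be written for threads" point concrete, here is a minimal C/pthreads sketch of my own, not taken from any Android code: two worker threads that the kernel's scheduler is free to run on separate cores at the same time.)
Code:
/* Build with: gcc -pthread sum.c */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static long data[N];

struct range { int start, end; long sum; };

/* Each worker sums half of the array; on a dual-core CPU the scheduler
   can run both workers simultaneously, one per core. */
static void *worker(void *arg)
{
    struct range *r = arg;
    for (int i = r->start; i < r->end; i++)
        r->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = i;

    struct range a = { 0, N / 2, 0 };
    struct range b = { N / 2, N, 0 };
    pthread_t ta, tb;

    pthread_create(&ta, NULL, worker, &a);
    pthread_create(&tb, NULL, worker, &b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);

    printf("total = %ld\n", a.sum + b.sum);
    return 0;
}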
danc135 said:
There are NOT two CPUs. There is only one CPU, with two cores.
Essentially true, but...
danc135 said:
It doesn't process two applications at once
False. Two cores are just two CPUs on the same die.
Thanks for the responses, guys... I'm getting a bit confused by this "multi-core processor" idea. I was expecting it to be fast, with no lag when browsing apps in my library, switching applications, or even browsing the Play Store. So is it correct to say that a multi-core processor is a bit of a waste if an app can't use all of its cores? And also if the UI of the OS can't use all cores at the same time?
Dual Core, Dual CPU....
Not entirely, because if the kernel is capable of multi-threading, then it can use one core to run services while another runs the main application. The UI is just another application running on top of the kernel...
The only difference between a dual-core Intel CPU and a dual-core Tegra 2 is the instruction set and basic capabilities; otherwise they can be thought of as essentially the same animal. The kernel, which is the core of the OS, handles the multitasking, but Android has limited multitasking capabilities for applications. Even so, services that run in the background are less of a hindrance on a dual-core CPU than on a single-core one, and more and more applications are being written to take advantage of multiple cores.
Just have a bunch of widgets running on your UI and you are looking at multitasking and multi-threading, both of which work better on multi-core processors.
A multi-core CPU is nothing more than multiple processors stacked on one die. They thread and load-balance through software, and applications MUST BE AWARE of multi-core CPUs to take advantage of both cores.
A multi-processor computer has an extra chip on the main board that balances the load in hardware, so it does not add overhead on the processors, whereas a dual multi-core chip carries a much higher load-balancing overhead.
Many people confuse the two, largely because of how companies market multi-core devices.
So an application that cannot thread itself will run on just one of the cores of a multi-core chip, while a threaded app can... well, guess.
A dual-processor computer can run a non-thread-aware app or program on two cores.
It's quite simply complicated...
You can throw all the hardware you want at a system. In the end, if the software sucks (not multi-threaded, poorly optimized, bad at resource management, etc.), it's still going to perform badly.
Dual core doesn't mean it can run one application at twice the speed; it means it can run two applications at full speed, given that they're not threaded. Android is largely meant to run one application in the foreground, and since they can't magically make every application multi-threaded, you won't see the benefits of multiple cores as much as you would on a more traditional platform.
Also, a dual-core Tegra 2 is good, but only in comparison to other ARM processors (and even then, it's starting to show its age). It's going to perform poorly compared to a full x86 computer, even an older one.
netham45 said:
You can throw all the hardware you want at a system. In the end, if the software sucks (not multi-threaded, poorly optimized, bad at resource management, etc.), it's still going to perform badly.
Dual core doesn't mean it can run one application at twice the speed; it means it can run two applications at full speed, given that they're not threaded. Android is largely meant to run one application in the foreground, and since they can't magically make every application multi-threaded, you won't see the benefits of multiple cores as much as you would on a more traditional platform.
Also, a dual-core Tegra 2 is good, but only in comparison to other ARM processors (and even then, it's starting to show its age). It's going to perform poorly compared to a full x86 computer, even an older one.
This is so true, with the exception of a TRUE server dual- or quad-processor computer, where a special on-board chip threads application calls to balance the load for non-threaded programs and games. My first dual-processor computer was an AMD MP3000, back when dual-CPU computers started to come within user price ranges. Most applications/programs did not multi-thread.
And yes, as far as raw speed and performance go, you will not gain anything from this; you will only feel less lag when running several programs at once. A 2.8 GHz dual-processor computer still runs at 2.8 GHz, not double that.
erica_renee said:
With the exception of a TRUE server dual- or quad-processor computer, where a special on-board chip threads application calls to balance the load for non-threaded programs and games.
Actually, this is incorrect. All such decisions are left to the OS's own scheduler, for multiple reasons: the CPU cannot know what kinds of tasks it is running, what should be given priority under which conditions, and so on. On a desktop PC, for example, interactive, user-facing and in-focus applications and tasks are usually given more priority than background tasks, whereas on a server one either gives all tasks similar priority or handles task priorities based on task grouping. Not to mention real-time operating systems, which have entirely different requirements altogether.
If it were left to the CPU, the performance gains would be terribly limited and could not be adjusted for different kinds of tasks, let alone different operating systems.
(Not that anyone cares, I just thought to pop in and rant a little...)
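(A small illustration of "the OS, not the CPU, decides where things run": on Linux a process can even ask the scheduler to restrict it to particular cores. A sketch of my own, Linux-specific.)
Code:
#define _GNU_SOURCE
#include <sched.h>      /* cpu_set_t, CPU_ZERO, CPU_SET, sched_setaffinity() */
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                       /* allow core 0 only */

    /* pid 0 = the calling process; the kernel's scheduler enforces the mask. */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("This process is now pinned to core 0 by the OS scheduler\n");
    return 0;
}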
Self correction
I said a multi-core processor only runs threads from the same process. That is wrong (thanks to my Computer Architecture professor for misleading me). It can run multiple threads from different processes, which constitutes true parallel processing. It's just better to stick with threads from the same process because of shared memory within the processor: every core has its own level 1 cache, plus a shared, on-die level 2 cache.
It all depends on the OS scheduler, really.
With ICS (and future Android versions), I hope the scheduler will improve to get the best out of multiple cores.
In the end, though, it won't matter if applications aren't multi-threaded (which is much harder to code). What I mean is: performance will be better, but not as much better as it could be if developers used a lot of multi-threading.
To answer hatyrei's question: yes, it is a waste, in the sense that it has too much untapped potential.

Does Linaro make a difference?

I notice some ROMs and kernels use Linaro. I have tried them and others, and I don't notice a difference in speed or battery life. What is the advantage?
Sent from my Nexus 7 using XDA Premium HD app
The kernel sources compile faster. LOL
Hard to imagine that would be important to an end user.
There are probably some corner cases where code that is specifically crafted to take advantage of compiler features will execute more efficiently, but that's not the case when comparing compilation of identical sources by two different compilers.
It does on older phones. When I built ROMs for the Galaxy Exhibit (a 1 GHz single-core phone with 512 MB of RAM), Linaro literally doubled the speed, but on the N7 Google already has things pretty much fully optimized.
Sent from my Nexus 4 @1.72 GHz on Stock 4.2.2
bftb0 said:
The kernel sources compile faster. LOL
For many codebases, moving to a newer version of gcc actually slows down the compilation process: http://gcc.gnu.org/ml/gcc/2012-02/msg00134.html
But switching to clang (where possible) sometimes helps.
Most compiler developers are focused heavily on producing optimal (and correct) output; compile time is a secondary consideration. It's relatively easy to write a compiler that runs fast but generates slow/bloated code. Good optimization requires a great deal of computation (and often RAM too).
bftb0 said:
There are probably some corner cases where code that is specifically crafted to take advantage of compiler features will execute more efficiently, but that's not the case when comparing compilation of identical sources by two different compilers.
Each new generation of gcc adds more techniques for optimizing existing code. You can see the effects when a standard benchmark is built by different compilers and run on the same system: http://www.phoronix.com/scan.php?page=article&item=gcc_42_47snapshot&num=3
As you can see, the changes are fairly subtle.
With respect to rebuilding Android using another compiler: you're more likely to notice a difference if your workload is heavily CPU-bound and if your current ROM was built by a much older compiler.
SW686 said:
Each new generation of gcc adds more techniques for optimizing existing code. You can see the effects when a standard benchmark is built by different compilers and run on the same system: http://www.phoronix.com/scan.php?page=article&item=gcc_42_47snapshot&num=3
As you can see, the changes are fairly subtle.
Yup. That was precisely my point: subtle to the point that they are only observable via careful benchmarking, but (despite claims to the contrary by enthusiastic folks on the internet) probably not discernible by users in a blind comparison without the aid of a stopwatch. Our raw perception of "how long something takes" simply is not accurate at the few-percentage-points level... and that's what the OP stated: "I don't notice a difference".
Put another way, if a short one-second task becomes a 950 ms task I won't be able to notice the difference, or if a 60 second task becomes a 57-second task, I won't be able to notice that either (without a stopwatch). Both are 5% improvements.
Which is not to say that folks can't be interested in knowing they have a kernel or tweak that is 2% "better" than everybody else's - but they shouldn't over-sell the perceptibility of the actual gains involved.
I would like to see benchmark measurements of IRX120's claim; I have a hard time believing Samsung left a 100% performance gain "on the table" for a phone which was just released one month ago...
cheers
bftb0 said:
I would like to see benchmark measurements of IRX120's claim; I have a hard time believing Samsung left a 100% performance gain "on the table" for a phone which was just released one month ago...
To take a 50% performance hit due to the compiler, they would have to screw up something big, e.g. using a softfp toolchain on hardware that supports hard float[1]. Or accidentally building everything with -O0.
Even then, only the part of the workload using floating point would suffer, and that's nowhere near 100% for most operations. Maybe certain benchmarks.
So, as you said, most users probably wouldn't notice. These devices aren't exactly used for Bitcoin mining or computing Mersenne primes.
Also, ever since Froyo, Dalvik has implemented JIT to optimize hotspots. JIT code is typically generated by the VM, not by the native C compiler. This means that a large percentage of the cycles consumed by an application could be spent on instructions emitted by Dalvik directly, and not from anything originating in gcc.
And of course, applications that perform heavy computation often ship with their own native (binary) libraries. So switching to the Linaro toolchain is unlikely to have much of an impact on games or non-WebView browsers.
[1] http://www.memetic.org/raspbian-benchmarking-armel-vs-armhf/
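(To make the floating-point point concrete, here is a rough C sketch of my own, not a real benchmark: a hot loop like this is where the optimization level and float ABI show up, while integer-heavy code is affected far less.)
Code:
#include <stdio.h>

/* A floating-point hot loop. Compare builds such as
 *   gcc -O0 fploop.c -o fploop_slow
 *   gcc -O2 fploop.c -o fploop_fast
 * On ARM, the -mfloat-abi= setting (soft / softfp / hard) changes how
 * floating-point values are computed and passed, which is where the
 * kind of slowdown discussed above would appear. */
int main(void)
{
    double sum = 0.0;
    for (long i = 1; i <= 50000000; i++)
        sum += 1.0 / (double)i;             /* partial harmonic series */
    printf("sum = %f\n", sum);
    return 0;
}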

Question (silly question) Can I use an older Android device's processor to add more processing power to the CPU?

I know the question contains a bit of ignorance, but I don't know much about Windows kernels or how the OS works in general. Is it possible for an Android phone with, say, a Snapdragon processor on an ARM architecture to be used as extra CPU processing power for a computer? I'm just proposing it theoretically.
Also, could someone explain what the cores of a CPU are and whether they have anything to do with the question? Thank you.
No, it will not work. The cores of a CPU are like brains in humans: more cores = more processing power. Android uses the Linux kernel and Windows... the Windows kernel. Two different beasts. It would be like cats and dogs agreeing on the best place to go poo... it won't happen.
A CPU, or Central Processing Unit, is the part of the computer that does the actual work - performing operations. Modern CPUs have multiple cores, where each core is able to work on a different part of the operation. In a mobile context, multiple cores are also used to provide a balance between performance and power saving; depending on the CPU, there are generally 2 or more "little" cores that prioritize efficiency over performance; 2 or more "mid" cores that provide more processing power when the "little" cores aren't up to the task; and 1 or 2 "big" cores that provide the best performance but use the most power. When someone talks about "throttling" in a kernel, they're talking about the runtime mechanism that decides what cores a CPU will use under given load conditions.
There are multiple different CPU architectures, and as far as I know, it's not possible to parallel them - you can't use an ARM64 CPU in parallel with an Intel x64, even though they're both 64 bit. The reason for this is different architectures use different basic instructions and scheduling, so the amount of code that would need to go into a kernel to make different types work together would slow the system down and make the whole endeavor pointless, unless you're working with a really large scale operation.
If you look at multi-CPU systems, you'll see that everything from Xeon servers to supercomputers all use the same types of CPU to simplify interconnects, as well as the ability to use one kernel.
It's worth mentioning that there are some projects that do make use of different platforms - for example, SETI @ Home uses a network of Internet connected computers to create a sort of supercomputer. Botnets do the same sort of thing. The difference here is that these systems aren't paralleled, and they work at the application level, so they can only use a certain amount of the client system's resources.
V0latyle said:
A CPU, or Central Processing Unit, is the part of the computer that does the actual work - performing operations. Modern CPUs have multiple cores, where each core is able to work on a different part of the operation. In a mobile context, multiple cores are also used to provide a balance between performance and power saving; depending on the CPU, there are generally 2 or more "little" cores that prioritize efficiency over performance; 2 or more "mid" cores that provide more processing power when the "little" cores aren't up to the task; and 1 or 2 "big" cores that provide the best performance but use the most power. When someone talks about "throttling" in a kernel, they're talking about the runtime mechanism that decides what cores a CPU will use under given load conditions.
There are multiple different CPU architectures, and as far as I know, it's not possible to parallel them - you can't use an ARM64 CPU in parallel with an Intel x64, even though they're both 64 bit. The reason for this is different architectures use different basic instructions and scheduling, so the amount of code that would need to go into a kernel to make different types work together would slow the system down and make the whole endeavor pointless, unless you're working with a really large scale operation.
If you look at multi-CPU systems, you'll see that everything from Xeon servers to supercomputers all use the same types of CPU to simplify interconnects, as well as the ability to use one kernel.
It's worth mentioning that there are some projects that do make use of different platforms - for example, SETI @ Home uses a network of Internet connected computers to create a sort of supercomputer. Botnets do the same sort of thing. The difference here is that these systems aren't paralleled, and they work at the application level, so they can only use a certain amount of the client system's resources.
Whoa, OK! Cool, thanks for your explanation and time. I understood most of the reply, so thanks for answering my question!
Have a good day
7zLT said:
Whoa, OK! Cool, thanks for your explanation and time. I understood most of the reply, so thanks for answering my question!
Have a good day
No problem. Here is a Wiki article that may provide a more concise explanation. Turns out I was wrong about instruction sets, at least concerning AMD APUs.
The bottom line is: yes, it's absolutely possible to use multiple different systems to provide more processing power than just one. But unless those systems are specifically designed to work in parallel with other systems, it is a bit more complicated to get everything to work together, and the end result wouldn't necessarily be faster. If you're enterprising enough, you could set up an application on your computer as well as your phone that uses your phone's CPU to perform operations, but it wouldn't be easy.
Oh!
Ok, thanks for the references
The northbridge chipset has limited bandwidth and is optimized to work with specific CPUs. Integrating at this level would be ineffective at best, even if you could get it to work, because of the northbridge bandwidth limitations.
A dual-processor board is what you would want. Originally used mostly for servers, they are also found in high-end workstations. Most games are designed to run on four cores, so it may not yield much, but some 3D rendering software and the like is designed to take advantage of dual-processor motherboards. Again, these boards are designed to work with a specific processor family, like the Xeon series, i.e. matched processors.
