MIPS Technology for 5X Greater Speed - Android Software Development

Has anyone heard of this? Myriad's Dalvik Turbo VM replaces the standard Android Dalvik engine, accelerating performance up to 5x on real-world Android™ applications running on MIPS-Based™ devices. It goes on to say: "With Dalvik Turbo VM, MIPS licensees can create SoCs with faster, more complex applications and richer game graphics optimized for Android smartphones and other high-performance consumer devices without requiring significant increases in device memory. The VM also provides substantial battery life improvements when running resource-intensive tasks, all while retaining full compatibility with existing software. Myriad's Dalvik Turbo VM is operational on all current versions of Android up to and including version 2.1 (Eclair), and will soon be available for 2.2 (Froyo)."

Here's a link to an article: http://www.mobilegadgetnews.com/index.php?showtopic=33922

They further say: "An evaluation version of the optimized VM will be available free of charge through the Android on MIPS community at www.mipsandroid.org as of August 1, 2010. For information on commercial distribution, contact Myriad Group at [email protected]."

Sure sounds too amazing to be true. What's the catch? Can anyone comment, please?

Related

Samsung Orion Dual-Core 1GHz Chipset Revealed, Expects to Ship 10 Million Galaxy Tabs

In another round of press releases from the Korean technology company today, Samsung's announcing the existence of their 1GHz dual-core chipset based on the ARM Cortex A9, which they've named Orion. This looks to be the official successor to the 1GHz single-core Hummingbird chipset (based on the ARM Cortex A8) seen in their phones today. Samsung has already expressed plans to introduce the successor to the Samsung Galaxy S smartphones sometime in 2011, and I'd bet money that they'll be equipped with these beasts.
What will the Orion bring, anyway? 1080p video decoding and encoding (playback and recording), an on-chip HDMI 1.3a interface, embedded GPS, and a triple display controller to work alongside that HDMI interface (meaning you could possibly use your phone while a video is playing in high definition through HDMI on your television).
It’s said that the Orion will deliver 5x the 3D performance over the previous generation from Samsung, but they didn’t go into specifics regarding the GPU they’ll be using. It’s also being designed on a 45nm low-power die, meaning battery life might not take a hit compared to the relatively weaker chipsets of today. The chipset should be shipping later this year to select manufacturers.
Samsung’s also expecting to ship 10 million Galaxy Tabs worldwide, according to the Wall Street Journal. That’s an ambitious goal up against the iPad, but who are we to say Samsung can’t meet it? They’re doing just as well as they said they would in the smartphone market with the Galaxy S, and while we can’t judge performance between two different markets, we won’t count them out at all. Read on for the full press details.
Samsung Introduces High Performance, Low Power Dual CORTEX™-A9 Application Processor for Mobile Devices
TAIPEI, Taiwan–(BUSINESS WIRE)–Samsung Electronics Co., Ltd., a world leader in advanced semiconductor solutions, today introduced its new 1GHz ARM® Cortex™-A9-based dual-core application processor, codenamed Orion, for advanced mobile applications. Device OEM developers now have a powerful dual processor chip platform designed specifically to meet the needs of high-performance, low-power mobile applications including tablets, netbooks and smartphones. Samsung’s new processor will be demonstrated at the seventh annual Samsung Mobile Solutions Forum held here in Taiwan at the Westin Taipei Hotel.
“Consumers are demanding the full web experience without compromise while on the go,” said Dojun Rhee, vice president of Marketing, System LSI Division, Samsung Electronics. “Given this trend, mobile device designers need an application processor platform that delivers superb multimedia performance, fast CPU processing speed, and abundant memory bandwidth. Samsung’s newest dual core application processor chip is designed specifically to fulfill such stringent performance requirements while maintaining long battery life.”
Designed using Samsung’s 45 nanometer low-power process technology, Orion features a pair of 1GHz ARM Cortex A9 cores, each with a 32KB data cache and a 32KB instruction cache. Samsung also included a 1MB L2 cache to optimize CPU processing performance and provide fast context switching in a multi-tasking environment. In addition, the memory interface and bus architecture of Orion support data-intensive multimedia applications including full HD video playback and high speed 3D action games.
Samsung’s new application processor incorporates a rich portfolio of advanced multimedia features implemented by hardware accelerators, such as a video encoder/decoder that supports 30fps video playback and recording at 1080p full HD resolution. Using an enhanced graphics processing unit (GPU), the new processors are capable of delivering 5 times the 3D graphics performance over the previous processor generation from Samsung.
For design flexibility and system BOM cost reduction, Orion integrates a set of interfaces commonly used in mobile devices to configure various peripheral functionalities. For example, with this processor, customers have the choice to use different types of storage including NAND flash, moviNAND™, SSD or HDD through the provided SATA and eMMC interfaces. Customers can also choose their appropriate memory options including low power LPDDR2 or DDR3, which is commonly used for high performance. In addition, a global positioning system (GPS) receiver baseband processor is embedded in the processor to seamlessly support location based services (LBS), which is critical in many emerging mobile applications.
Orion features an onboard native triple display controller architecture that complements multi-tasking operations in a multiple display environment. A mobile device using the Orion processor can simultaneously support two on-device display screens, while driving a third external display such as a TV or a monitor, via an on-chip HDMI 1.3a interface.
Orion is designed to support package-on-package (POP) with memory stacking to reduce the footprint. A derivative of Orion, which is housed in a standalone package with a 0.8mm ball pitch, is also available.
Samsung’s new dual-core application processor, Orion, will be available to select customers in the fourth quarter of 2010 and is scheduled for mass production in the first half of 2011.
Good info, but I have never been a fan of Tabs. I can see their purpose, but a big part of me sees them as a waste of money if I bought one. The battery life running that dual-core processor is what I would like to see confirmed and not "assumed".
As much as I'd like one of these, I won't buy one until Samsung has real customer service and actually releases a GPS fix. We'll see what happens this month. Hopefully Samsung comes through so I can continue supporting them.
New processors generally come with more advanced power-saving features, so the battery life might even be better.
Good to see progress,
but is there really anything on the Android Market that utilises all that power?!
There's scarcely any serious 3D games and not that much dev work.
boodies said:
New processors generally come with more advanced power-saving features, so the battery life might even be better.
I think power efficiency should be the main focus, not more raw power. Unless they can accomplish both... then bring on the power...

A Working JIT compiler for the Epic is needed very badly

Hello,
The JIT compiler for the Epic on Froyo is just plain horrible. It is not optimized for the Samsung Epic at all. That is why Quadrant scores were so low on 2.2: not because the benchmark was biased, but because the Hummingbird processor is unable to take advantage of the JIT compiler on Froyo. If Quadrant really was biased, then how did the Epic get the highest Quadrant scores on 2.1?
When I had the EVO, I was able to play Flash videos easily and without low framerates. When I play them on the Epic, I get very low framerates compared to the EVO, even though the hardware on the Epic is better. I hope that this problem can be fixed soon.
Thanks
Flash videos work fine for me. Quadrant can be purposely inflated through different methods; that's why it's unreliable.
Sent from my Epic 4g.
ZOMG WITH THE BENCHMARKS!
Snapdragons have a cheat sheet of floating points. We don't... It makes them "LOOK" a lot faster in a benchmark.
Our 2.2 just came out less than 23 hours ago. Chill, dude. JIT works for us; it's just not as large a boost as it is for other chipsets.
It depends on the benchmark. The Epic does better than the EVO in Quadrant and Neocore/3D graphics benchmarks because OpenGL already offers direct access to the phone's GPU (hardware). Apparently the NDK allows ARMv7-A optimizations for floating-point instructions, which the Snapdragon obviously excels at (source from Google below). This has been discussed many times, and I don't know much about the technicality of any of it other than what I have read other people type.
Anyway, I don't know how Flash works in Android, but one would hope that it was written with the NDK, which lets it work closer to the hardware. Who knows, though; Adobe isn't known for making Flash highly efficient.
The NDK provides:
•A set of tools and build files used to generate native code libraries from C and C++ sources
•A way to embed the corresponding native libraries into an application package file (.apk) that can be deployed on Android devices
•A set of native system headers and libraries that will be supported in all future versions of the Android platform, starting from Android 1.5. Applications that use native activities must be run on Android 2.3 or later.
•Documentation, samples, and tutorials
The latest release of the NDK supports these ARM instruction sets:
•ARMv5TE (including Thumb-1 instructions)
•ARMv7-A (including Thumb-2 and VFPv3-D16 instructions, with optional support for NEON/VFPv3-D32 instructions)
Future releases of the NDK will also support:
•x86 instructions (see CPU-ARCH-ABIS.HTML for more information)
ARMv5TE machine code will run on all ARM-based Android devices. ARMv7-A will run only on devices such as the Verizon Droid or Google Nexus One that have a compatible CPU. The main difference between the two instruction sets is that ARMv7-A supports hardware FPU, Thumb-2, and NEON instructions. You can target either or both of the instruction sets — ARMv5TE is the default, but switching to ARMv7-A is as easy as adding a single line to the application's Application.mk file, without needing to change anything else in the file. You can also build for both architectures at the same time and have everything stored in the final .apk. Complete information is provided in the CPU-ARCH-ABIS.HTML in the NDK package.
^from android.com
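For anyone wondering what that "single line" in Application.mk actually looks like, here's a minimal sketch. The file name and the APP_ABI variable come from the NDK docs quoted above; the exact values shown are just one plausible configuration, not something from this thread:

Code:
# Application.mk - sits in the project's jni/ directory
# The default is armeabi (ARMv5TE); this line targets ARMv7-A instead:
APP_ABI := armeabi-v7a
# Or build both, so the .apk ships libraries for each instruction set:
# APP_ABI := armeabi armeabi-v7a

At install time the device picks the best-matching libraries out of the .apk, so shipping both ABIs costs download size but nothing at runtime.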

3DMark and Windows 8

Hi all.
Although Windows 8 is only a test version right now, 3DMark runs pretty well.
It's even faster than W7: on my Acer One 722 I got 2634 in 3DMark05, against 2577 on W7.
I did some BIOS updates in between, but OK, good result anyway!
I have the dev preview downloaded but never tried it. I might dual-boot with Win 7 and see what kind of improvements I get with it.
Futuremark Announces 3DMark for Windows 8
Windows 8 will bring with it a variety of changes—all of our Windows 8 coverage to date will give you the quick overview, but features such as the new Metro interface and expanded support for smartphones and tablets certainly raise a few questions. Those interested in benchmarking Windows 8 using 3DMark will be interested to hear that a new version of the benchmark will be coming for the OS, with the ability to compare performance across all devices and feature sets available for Windows 8.
Jukka Mäkinen, CEO of Futuremark, stated, "With Windows 8 gamers will be able to enjoy their games on a wide range of devices from lightweight tablets to heavy-duty desktop rigs. Faced with so much choice it will be hard to work out which devices offer the best value for money. Fortunately 3DMark for Windows 8 will be our most wide-reaching 3DMark ever, able to accurately measure and compare gaming performance across all devices and graphical feature sets available with Windows 8."
Tentatively titled 3DMark for Windows 8, the benchmark provides the following:
Measures and compares gaming performance on all Windows 8 devices
Stunning real-time scenes stress test all levels of hardware
Supports both x86 and ARM-based architectures
Can be used in both Metro UI and 'classic' Windows environments
Created in co-operation with the world’s leading technology companies
Currently in development, expected to be released in 2012
It's obviously too early to get specifics on the benchmark (and the above Futuremark-provided image very likely has nothing to do with 3DMark for Windows 8); however, it will at least be interesting to finally get a chance to compare hardware performance across more devices—including x86 platforms—in an apples-to-apples manner.
Good, nice thing.

Does Linaro make a difference?

I notice some ROMs and kernels use Linaro. I have tried them and others, and I don't notice a difference in speed or battery life. What is the advantage?
Sent from my Nexus 7 using XDA Premium HD app
The kernel sources compile faster. LOL
Hard to imagine that would be important to an end user.
There are probably some corner cases where code that is specifically crafted to take advantage of compiler features will execute more efficiently, but that's not the case when comparing compilation of identical sources by two different compilers.
It does on older phones. When I built ROMs for the Galaxy Exhibit (a 1GHz single-core phone with 512MB RAM), Linaro literally doubled the speed, but on the N7 Google has it pretty much fully optimized.
Sent from my Nexus 4 @1.72 GHz on Stock 4.2.2
bftb0 said:
The kernel sources compile faster. LOL
For many codebases, moving to a newer version of gcc actually slows down the compilation process: http://gcc.gnu.org/ml/gcc/2012-02/msg00134.html
But switching to clang (where possible) sometimes helps.
Most compiler developers are focused heavily on producing optimal (and correct) output; compile time is a secondary consideration. It's relatively easy to write a compiler that runs fast but generates slow/bloated code. Good optimization requires a great deal of computation (and often RAM too).
There are probably some corner cases where code that is specifically crafted to take advantage of compiler features will execute more efficiently, but that's not the case when comparing compilation of identical sources by two different compilers.
Each new generation of gcc adds more techniques for optimizing existing code. You can see the effects when a standard benchmark is built by different compilers and run on the same system: http://www.phoronix.com/scan.php?page=article&item=gcc_42_47snapshot&num=3
As you can see, the changes are fairly subtle.
With respect to rebuilding Android using another compiler: you're more likely to notice a difference if your workload is heavily CPU-bound and if your current ROM was built by a much older compiler.
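If anyone wants to check this on their own machine rather than take the charts on faith, the method is simple enough to sketch. The compiler names and bench.c are placeholders here, not anything taken from the Phoronix article:

Code:
# Build the identical source with two toolchains at the same -O level...
gcc-4.2 -O2 -o bench-gcc42 bench.c
gcc-4.7 -O2 -o bench-gcc47 bench.c
# ...then time both binaries on the same machine under the same load.
time ./bench-gcc42
time ./bench-gcc47

Run each binary several times and compare medians; single runs are noisy enough to swamp a few-percent difference.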
SW686 said:
Each new generation of gcc adds more techniques for optimizing existing code. You can see the effects when a standard benchmark is built by different compilers and run on the same system: http://www.phoronix.com/scan.php?page=article&item=gcc_42_47snapshot&num=3
As you can see, the changes are fairly subtle.
Yup. That was precisely my point: subtle to the point that they are only observable via careful benchmarking, but (despite claims to the contrary by enthusiastic folks on the internet) probably not discernible by users in a blind trial comparison without the aid of a stopwatch. Our raw perception of "how long something takes" simply is not accurate at the few-percentage-points level... and that's exactly what the OP stated: "I don't notice a difference".
Put another way, if a short one-second task becomes a 950 ms task I won't be able to notice the difference, or if a 60 second task becomes a 57-second task, I won't be able to notice that either (without a stopwatch). Both are 5% improvements.
Which is not to say that folks can't be interested in knowing they have a kernel or tweak that is 2% "better" than everybody else's - but they shouldn't over-sell the perceptibility of the actual gains involved.
I would like to see benchmark measurements of IRX120's claim; I have a hard time believing Samsung left a 100% performance gain "on the table" for a phone which was just released one month ago...
cheers
bftb0 said:
I would like to see benchmark measurements of IRX120's claim; I have a hard time believing Samsung left a 100% performance gain "on the table" for a phone which was just released one month ago...
To take a 50% performance hit due to the compiler, they would have to screw up something big, e.g. using a softfp toolchain on hardware that supports hard float[1]. Or accidentally building everything with -O0.
Even then, only the part of the workload using floating point would suffer, and that's nowhere near 100% for most operations. Maybe certain benchmarks.
So, as you said, most users probably wouldn't notice. These devices aren't exactly used for Bitcoin mining or computing Mersenne primes.
Also, ever since Froyo, Dalvik has implemented JIT to optimize hotspots. JIT code is typically generated by the VM, not by the native C compiler. This means that a large percentage of the cycles consumed by an application could be spent on instructions emitted by Dalvik directly, and not from anything originating in gcc.
And of course, applications that perform heavy computation often ship with their own native (binary) libraries. So switching to the Linaro toolchain is unlikely to have much of an impact on games or non-WebView browsers.
[1] http://www.memetic.org/raspbian-benchmarking-armel-vs-armhf/
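To make the softfp-versus-hard distinction above concrete, here's a tiny float-heavy C function and the kind of cross-compiler invocations involved. The toolchain names are the common Debian/Linaro ones and are my assumption, not something from this thread:

Code:
/* mac.c - a trivial multiply-accumulate loop, the sort of
 * floating-point-heavy code where the float ABI matters */
float mac(const float *a, const float *b, int n)
{
    float acc = 0.0f;
    int i;
    for (i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

/*
 * softfp: VFP instructions do the math, but float arguments and
 * return values still travel through integer registers at every
 * function call boundary:
 *
 *   arm-linux-gnueabi-gcc -O2 -mfloat-abi=softfp -mfpu=vfpv3-d16 -c mac.c
 *
 * hard: floats are passed directly in VFP registers (requires a
 * hardfp userland, which is what the armel-vs-armhf comparison in
 * [1] measures):
 *
 *   arm-linux-gnueabihf-gcc -O2 -mfloat-abi=hard -mfpu=vfpv3-d16 -c mac.c
 */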

Android 4.4 "KitKat" offers hope for low-memory devices

Project Svelte is the immediate successor to Project Butter, which came with Jelly Bean and had a similar aim, though it was far less concerned with the performance of truly low-end devices.
But exactly what is Project Svelte? Well, for starters, Google has decoupled the Android core from the so-called Google Experience, and it's made both of these lighter. Android's memory footprint has been slimmed down by removing unessential background services and, simultaneously, the memory consumption of features that you can't really live without has been reduced. Moreover, the wide array of Google services, such as YouTube and Chrome, have also undergone a similar treatment, and should now prove just as powerful, but more slender. Further still, core system processes will now protect system memory far more jealously from apps, especially those that consume large amounts of RAM. And last, but not least, Android will now launch multiple services sequentially, instead of all at once, with the aim of trimming peak memory demands, thus improving stability.
Still on the topic of optimizations, it's worth pointing out that Google won't be approaching this rather complex issue on its own; instead, it's enlisting the help of manufacturers and developers both. To do so, Google has introduced a number of tools that will help the next gen of devices take advantage of optimizations such as zRAM swapping, kernel samepage merging and the ability to tune the cache of the Dalvik JIT code. Other tools include a new API that will allow developers to make their apps really flexible, by letting them tweak or completely disable high-memory features, depending on the specific device and its relative memory. Additionally, devs will be able to take advantage of the new procstats and meminfo tools, along with a more widely supported RenderScript Compute (GPU acceleration), which has also seen some performance gains with Android 4.4 KitKat.
http://www.phonearena.com/news/Andr...rtably-on-512MB-RAM-devices-heres-how_id49099
To summarize:
1-Android uses less memory because developers reduced its core memory footprint
2-Android uses less memory because services have been decoupled from the Core thus allowing for lighter "Android"
3-Services are no longer launched in parallel but sequentially thus allowing for less PEAK memory usage
4-Low-level tools for developers allowing better handling of cache, RAM memory pages, etc. (see the usage sketch below)
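On point 4, the two inspection tools the article names can already be poked at from a shell. A quick sketch (the package name is a placeholder):

Code:
# Aggregated per-process memory/runtime stats over the last 3 hours:
adb shell dumpsys procstats --hours 3
# Detailed breakdown (Dalvik heap, native heap, views, etc.) for one app:
adb shell dumpsys meminfo com.example.app

The "new API" for tweaking or disabling high-memory features is ActivityManager.isLowRamDevice(), which landed in the KitKat SDK at API level 19.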
