Maybe Intel Atom can get popular on smartphones, tablets and embedded devices anyway.
Originally Posted by sykobee
All the graphics drivers on ARM are closed source. Then Intel comes along with open-source Ivy Bridge graphics on Atom? Me like!
Very true - I think it will put some pressure on ARM graphics vendors to provide open source drivers.
Originally Posted by uid313
Imagine if the next generation Raspberry Pi were Intel Atom based because of this graphics driver issue?
ARM are in a position to enable this as they control the Mali GPU. Or maybe one of the lesser known GPU makers could do the right thing. I'm assuming we'll never see open source Tegra drivers, and Imagination Tech have been terrible to date so I hold no hope there.
The clock-for-clock comparison is very interesting. The conventional wisdom is that an ARM core is less powerful than an equivalent x86 processor; this is why those quad-core phones/tablets are not faster than your laptop (with the exception that ARM SoCs usually bundle a DSP that can decode HD video, which is the only thing many users need high performance for). Hence there has not been much interest in running normal Linux distros on ARM tablets to replace laptops. These benchmarks go against that wisdom.
It's worth noting that (iirc) ARM and Atom are both in-order chips (see https://en.wikipedia.org/wiki/Out-of-order_execution ). It takes a lot of silicon/power to do effective reordering, but it gives a good performance boost. However, if compilers are getting better at ordering instructions in the binary, then there are big speed-ups to be had. This was the dream behind Itanium.
This means that ARM (and Atom) are a long way behind most current x86 processors per clock cycle. I'd be interested to see more ARM-based netbooks, but the CPU is only a fraction of the total power consumption on those, so there may not be huge gains in battery life for a similarly performing machine.
I suggest some benchmarks of how ARM performance depends on the GCC version, maybe including the Linaro GCC.
The ARM Cortex-A8 is in-order, the Cortex-A9 is out-of-order: http://www.arm.com/products/processo.../cortex-a9.php
Originally Posted by ssam
The Cortex-A9 has a limited form of out-of-order execution.
What I think could be interesting, though more for optimized applications (I'm thinking rendering), would be a comparison between optimized ARM NEON and Intel SSE3 code (for Atom, unless newer ones support SSE4).
I ask because I saw someone compile LuxRender for ARM, and it ran terribly slowly; the reason was that LuxRender has SSE2 instructions in the main build, while the ARM version was running plain C only...
A comparison between these could be useful if ARM were in the future to be used for render farms. With the ability to stack many, many cores in a box the size of my PC, it could be very energy efficient. However, if NEON-accelerated applications still perform worse than a comparable IB SSE<x> application, then for things like rendering it might be worse off :/
My wish would be to set up a cheap Linux render farm of 30 cheap ARM slave machines that uses as much power as my desktop while rendering faster.
Why doesn't AMD offer Radeon for ARM?
Because they sold their mobile graphics assets to Qualcomm a few years back, and that's now the Adreno (an anagram of Radeon).
Originally Posted by uid313
NEON doesn't provide IEEE-compliant FP; only single-precision FP ops are available (and they don't support all rounding modes, for instance). You'll have to wait for ARMv8 processors to get IEEE-compliant SP/DP FP.
Originally Posted by zeealpal
You can compare SSE vs NEON with FFmpeg by choosing a codec that has been optimised for both ISAs.
It should also be noted that I know someone working at a post-production company who told me they are using an NVIDIA farm based on ARM CPUs; they are probably using this: http://blogs.nvidia.com/2011/12/meet...velopment-kit/
* There are dual-core Atoms and quad-core ARMs (on most recent ARM devices); both can run at 1.5 GHz, and Tegra is generally slower in CPU execution.
* Atom can run in mixed 64/32-bit mode, which reduces the code's RAM footprint, and ARM can use the mixed 16/32-bit Thumb mode to reduce code size, with optimizations and a new mode in the A9 that weren't in the A8. In neither case is compiler support fully optimized yet.
* Most ARM graphics drivers are closed but fully OpenGL ES 2.0 compliant (3.0 plus OpenCL for the Mali T600 paired with Cortex-A15 or mixed Cortex-A15/A7), while the Intel graphics driver is still buggy and doesn't work in a lot of cases. Intel is making progress on Mesa (though with no OpenCL at all), and the reverse-engineered drivers (for ARM Mali, Adreno, and now Broadcom, the most closed of all alongside NVIDIA) are making progress too (nothing for NVIDIA Tegra). But most of these tests are pure CPU, with no OpenCL or GPU use.
So this test is not state of the art at all, and arguably never can be: when a new product comes out, there is no driver, or only a closed or incomplete one.
The Intel Valleyview chips that should (at least on paper) really compete with today's ARM SoCs won't be out for two years, so a little bit late for the train.