LLVM 2.9 Clang Performance On Intel's Sandy Bridge
Earlier this month, benchmarks published on Phoronix showed the GCC 4.6 compiler performance with AVX support on Intel's new Sandy Bridge processors, the first to provide Advanced Vector Extensions. The Core i5 2500K already performs well under Linux, but once more Linux software takes advantage of this latest cross-vendor instruction set, there will be even greater speed-ups. While the Low-Level Virtual Machine does not yet fully support the Advanced Vector Extensions, in this article we look at how the latest development code for LLVM 2.9 and the Clang compiler performs on Intel's Sandy Bridge relative to GCC.
Could you specify exactly which GCC 4.6 snapshot you're using in your graphs?
It sounds like you're using a final release of GCC 4.6, which of course hasn't been released yet.
Seeing as you always refer back to your articles, this could be confusing in the future.
Exactly. You bothered to add 'svn' to clearly indicate that LLVM/Clang was not the final release, but not for 'gcc 4.6'/'gcc 4.6 corei7-avx', which then come across as if they were final releases, which they aren't. Both Clang and GCC still have lots of regressions to fix before their final releases.
So while I find SVN comparisons interesting, they really should be marked as such, since they do not hold the same value as comparisons between full releases.
Michael, the graphs say something like "Seconds, Less Are better" or "Iterations per Minute, Higher is Better", but both of these phrases are wrong.
Both 'seconds' and 'iterations' are counts rather than amounts, so more/fewer should be used instead of higher/less. If you wondered why "Less Are better" sounded funny, this is why: you used 'are' to agree with 'seconds', but 'less' doesn't fit a count.
Can you change PTS to say things like the following?
"Seconds, Fewer are better"
"Iterations per minute, More are better"
Since Clang in particular benefits a lot from having an optimization level defined, in comparison to GCC (probably because it runs at a lower optimization level by default), it would be interesting to see the same comparisons with CFLAGS="-O2". No strange voodoo, just that one change.
My educated guess is that LLVM/Clang will look far better in comparison to GCC in that situation.
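For reference, a minimal sketch of what that single change might look like, assuming the standard CC/CFLAGS/CXXFLAGS environment variables (the compiler names and the echo are illustrative, not part of any specific test profile):

```shell
# Hypothetical sketch: pin both compilers to the same optimization level
# before benchmarking, so neither is left at its own default.
export CC=clang            # or CC=gcc for the GCC run
export CFLAGS="-O2"        # the single change suggested above
export CXXFLAGS="-O2"
echo "building with: $CC $CFLAGS"
```

As far as I know, the Phoronix Test Suite picks up these environment variables when building test profiles, so no other configuration should be needed for the comparison.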
Meaningless without opt flags
Most software is compiled with extra flags anyway, so why bother adding just "-O2"?
Michael, regarding your x264: the v2010-11-2 version is very old now, of course, and you might consider compiling with Daniel Kang's AVX patches just put into https://github.com/DarkShikari/x264-devel and soon to be pushed to master, perhaps today.
That should also provide some nice improvements for the Core i7 AVX results.
Oh, and apparently Daniel needs someone on 32-bit/64-bit Linux to provide him an SSH connection to benchmark these and other assembly routines he's working on, including FFmpeg patches at some point. If anyone is up for giving him and the other x264 devs access to a Linux Sandy Bridge i5/i7, see the #x264dev IRC channel to offer them a spot.
Why were LLVM 2.8 results omitted while GCC 4.5.2 was included? It is impossible to see how the LLVM performance has evolved relative to how GCC has evolved without them.
Originally Posted by phoronix
Pure nonsense. GCC uses -O0 by default, which is no optimization at all. Also, what makes you think these tests are configured without any optimization?
Originally Posted by staalmannen
I've looked at your results, which I must say are very different from the 'official' Phoronix tests as well as my own (with the sole exception of Open64 performance on c-ray), and I'm very sceptical of them.
Sorry, but your 'educated' guess leaves much to be desired, it seems.
Originally Posted by staalmannen
Yes, this would definitely have been interesting data.