GCC vs. LLVM Clang Is Mixed On The Ivy Bridge Extreme
Originally posted by name99: LLVM (and so Xcode) also has link-time (i.e. whole-program) optimization, enabled by -O4. I imagine GCC has the same.
Apple slides showed that LTO made a substantial difference (5% to 20%) in performance, but of course that is against real-world code that is split over a large number of files; it may have much less impact on these sorts of microbenchmarks.
What's not clear to me is the extent to which either LLVM or GCC has fully optimized its LTO pass. Apple had (PPC-specific) tools fifteen years ago that could run whole-program optimization and rearrange the function layout so that functions that called each other were packed together (and so took up less TLB coverage and shared overlapping cache lines), but could be run with a profiling pass to get a better understanding of the hot call chains. As far as I know, LLVM's LTO does not (yet?) do this sort of thing, and I have zero idea about GCC.
I know there was a Google Summer of Code project to implement profile-guided optimization in LLVM, but I haven't heard anything about it since, so I fear it didn't amount to anything.
Originally posted by name99: Oh, one thing to add to my earlier comment.
LLVM (and maybe GCC, but I don't know there) will not automatically vectorize many FP loops unless fast-math is enabled, because getting the loop to vectorize requires re-ordering FP operations. This means that using fast-math, if your code allows it, can affect performance by quite a bit more than you might imagine.
I did a quick rundown test on C-Ray using GCC and Clang with and without -ffast-math:
GCC version: 4.8.1 20130725
Clang version: 3.3 (tags/RELEASE_33/final)
Arch Linux 64-bit, Core i5
Benchmark: cat scene | ./c-ray-mt -t 4 -s 7500x3500 > foo.ppm
Results are in milliseconds; each is the average of 5 benchmark runs (excluding a warm-up run).
gcc -O3: 5840
gcc -O3 -funroll-loops: 5704
gcc -O3 -ffast-math -funroll-loops: 4374
gcc -Ofast -funroll-loops: 4368
gcc -Ofast -funroll-loops -march=native: 4351
On GCC, we can see that -ffast-math greatly improves the result. Now let's look at Clang:
clang -O3: 6403
clang -O3 -funroll-loops: 6396
clang -O3 -ffast-math -funroll-loops: 7137
clang -Ofast -funroll-loops: 7122
clang -Ofast -funroll-loops -march=native: 7153
On Clang, however, we see that -ffast-math _degrades_ performance markedly on C-Ray. Had Michael used it for his Phoronix C-Ray test, Clang would have come out looking much worse than it does now, since GCC gets a great boost from -ffast-math.
Apart from that, it seems that -funroll-loops does nothing performance-wise on Clang, and the same goes for -march=native.
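The measurement methodology above (mean of five timed runs, discarding a warm-up run) can be sketched as a tiny shell helper. The timings below are invented stand-ins, not the benchmark's numbers:

```shell
# Mean of timed runs in milliseconds, discarding the first (warm-up) run.
avg_without_warmup() {
    shift                                  # drop the warm-up measurement
    printf '%s\n' "$@" | awk '{ s += $1 } END { printf "%d\n", s / NR }'
}

# Invented example: 6000 is the warm-up; the remaining five average to 5800.
avg_without_warmup 6000 5800 5850 5900 5750 5700   # prints 5800
```

Discarding the warm-up run matters because the first run pays for cold caches and disk I/O that the later runs do not.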
Originally posted by curaga: Just noting, GCC accepts -O followed by any positive number; it's just that high numbers get clamped to 3. It's been this way for ages; GCC 4.2 accepts -O666 just fine.
Originally posted by XorEaxEax: I did a quick rundown test on C-Ray using GCC and Clang with and without -ffast-math: [full benchmark results quoted above]
Interesting (and remarkable) that we get such a regression from -ffast-math. It would be worth learning why, if it's not a hassle.
One possibility (which may or may not be the case) is that vectorization is at fault here. A big push in 3.3 was to ensure that the vectorization cost model was accurate, so that vectorized code didn't make things worse by spending all its time just shuffling data. But hoping for an accurate cost model doesn't mean you actually have one. It's possible that something is severely broken in the cost model (at least for FP vectors) and that this is what's giving us these results.
The -funroll-loops result does not surprise me. The LLVM developers probably believe they have good heuristics for when (and when not) to unroll, and they are likely correct.
The architecture-specific stuff may be linked to the inaccurate cost model issue? It would be amusing if we learned there was an off-by-one error or something similar in the micro-architecture spec table that drove the compiler!
I guess 3.4 will be released in the next month or two, and it would be interesting to revisit this at that point.
Originally posted by name99: One possibility (which may or may not be the case) is that vectorization is at fault here.
So it likely ties back to what you said earlier: Clang/LLVM will only try to vectorize with -ffast-math enabled, and here it is indeed the vectorization that fails.
Originally posted by name99: The architecture-specific stuff may be linked to the inaccurate cost model issue?
On the other hand, '-march=native' didn't seem to do anything for GCC either; I think differences of around 50 milliseconds or less can be discarded as noise.
It's funny, though, that Michael obviously knew about this problem with -ffast-math on Clang/LLVM: the upstream (original) version of C-Ray 1.1 ships with '-O3 -ffast-math' in the makefile, and Michael modified the makefile to remove -ffast-math for his tests so as not to make Clang/LLVM look so bad. Good old agenda-biased Michael.
This is why I take all 'tests' done here on Phoronix with a large grain of salt, particularly those between GCC and Clang/LLVM, as I know he is extremely pro-Clang/LLVM and has an agenda against the FSF (which seems to spill over onto GCC and other FSF/GNU software).
Originally posted by name99: I guess 3.4 will be released in the next month or two, and it would be interesting to revisit this at that point.
So here's looking forward to Clang 3.4 and GCC 4.9 and the advances they bring.