GCC vs. LLVM Clang Is Mixed On The Ivy Bridge Extreme


  • #16
    Originally posted by XorEaxEax View Post
    I did a quick rundown test on C-Ray using GCC and Clang with and without -ffast-math:

    GCC version: 4.8.1 20130725
    Clang version: 3.3 (tags/RELEASE_33/final)
    Arch Linux 64-bit, core i5
    Benchmark: cat scene | ./c-ray-mt -t 4 -s 7500x3500 > foo.ppm

    Results are in milliseconds; each figure is the average of 5 benchmark runs (excluding a warm-up run).

    gcc -O3
    5840

    gcc -O3 -funroll-loops
    5704

    gcc -O3 -ffast-math -funroll-loops
    4374

    gcc -Ofast -funroll-loops
    4368

    gcc -Ofast -funroll-loops -march=native
    4351

    On GCC we can see that -ffast-math greatly improves the result. Now let's look at Clang:

    clang -O3
    6403

    clang -O3 -funroll-loops
    6396

    clang -O3 -ffast-math -funroll-loops
    7137

    clang -Ofast -funroll-loops
    7122

    clang -Ofast -funroll-loops -march=native
    7153

    On Clang, however, we see that -ffast-math markedly _degrades_ performance on C-Ray. Had Michael used it for his Phoronix C-Ray test, Clang would have come out looking MUCH worse than it does now, since GCC got a great boost from -ffast-math.

    Apart from that, -funroll-loops seems to do nothing performance-wise on Clang, and the same goes for -march=native.

    Interesting (and remarkable) that we get such a regression from -ffast-math. It'd be interesting (if it's not a hassle) to learn why.
    One possibility (which may or may not be the case) is that vectorization is at fault here. A big push in 3.3 was to ensure that the vectorization cost model was accurate, so that your vectorized code didn't make things worse by spending so much time just shuffling data. But the hope of having an accurate cost model doesn't mean that you ACTUALLY have one. It's possible that there's something severely broken in the cost model (at least for FP vectors) which is giving us these results.
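    To make that concrete: the textbook case where -ffast-math gates vectorization is a floating-point reduction. IEEE addition is not associative, so the compiler may only split the sum into vector partial sums once -ffast-math relaxes the rules. A minimal sketch (my own example, not C-Ray code):

    #include <stddef.h>

    /* Without -ffast-math the compiler must keep this sum strictly
       sequential; with it, the loop can be reassociated and vectorized.
       Compare the assembly from -O3 vs. -O3 -ffast-math. */
    double dot(const double *a, const double *b, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    If the cost model then mis-prices the vector version of such a loop, you get exactly the kind of slowdown seen in the numbers above.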

    The unroll-loops result does not surprise me. The LLVM guys probably believe they have good heuristics for when (or when not) to unroll, and they are likely correct.
    The architecture-specific stuff may be linked to the inaccurate cost model issue? It would be amusing if we learned there was an off-by-one error or something in the micro-architecture specs table that drove the compiler!

    I guess 3.4 will be released in the next month or two, and it would be interesting to revisit this at that point.

    • #17
      Originally posted by name99 View Post
      One possibility (which may or may not be the case) is that vectorization is at fault here.
      Yes, it's quite obvious that something in the heuristics failed for this code at least; there's no reason why less precise floating-point math should result in much slower code.

      So it likely ties back to what you said earlier, that Clang/LLVM will only try to vectorize with -ffast-math enabled, and that it is indeed the vectorization that fails.
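      As a sketch of how "successful" vectorization can still lose (hypothetical code, not actual C-Ray source): a ray tracer like C-Ray keeps its vectors as structs of three doubles, and auto-vectorizing that array-of-structures layout forces the compiler to emit lane shuffles to gather the x/y/z components. If the cost model underestimates those shuffles, the vectorized loop ends up slower than the scalar one:

      /* Array-of-structures layout typical of a ray tracer: the x/y/z
         fields of consecutive elements are interleaved in memory, so a
         vectorizer must shuffle lanes before it can do the arithmetic. */
      struct vec3 { double x, y, z; };

      double dot_sum(const struct vec3 *v, const struct vec3 *w, int n)
      {
          double sum = 0.0;
          for (int i = 0; i < n; i++)
              sum += v[i].x * w[i].x + v[i].y * w[i].y + v[i].z * w[i].z;
          return sum;
      }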

      Originally posted by name99 View Post
      The architecture specific stuff may be linked to the inaccurate cost model issue?
      You mean -march=native? I don't see how; both '-Ofast -funroll-loops' and '-Ofast -funroll-loops -march=native' were equally slow on Clang.

      On the other hand, '-march=native' didn't seem to do anything for GCC either; I think differences of around 50 milliseconds or less can be discarded as noise.

      It's funny, though, that Michael obviously knew about this problem with -ffast-math on Clang/LLVM: the upstream (original) version of C-Ray 1.1 ships with '-O3 -ffast-math' in its makefile, so Michael modified the makefile to remove -ffast-math for his tests so that Clang/LLVM wouldn't look so bad. Good old agenda-biased Michael.

      This is why I take all 'tests' done here on Phoronix with a large grain of salt, particularly those comparing GCC and Clang/LLVM, as I know he is extremely pro-Clang/LLVM and has an agenda against the FSF (which seems to spill over onto GCC and other FSF/GNU software).

      Originally posted by name99 View Post
      I guess 3.4 will be released in the next month or two, and it would be interesting to revisit this at that point.
      Indeed, the competition between GCC and Clang/LLVM is a perfect situation for us as end users (and, I think, for the projects themselves). I use both, though GCC is definitely what I use for release binaries due to its better optimizations; if this changes and Clang/LLVM delivers better code performance, I will use that toolchain for release builds.

      So here's looking forward to Clang 3.4 and GCC 4.9 and the advances they bring.
