GCC 8/9 vs. LLVM Clang 6/7 Compiler Benchmarks On AMD EPYC

  • GCC 8/9 vs. LLVM Clang 6/7 Compiler Benchmarks On AMD EPYC

    Phoronix: GCC 8/9 vs. LLVM Clang 6/7 Compiler Benchmarks On AMD EPYC

    Following the GCC 9.0 benchmarks earlier this week, I ran some tests to see how the GCC 8 stable compiler and the GCC 9 development state compare to the LLVM Clang 6.0.1 stable compiler and LLVM Clang 7.0 development. Here are those benchmarks using the AMD EPYC 7601 32-core / 64-thread processor.

    http://www.phoronix.com/vr.php?view=26637

  • #2
    Does anybody know why Clang performs so much worse in some benchmarks?

    Does GCC have a better vectorizer?

    Could you also do a comparison on x86?



    • #3
      ... LLVM Clang performance has come a long way over the past few years and is generally on-par with GCC these days ...
      Very true. Both compilers are converging in performance, and GCC's dominance is no longer as obvious, which is promising.



      • #4
        And yet we're only deploying GCC 7.3 on our EPYC clusters ... it would be interesting to put it next to these charts for comparison.
        It would also be interesting to get some icc numbers, but that's a whole new can of worms, icc being commercial etc.

        Also, I was under the impression that the LLVM OpenMP implementation was donated by Intel and is considered the best out there. So my guess would be that the Phoronix Test Suite OpenMP tests need some tweaking to lay the threads out properly on the cores and achieve the best possible performance.
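
        To illustrate the thread-placement point, here is a sketch of mine (not one of the actual test profiles): a minimal OpenMP reduction whose throughput on a 32-core / 64-thread EPYC depends heavily on where the runtime places the threads, which can be steered with the standard OMP_PROC_BIND and OMP_PLACES environment variables.

        ```c
        #include <stdio.h>

        /* Build with e.g.: gcc -O3 -fopenmp sum.c   (filename hypothetical)
           Pin threads before running, e.g.:
             OMP_PROC_BIND=close OMP_PLACES=cores ./a.out
           Without -fopenmp the pragma is ignored and the loop runs
           serially, producing the same result. */
        static double sum(const double *a, int n)
        {
            double s = 0.0;
        #pragma omp parallel for reduction(+:s)
            for (int i = 0; i < n; i++)
                s += a[i];
            return s;
        }

        int main(void)
        {
            enum { N = 1000000 };
            static double a[N];
            for (int i = 0; i < N; i++)
                a[i] = 1.0;
            printf("%.1f\n", sum(a, N)); /* prints 1000000.0 */
            return 0;
        }
        ```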



        • #5
          Originally posted by sdack View Post
          Very true. Both compilers are converging in performance, and GCC's dominance is no longer as obvious, which is promising.
          Um, why is that promising? Isn't it good to have two quality compilers instead of just one (LLVM)?



          • #6
            Interesting results again; we see that '-march=native' can have a real impact, particularly on GCC.

            Some unexpected things show up: '-march=native' actually results in lower performance on 'Redis GET' and 'Redis SADD' with GCC, and likewise for Clang 7.0 SVN on 'GET'.

            Another standout was Clang being ~10% slower with '-march=native' on 'LAME MP3' than without, while on 'FLAC', Clang with '-march=native' yielded the best result. In that same FLAC test, GCC was slower with '-march=native' than without.

            Tests like these shine a light on how difficult it is to make optimizations perform well across different kinds of code, made even harder by the fact that optimizations may work perfectly in isolation yet cause unexpected results when combined.
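
            For context, the kind of loop these flags act on can be tiny. A hypothetical sketch (not one of the actual test profiles): with plain '-O3', GCC and Clang target baseline x86-64 (SSE2); with '-O3 -march=native' on EPYC they may use AVX2 (znver1). The numeric result is the same either way; only the generated instructions, and hence throughput, differ.

            ```c
            #include <stdio.h>

            /* A trivially vectorizable saxpy-style kernel; compare the
               assembly from:
                 gcc -O3 -S kernel.c
                 gcc -O3 -march=native -S kernel.c   (filename hypothetical) */
            void saxpy(float *y, const float *x, float a, int n)
            {
                for (int i = 0; i < n; i++)
                    y[i] = a * x[i] + y[i];
            }

            int main(void)
            {
                float x[8], y[8];
                for (int i = 0; i < 8; i++) {
                    x[i] = (float)i;
                    y[i] = 1.0f;
                }
                saxpy(y, x, 2.0f, 8);
                printf("%.1f %.1f\n", y[0], y[7]); /* prints 1.0 15.0 */
                return 0;
            }
            ```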



            • #7
              An important subject is missing from the benchmark, though: what is the size of the binaries produced by LLVM and GCC?



              • #8
                > With the FFTW benchmarks when using "-O3 -march=native" as the CFLAGS, the GCC performance came out much stronger than with the current LLVM Clang compiler.

                Bullshit.

                There was a 400-point difference in favor of Clang in the first benchmark, and a 300-point difference in favor of GCC in the second one.

                Stop shilling for GCC.



                • #9
                  After the huge amount of effort and resources poured into LLVM, judging by the comments alone one would think Clang should run circles around GCC.

                  Realistically, both are asymptotically approaching hardware boundaries, with LLVM providing some wonderful tools and experiments. It's good that it happened: it brought competition, and now we have two great compiler infrastructures...

                  But LLVM isn't the panacea and phenomenon it was once proclaimed to be, in such a sensationalistic manner, on Phoronix and elsewhere. It's fair to say it's just "yet another (fine) compiler"...



                  • #10
                    Originally posted by clavko View Post
                    Realistically, both are asymptotically approaching hardware boundaries
                    That's not even close in many situations. They're asymptotically approaching the limits of what a compiler can optimize through static analysis, given the language rules and the code it is fed. Profile-guided optimization isn't that much different (and few people run it on every compilation); it can't do radical code transformations either. Compilers only work on what they can prove. Even with profiling, a compiler cannot prove that some things are simply impossible and drop them from the code; it can only assume they are unlikely, and it still has to handle the case where they happen, even if they were never encountered during profiling.

                    Basically, if the input code is bad (which is true of 99% of high-level-language code out there, because people have this false sense of "np, the compiler will optimize my shit!"), the output is most often bad too. Unless compilers gain some artificial intelligence that can "understand" the code completely, in all cases; but that would be quite a feat, since even human intelligence often ends up with "refactorings" that introduce new bugs. A compiler cannot be allowed to do that, so its optimization possibilities are limited.

                    Alias analysis is one example. Very few people use the restrict keyword in C, because they think the compiler can "optimize their junk". It won't, and it can't: it cannot prove the absence of aliasing without deep understanding (and even then it would be dangerous, as I mentioned). Some things the compiler simply cannot know unless the information is given to it. It will never be able to know; this is a logical fact, not a limitation of the compiler.

                    See: https://cellperformance.beyond3d.com...t-keyword.html

                    Sadly, there are many more situations like this beyond memory aliasing.
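
                    A minimal sketch of the restrict point (my own example, not taken from the linked article):

                    ```c
                    #include <stdio.h>

                    /* Without restrict, the compiler must assume *sum may alias
                       a[i], so it has to store to *sum and reload it on every
                       iteration; it cannot keep the accumulator in a register or
                       vectorize the loop. With restrict, the programmer asserts
                       there is no overlap, which the compiler could never prove
                       from this translation unit alone. */
                    static void accumulate(int *restrict sum, const int *restrict a, int n)
                    {
                        for (int i = 0; i < n; i++)
                            *sum += a[i];
                    }

                    int main(void)
                    {
                        int a[5] = { 1, 2, 3, 4, 5 };
                        int total = 0;
                        accumulate(&total, a, 5);
                        printf("%d\n", total); /* prints 15 */
                        return 0;
                    }
                    ```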

                    Originally posted by clavko View Post
                    But LLVM isn't the panacea and phenomenon it was once proclaimed to be, in such a sensationalistic manner, on Phoronix and elsewhere.
                    On Phoronix you usually see sensationalism and overhype around anything "new", and since Clang is a much younger compiler than GCC, you get the point.
                    Last edited by Weasel; 07-29-2018, 08:26 AM.
