Compiler Benchmarks Of GCC, LLVM-GCC, DragonEgg, Clang


  • #11
    I wish there was a benchmark of the latest Intel C++ compiler alongside these.



    • #12
      The last sentence of the John the Ripper test analysis:

      Originally posted by Phoronix
      DragonEgg and Clang both lagged behind in performance miserably compared to GCC.
      I think you meant LLVM-GCC and Clang, because DragonEgg is basically competitive with the older versions of GCC, only falling marginally behind on the Opteron.



      • #13
        Originally posted by Tgui
        Good article, nice graphs and happy bday!

        I'd bet that with time the LLVM-based compilers will catch up to GCC in the rest of the tests.
        Possibly, but neither compiler is standing still in development, so only time will tell.



        • #14
          Originally posted by nanonyme
          -mtune=native is redundant if you're using -march=native.
          -fomit-frame-pointer breaks debuggability on x86.
          -O3 has bugs and might slow down run-time in many cases.
          -O3 has been stable to compile with for ages; I can't recall encountering a program in years that compiles fine with -O2 but has problems with -O3. I also haven't seen a case where -O3 is slower than -O2 in ages, so these tests should obviously be done with -O3, especially since that's where most of the new optimizations will end up.
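
          For reference, a minimal sketch of the flags under discussion; the file name and loops are made up for illustration. With the GCC releases of this era, -ftree-vectorize is one of the extra passes that -O3 enables over -O2, and -march=native already implies -mtune=native:

          /* vec_demo.c -- hypothetical example. Possible builds:
           *   gcc -O2 -march=native vec_demo.c -o vec_demo
           *   gcc -O3 -march=native -fomit-frame-pointer vec_demo.c -o vec_demo
           * -march=native already implies -mtune=native; -fomit-frame-pointer frees a
           * register but makes x86 stack traces harder to get; -O3 additionally enables
           * passes such as -ftree-vectorize, which can auto-vectorize the loops below. */
          #include <stdio.h>

          #define N 4096

          int main(void)
          {
              static float a[N], b[N], c[N];
              int i;

              for (i = 0; i < N; i++) {   /* simple loops: candidates for auto-vectorization at -O3 */
                  a[i] = i * 0.5f;
                  b[i] = i * 0.25f;
              }
              for (i = 0; i < N; i++)
                  c[i] = a[i] + b[i];

              printf("%f\n", c[N - 1]);   /* keep the result live so the loops are not optimized away */
              return 0;
          }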



          • #15
            Originally posted by Drago
            I wish there was a benchmark of the latest Intel C++ compiler alongside these.
            Yes, that would be really interesting; sadly, from my past experience it has a lot of compatibility problems.



            • #16
              Originally posted by XorEaxEax
              -O3 has been stable to compile with for ages; I can't recall encountering a program in years that compiles fine with -O2 but has problems with -O3. I also haven't seen a case where -O3 is slower than -O2 in ages, so these tests should obviously be done with -O3, especially since that's where most of the new optimizations will end up.
              These benchmarks disagree:

              http://www.linux-mag.com/id/7574/2/



              • #17
                While these tests are great (kudos Phoronix!), it's unfortunate that they don't cover some of the more advanced optimizations that have come in the later releases. Testing PGO (profile-guided optimization) would be a bit unfair since Clang/LLVM doesn't have it, but LTO (link-time optimization) exists in both compilers and would be an interesting comparison. I can understand that for practical reasons these more advanced optimizations have to be omitted, and since most people stick to -O3 I guess it's overall a fair comparison. Optimizations like PGO are mainly used by projects like Firefox, x264, emulators etc. where the added performance really makes a difference.
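
                For what it's worth, a rough sketch of how the two modes are driven with GCC; the source file is made up, -flto / -fprofile-generate / -fprofile-use are the standard GCC spellings, and Clang also accepts -flto:

                /* pgo_lto_demo.c -- hypothetical example.
                 *
                 * LTO, available in both GCC and Clang via -flto:
                 *   gcc -O3 -flto -c pgo_lto_demo.c
                 *   gcc -O3 -flto pgo_lto_demo.o -o demo
                 *
                 * PGO with GCC:
                 *   gcc -O3 -fprofile-generate pgo_lto_demo.c -o demo   (instrumented build)
                 *   ./demo                                              (run a representative workload)
                 *   gcc -O3 -fprofile-use pgo_lto_demo.c -o demo        (rebuild using the recorded profile)
                 */
                #include <stdio.h>
                #include <stdlib.h>

                int main(void)
                {
                    long i, hits = 0;

                    /* heavily biased branch: the recorded profile lets the compiler
                       lay out the hot path first and tune inlining/unrolling decisions */
                    for (i = 0; i < 10000000; i++)
                        if (rand() % 100 != 0)
                            hits++;

                    printf("%ld\n", hits);
                    return 0;
                }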

                Speaking of x264, in order to really compare the differences between the compilers on this package you should compile it without the hand-optimized assembly (which I'm assuming you haven't done, since the results are so similar across all versions of GCC).



                • #18
                  Originally posted by Drago
                  I wish there was a benchmark of the latest Intel C++ compiler alongside these.
                  I agree with that; some other proprietary compilers could also be compared (IBM, HP, CodeWarrior).

                  Also, what about some ARM compiler benchmarks?



                  • #19
                    Originally posted by yotambien
                    These benchmarks disagree:

                    http://www.linux-mag.com/id/7574/2/
                    You are confused; those are 'time to compile' results, not performance benchmarks. Obviously it will take longer to compile with more optimizations than with fewer, but the resulting binary should be at least as fast, and most likely faster.



                    • #20
                      The performance benchmarks are on the next page of that article.

