Compiler Benchmarks Of GCC, LLVM-GCC, DragonEgg, Clang


  • #31
    Originally posted by yotambien View Post
    Right. As I said, those benchmarks simply disagree with the idea that -O3 optimisations will always be at least as fast as -O2. To me, those numbers show that a) differences between -O2 and -O3 are minor; b) -O3 does not consistently produce the fastest binary. Of course, your experience is your experience, which is as valid as those tests.

    What sort of differences do you get with Mame and Handbrake (I guess you mean x264 in this case)?
    I don't know if I've kept any benchmark numbers for -O2 vs -O3 (I'll have to check), since I'm more interested in differences between -O3 with or without explicit (non -Ox) optimizations like LTO and PGO. But since I was going to do some benchmarking on Mame soon anyway, I'll make some -O2/-O3 comparisons later this evening and post the results here. Just making dinner as we speak.

    Comment


    • #32
      I suspect those tests where -O2 outperformed -O3 aren't very realistic. They probably have very small code bases that happen to fit into L1 with -O2 and get enlarged just enough to only fit in the L2 cache with -O3 optimizations, or something like that. Something I imagine is mostly only true for microbenchmarks rather than a real application.

      Anyway, I think Michael isn't actually setting anything at all. If I remember correctly from the last compiler benchmarks he did, he's just running make without changing any of the default compiler settings from upstream.

      Comment


      • #33
        I think the compilers should be bootstrapped for the compile-time benchmarks. It's not very realistic to compile everything with the GCC 4.4 system compiler; on a real system a self-built version would be used, which might (or might not) be able to compile programs faster.

        Comment


        • #34
          Ok here are the results:

          Test system: GCC 4.5.1, Arch Linux 2.6.35 64bit, Core i5
          Program: Mame 1.40
          MAME command-line options: -noautoframeskip -frameskip 0 -skip_gameinfo -effect none -nowaitvsync -nothrottle -nosleep -window -mt -str 60

          -O2 -march=native -mfpmath=sse -msse4.2 -ffast-math
          cyber commando 209.14%
          cyber sled 123.52%
          radikal bikers 169.88%
          star gladiator 396.43%
          virtua fighter kids 185.24%

          -O3 -march=native -mfpmath=sse -msse4.2 -ffast-math
          cyber commando 213.44%
          cyber sled 124.71%
          radikal bikers 172.49%
          star gladiator 384.40%
          virtua fighter kids 187.20%

          Same as above (-O3 etc.) but with PGO, which automatically enables -fbranch-probabilities, -fvpt, -funroll-loops, -fpeel-loops and -ftracer.
          cyber commando 218.23%
          cyber sled 151.83%
          radikal bikers 186.45%
          star gladiator 406.21%
          virtua fighter kids 221.93%

          As much as I hate to admit it, your (yotambien's) comment does have some credibility in these results: even though -O2 only won in one test (thus an anomaly), it was the test with the biggest difference between -O2 and -O3.

          Other than that, PGO (profile-guided optimization) shows that it can increase performance very nicely; I hope LLVM gets this optimization soon as well. Next time I do a Mame benchmark I will do a PGO test with -O2 as well to see what the results are (particularly Star Gladiator). I will also use a larger test case, which may show other instances where -O2 beats -O3.

          Comment


          • #35
            That's interesting. What are the percentages? I mean, I suppose higher is better, but what are they? : D

            On the other hand, the PGO thingy looks like it actually makes a nice difference...

            Comment


            • #36
              Originally posted by yotambien View Post
              That's interesting. What are the percentages? I mean, I suppose higher is better, but what are they? : D

              On the other hand, the PGO thingy looks like it actually makes a nice difference...
              Thanks for not rubbing it in ;D The percentages are relative to the game running at full speed (100%), so in all these tests the emulated games run faster than they should (-nothrottle makes them run as fast as they can). And yes, PGO does make a difference in CPU-intensive programs. The one standout here is Virtua Fighter Kids, which differs from the other games in that its CPU emulation is done through a dynamic recompiler, so it obviously benefits a lot from the things PGO improves, like better branch prediction, loop unrolling, less cache thrashing, etc.

              Comment


              • #37
                The Cyber Sled results are impressive; System 21 is a beast. Which Core i5 model is that, and how are you clocking it?

                Comment


                • #38
                  Originally posted by Ex-Cyber View Post
                  The Cyber Sled results are impressive; System 21 is a beast. Which Core i5 model is that, and how are you clocking it?
                  Err... how do I check the model? cat /proc/cpuinfo only returns Core i5, no particular model as far as I can see. It's overclocked to 3.2GHz (stock 2.67GHz).
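
                  Something like this should print the full model string on most Linux systems (the exact output obviously depends on the machine):

                  grep "model name" /proc/cpuinfo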

                  Comment


                  • #39
                    Great article. IMHO more important than the benchmark results are the rather frequent occurrences where Clang/LLVM failed to compile something. There's a lot of talk out there about how Clang/LLVM is supposedly better than GCC. Rather than theoretical talk, this article brings some hard facts to the table: Clang/LLVM still fails miserably at what it's supposed to do, and where it does succeed the resulting binaries are often slower than GCC-produced binaries.

                    Comment


                    • #40
                      Originally posted by smitty3268 View Post
                      I suspect those tests where O2 outperformed O3 aren't very realistic. They probably have very small code bases that happen to fit into L1 with O2 and get enlarged a bit to only fit in the L2 cache with O3 optimizations, or something like that. Something that i imagine is mostly only true for microbenchmarks rather than a real application.
                      Depends. It seems certain optimizations in the Mesa drivers consisted of making structures smaller so they fit in caches. Caches are really significant in modern computing, which is why -Os is sometimes wicked fast even though it has even fewer optimizations meant for speed than -O2.

                      Comment


                      • #41
                        Originally posted by XorEaxEax View Post
                        While these tests are great (kudos Phoronix!) it's unfortunate that they don't test some of the more advanced optimizations that have come in the later releases. While testing PGO (profile-guided optimization) would be a bit unfair since Clang/LLVM doesn't have this optimization...

                        How would that be unfair? What's the point in comparing either compiler with anything less than its strongest capabilities? If Clang/LLVM doesn't do PGO, that's their problem, nobody else's...

                        Comment


                        • #42
                          Originally posted by Delgarde View Post
                          How would that be unfair? What's the point in comparing either compiler with anything less than its strongest capabilities? If Clang/LLVM doesn't do PGO, that's their problem, nobody else's...
                          The issue with testing PGO is that you have to train the application, which can introduce all sorts of complications into testing. Ideally, the test framework itself would be able to script something but that's a lot of work.

                          Comment


                          • #43
                            Yes, the downside of PGO is that it's not just a matter of adding another flag and away we go. The compiler needs to gather data about how the program actually runs, which makes it a two-stage process. First you compile with -fprofile-generate, which inserts a lot of instrumentation code into your program; you then run the program and try to exercise as many parts of the code as possible (not like playing through every level in a game, but rather making sure the different parts of the code get executed). Once you exit, the program dumps all the gathered data into files, which are then used in the second (final) stage of compilation (-fprofile-use). There, all the gathered data provides a wealth of information for the compiler to use when judging what, when and how to optimize.
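
                            As a rough sketch, assuming a simple single-file build (the program name and training workload below are just placeholders):

                            # stage 1: instrumented build
                            gcc -O3 -march=native -fprofile-generate -o myprog myprog.c
                            # training run: exercise the code paths you care about (writes .gcda profile files on exit)
                            ./myprog --typical-workload
                            # stage 2: rebuild using the gathered profile
                            gcc -O3 -march=native -fprofile-use -o myprog myprog.c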

                            In my experience PGO usually brings a ~10-20% performance increase on CPU-intensive code, which is a real fine boon, but the two-stage compilation process makes it a non-trivial optimization to use. Hence it's most often applied to projects that really need all the performance they can get: encoders, compressors, emulators, etc.

                            Comment


                            • #44
                              And like Smitty said, if you plan on using it routinely you should probably write a script to automate it; I know projects like Firefox and x264 do this.
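
                              A bare-bones sketch of what such a script could look like for a make-based project (the training command is a placeholder, and a real project will need its own flags and care about where the .gcda files end up):

                              #!/bin/sh
                              set -e
                              # stage 1: instrumented build
                              make clean
                              make CFLAGS="-O3 -march=native -fprofile-generate"
                              # training run: placeholder command that exercises the program
                              ./run_training_workload.sh
                              # stage 2: rebuild with the profile (make sure cleaning the objects does not also remove the .gcda files)
                              make clean
                              make CFLAGS="-O3 -march=native -fprofile-use"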

                              Comment


                              • #45
                                Originally posted by XorEaxEax View Post
                                Well, in some tests -O3 loses to -O2, but very slightly. But this is a test from a year ago and I can't even find which version of GCC was used, nor can I see if it was done on 32-bit or 64-bit. I test a lot of packages routinely (Blender, p7zip, Handbrake, Dosbox, Mame etc.) with -O2 and -O3, and -O3 comes out on top.
                                Usually -O3 will lose to -O2 when there is only a megabyte or two of L2 and L3 cache. If the L2 and L3 caches are, say, 128KB, then not only will -O3 lose to -O2, but -O2 will lose to -Os.

                                Comment
