Benchmarks Of GCC 4.2 Through GCC 4.7 Compilers

  • Benchmarks Of GCC 4.2 Through GCC 4.7 Compilers

    Phoronix: Benchmarks Of GCC 4.2 Through GCC 4.7 Compilers

    To see how the GCC 4.7 release is shaping up, for your viewing pleasure today are benchmarks of GCC 4.2 through a recent GCC 4.7 development snapshot. GCC 4.7 will be released next March/April with many significant changes, so here are some numbers to find out whether you can expect any broad performance improvements. Making things more interesting, the benchmarks were done on an AMD FX-8150, so you can see how the performance of this latest-generation AMD processor architecture is affected when going back to GNU Compiler Collection releases from long before this open-source compiler had any optimizations in place for it.


  • #2
    Meaningful tests for once (except maybe ffmpeg, which uses assembler for its critical code).

    It would be interesting to bisect down the regressions in 4.7 - I was under the impression we should be seeing speed-ups with 4.7, not slowdowns.

    Can't wait to see the same benchmarks on your Sandy Bridge 2630QM.

    You should really consider adding Flattr to your articles.



    • #3
      measuring optimization potential

      This test basically measures the optimization potential that inner loops have. In other words, it measures how badly the inner loop was written. If your inner loop relies heavily on the compiler figuring out how best to convert it to machine code, you should really work on it. I am looking at you, GraphicsMagick. Encoder writers, on the other hand, obviously figured this out long ago and made their inner-loop performance compiler-agnostic.
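
      As a minimal sketch of that point (purely illustrative, not GraphicsMagick's actual code), the first routine below leaves everything to the optimizer, while the second is written so that even an older GCC emits reasonable machine code:

      /* Build with e.g.: gcc -std=c99 -O2 -c scale.c */
      #include <stddef.h>

      /* Leans on the optimizer: *gain may be re-read every iteration unless the
       * compiler can prove that dst, src and gain do not alias. */
      void scale_naive(float *dst, const float *src, const float *gain, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              dst[i] = src[i] * *gain;
      }

      /* Compiler-agnostic version: the load is hoisted by hand, restrict removes
       * the aliasing ambiguity, and a manual 4-way unroll gives even a compiler
       * with a weak optimizer little left to figure out. */
      void scale_tuned(float *restrict dst, const float *restrict src,
                       const float *restrict gain, size_t n)
      {
          const float g = *gain;
          size_t i = 0;
          for (; i + 4 <= n; i += 4) {
              dst[i + 0] = src[i + 0] * g;
              dst[i + 1] = src[i + 1] * g;
              dst[i + 2] = src[i + 2] * g;
              dst[i + 3] = src[i + 3] * g;
          }
          for (; i < n; i++)
              dst[i] = src[i] * g;
      }

      How far the naive version closes the gap depends entirely on which GCC release is doing the compiling, which is exactly what a benchmark like this ends up measuring.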



      • #4
        I'd rather have seen a comparison of how well the compilers optimize than one of how the gradual addition of support for a particular new processor's features affects performance. Graphite was added at some point across these versions, and I haven't yet seen any tests of it.

        -march implies -mtune. No need for both.



        • #5
          Since GCC 4.7 is not out yet, it's very likely that performance and optimization have yet to be fully addressed, so this comparison may be a bit ahead of its time.



          • #6
            I'm totally confused - is this a test of GCC compilers or GCC + LLVM backend?

            The table on the first page lists an LLVM back-end for all GCC versions, so I've no idea what to think.



            • #7
              -march=native was a great choice, but maybe -O2 would have been better than -O3; the GCC documentation recommends -O2, as -O3 does some risky optimisations.
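
              For reference, a rough sketch of what the two levels mean on a compiler of this era (assumed typical GCC 4.x behaviour; gcc -Q -O3 --help=optimizers will list what your own version enables):

              /*
               *   gcc -O2 -march=native -c add.c
               *   gcc -O3 -march=native -c add.c   additionally enables, among others,
               *                                    -finline-functions and, on GCC of
               *                                    this era, -ftree-vectorize
               *
               * -march=native already implies the matching -mtune, as noted in #4.
               * A loop like the one below is where the two levels diverge the most:
               * how well the -O3 auto-vectorizer handles it varies considerably
               * across the GCC versions being benchmarked.
               */
              void add(float *dst, const float *a, const float *b, int n)
              {
                  int i;
                  for (i = 0; i < n; i++)
                      dst[i] = a[i] + b[i];
              }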



              • #8
                Originally posted by bug77:
                Since GCC 4.7 is not out yet, it's very likely that performance and optimization have yet to be fully addressed, so this comparison may be a bit ahead of its time.
                No, it is very timely. Now the developers have a chance to address these issues.



                • #9
                  Originally posted by oglueck:
                  This test basically measures the optimization potential that inner loops have. In other words, it measures how badly the inner loop was written. If your inner loop relies heavily on the compiler figuring out how best to convert it to machine code, you should really work on it. I am looking at you, GraphicsMagick. Encoder writers, on the other hand, obviously figured this out long ago and made their inner-loop performance compiler-agnostic.
                  Encoders optimize by writing those inner loops in assembly by hand, completely bypassing the compiler.
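
                  A minimal sketch of that pattern (hypothetical, not x264's or ffmpeg's actual code): the hot kernel exists once as portable C and once hand-optimized, and the encoder calls whichever version suits the CPU, so the compiler's optimizer barely gets a say:

                  /* Build with e.g.: gcc -std=c99 -O2 -msse2 -c sad.c
                   * The SSE2 intrinsics stand in for the hand-written .asm a real
                   * encoder would ship, and selection is done at compile time here
                   * for brevity; real encoders probe CPUID at runtime. */
                  #include <stdint.h>
                  #include <stdlib.h>

                  /* Portable C fallback: sum of absolute differences. */
                  int sad_c(const uint8_t *a, const uint8_t *b, int n)
                  {
                      int i, sum = 0;
                      for (i = 0; i < n; i++)
                          sum += abs((int)a[i] - (int)b[i]);
                      return sum;
                  }

                  #ifdef __SSE2__
                  #include <emmintrin.h>
                  int sad_sse2(const uint8_t *a, const uint8_t *b, int n)
                  {
                      __m128i acc = _mm_setzero_si128();
                      uint64_t halves[2];
                      int i = 0, sum;
                      for (; i + 16 <= n; i += 16) {
                          __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
                          __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
                          /* PSADBW: 16 absolute differences summed by one instruction */
                          acc = _mm_add_epi64(acc, _mm_sad_epu8(va, vb));
                      }
                      _mm_storeu_si128((__m128i *)halves, acc);
                      sum = (int)(halves[0] + halves[1]);
                      for (; i < n; i++)
                          sum += abs((int)a[i] - (int)b[i]);
                      return sum;
                  }
                  #endif

                  /* The rest of the encoder only ever calls through this pointer. */
                  int (*sad)(const uint8_t *, const uint8_t *, int) =
                  #ifdef __SSE2__
                      sad_sse2;
                  #else
                      sad_c;
                  #endif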



                  • #10
                    Originally posted by FireBurn:
                    Meaningful tests for once (except maybe ffmpeg, which uses assembler for its critical code).

                    It would be interesting to bisect down the regressions in 4.7 - I was under the impression we should be seeing speed-ups with 4.7, not slowdowns.

                    Can't wait to see the same benchmarks on your Sandy Bridge 2630QM.

                    You should really consider adding Flattr to your articles.
                    Well, if YOU consider an old 8.2 release (the ffmpeg/avconv devs still recommend you use the latest git version, or at least a current 0.8.6) doing a virtually useless, antiquated and tiny AVI-to-VCD encode (who today even uses VCD? It's all HD 1080p BR, or at least 720p from HD MKV sources) anything like a reasonable test in 2011/12...

                    Or, come to that, using an even older x264 (v2010-11-22), without lots of the current SIMD improvements or any AVX, to encode non-HD content on an AVX-ready CPU is far beyond reason for a speed test, when a two-minute git pull is all that's needed to get the latest code and see large speed improvements.

                    It's not like the ffmpeg and x264 devs won't give you advice on suitable samples and command lines best suited to the git code, to use and integrate into your current Phoronix Test Suite.
                    Last edited by popper; 02 December 2011, 04:23 PM.
