CompilerDeathMatch 64bit Final results


  • #11
    Originally posted by Jimbo View Post
    Thanks again for the benchmarks. The graphs are prettier now.

    wow! I agree, GCC with custom flags rocks!!

    GCC apache (request per second)

    custom: 4117
    O2: 2478
    O3: 2385

    nearly a 70% performance improvement over -O2?

    GCC 7-ZIP (MIPS)

    custom: 2105
    O2: 1731
    O3: 1570
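For reference, the relative gains quoted above are easy to check; a quick sketch using the Apache and 7-Zip numbers from the post:

```python
# Percent improvement of the custom-flag build over the -O2 and -O3 builds,
# using the scores quoted above (higher is better for both benchmarks).

def improvement(custom, baseline):
    """Percent improvement of `custom` relative to `baseline`."""
    return 100.0 * (custom - baseline) / baseline

scores = {
    "apache_rps": {"custom": 4117, "O2": 2478, "O3": 2385},
    "7zip_mips":  {"custom": 2105, "O2": 1731, "O3": 1570},
}

for test, s in scores.items():
    for base in ("O2", "O3"):
        print(f"{test}: custom vs {base}: "
              f"{improvement(s['custom'], s[base]):+.1f}%")
# Apache: about +66% over -O2 and +73% over -O3; 7-Zip: about +22% and +34%.
```

So the custom-flag Apache gain works out to roughly 66% over -O2 rather than 90%.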

    In general, it seems that GCC performs very well. I have run a few tests of my own comparing GCC and ICC and found similar results; the only cases where ICC really seems to outperform GCC are multi-threaded applications like mplayer-mt or c-ray-mt.

    Indeed. On the other hand, I think an interesting take-home message from these benchmarks is that, even with normal flags, there are serious alternatives with comparable performance (most notably Open64 and Clang).

    Comment


    • #12
      Originally posted by staalmannen View Post
      Indeed. On the other hand, I think an interesting take-home message from these benchmarks is that, even with normal flags, there are serious alternatives with comparable performance (most notably Open64 and Clang).
      And that is with ext4! Not the best filesystem for Apache, if you look at previous Phoronix file system benchmarks.

      But seriously, change the flags sent to Open64, since it can perform a lot better than what you see now in the custom benchmark. Otherwise just leave it at -O2, since that was giving great results anyway.

      Comment


      • #13
        Originally posted by markg85 View Post
        And that is with ext4! Not the best filesystem for Apache, if you look at previous Phoronix file system benchmarks.

        But seriously, change the flags sent to Open64, since it can perform a lot better than what you see now in the custom benchmark. Otherwise just leave it at -O2, since that was giving great results anyway.
        If someone comes up with better custom flags, I am willing to try them. The same goes for Clang.
        I just took the flags suggested in the previous thread.

        Comment


        • #14
          The results of Open64 with custom flags are weird; it performs much better with plain -O2.

          Yeah! Plain -O2 on Open64 is indeed very impressive, as you pointed out, by a noticeable margin. Some distro devs should consider starting to use Open64, at least for CPU-intensive applications.

          Comment


          • #15
            Originally posted by Jimbo View Post
            The results of Open64 with custom flags are weird; it performs much better with plain -O2.

            Yeah! Plain -O2 on Open64 is indeed very impressive, as you pointed out, by a noticeable margin. Some distro devs should consider starting to use Open64, at least for CPU-intensive applications.
            If you got some other suggestions for optimizations I am all ears. If anyone got suggestions for Clang, that would be great too.

            Comment


            • #16
              I have a suggestion for aggregating results.

              For each benchmark, determine the average score across all compilers and flag sets. Then, for each compiler/flag set, determine how far it deviates from that average with (myScore-averageScore)/averageScore. For tests where less is better, multiply that by negative one. Then sum these deviations across tests and divide by the total number of tests. (Compile-time and performance benchmarks should be treated separately, I think.)

              Basically, this would give the average performance improvement (in percent) that a particular compiler or set of flags offers. A critic would be right to point out the limited utility of such an aggregation, but I still think it would be interesting to see.
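As a sketch of the aggregation scheme described above; the test names and scores below are made up purely for illustration:

```python
# For each benchmark, compute each compiler's relative deviation from the
# cross-compiler average, flip the sign for lower-is-better tests, then
# average those deviations per compiler. All input data is invented.

def aggregate(results, lower_is_better):
    """results: {test: {compiler: score}}; returns {compiler: mean deviation}."""
    totals = {}
    counts = {}
    for test, scores in results.items():
        avg = sum(scores.values()) / len(scores)
        for compiler, score in scores.items():
            dev = (score - avg) / avg
            if test in lower_is_better:
                dev = -dev  # for lower-is-better tests, below average is good
            totals[compiler] = totals.get(compiler, 0.0) + dev
            counts[compiler] = counts.get(compiler, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

results = {
    "apache_rps":   {"gcc": 4117, "open64": 3500, "clang": 3800},  # higher is better
    "compile_time": {"gcc": 95,   "open64": 120,  "clang": 80},    # lower is better
}
print(aggregate(results, lower_is_better={"compile_time"}))
```

A positive value means a compiler beats the field average by that fraction across the tests; a negative value means it trails it.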

              Comment


              • #17
                Originally posted by rexstuff View Post
                I have a suggestion for aggregating results.

                For each benchmark, determine the average score across all compilers and flag sets. Then, for each compiler/flag set, determine how far it deviates from that average with (myScore-averageScore)/averageScore. For tests where less is better, multiply that by negative one. Then sum these deviations across tests and divide by the total number of tests. (Compile-time and performance benchmarks should be treated separately, I think.)

                Basically, this would give the average performance improvement (in percent) that a particular compiler or set of flags offers. A critic would be right to point out the limited utility of such an aggregation, but I still think it would be interesting to see.
                I am not 100% sure that I understand exactly what you mean, but if I understand you correctly, you want me to normalize each test and then combine them into some sort of "total performance" measure?

                The problem with this, as far as I can see, is that different people will want to assign different weights to different tests, depending on their needs. In fact, the world is never nicely black-and-white but usually comes in various shades of grey.
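One way to handle the weighting concern is to make the weights an explicit input, so each reader can aggregate according to their own priorities. A minimal sketch; the normalized deviations and the two weight profiles below are invented for illustration:

```python
# Weighted aggregate of normalized per-test deviations; each reader supplies
# weights reflecting their own priorities. All numbers are illustrative.

def weighted_score(normalized, weights):
    """normalized: {test: relative deviation}; weights: {test: weight}."""
    total_weight = sum(weights.values())
    return sum(normalized[t] * w for t, w in weights.items()) / total_weight

normalized = {"apache": 0.66, "7zip": 0.22, "compile_time": -0.30}

server_admin = {"apache": 3,   "7zip": 1, "compile_time": 0.5}  # cares about throughput
developer    = {"apache": 0.5, "7zip": 1, "compile_time": 3}    # cares about build speed

print(weighted_score(normalized, server_admin))  # positive: a win for this profile
print(weighted_score(normalized, developer))     # negative: a loss for this profile
```

The same raw results can come out as a net win for one profile and a net loss for another, which is exactly the shades-of-grey point.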

                Comment


                • #18
                  Originally posted by rexstuff View Post
                  I have a suggestion for aggregating results.

                  For each benchmark, determine the average score across all compilers and flag sets. Then, for each compiler/flag set, determine how far it deviates from that average with (myScore-averageScore)/averageScore. For tests where less is better, multiply that by negative one. Then sum these deviations across tests and divide by the total number of tests. (Compile-time and performance benchmarks should be treated separately, I think.)

                  Basically, this would give the average performance improvement (in percent) that a particular compiler or set of flags offers. A critic would be right to point out the limited utility of such an aggregation, but I still think it would be interesting to see.
                  Well..

                  With OpenBenchmarking.org (launching in just over a week), you can take the results and collapse them by either optimization level (-O2, -O1, -Os) or compiler (gcc, icc, etc.). You can then select one compiler set (aggregated or not) and normalize against that. You can also order by performance.

                  We'll be there in just over a week, stay tuned.

                  Comment


                  • #19
                    Fortran

                    @staalmannen:

                    Would you mind testing Fortran compilers as well? I think gfortran, g95, open64 and ifort would be the most interesting.

                    Thanks for your time!

                    Comment


                    • #20
                      Originally posted by HokTar View Post
                      @staalmannen:

                      Would you mind testing Fortran compilers as well? I think gfortran, g95, open64 and ifort would be the most interesting.

                      Thanks for your time!
                      The Phoronix Test Suite 3.0 compiler suite requires a Fortran compiler too. Because of this, I have packaged fort77 so that the C compilers which cannot handle Fortran themselves can still be included in the comparisons.

                      I am currently playing with unusual compilers that are only available for 32-bit.

                      As we speak, I am running KenCC (the Plan 9 C compiler, an ancestor of the Go compiler) through the pts/compilation suite with:
                      CC=/opt/plan9/bin/pcc
                      CXX=$CC (so that no other C++ compiler contributes results; I am looking at cfront to get C++ results here, but I have not found a wrapper like fort77 for cfront)
                      F77=fort77
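For anyone wanting to reproduce a run like this, the override might look something like the following. The paths, compiler names, and suite name are taken from the post; whether your PTS version honors these particular environment overrides, and the exact `benchmark` invocation, are assumptions worth verifying against the PTS documentation:

```shell
# Point the Phoronix Test Suite at the Plan 9 compiler (paths as in the post).
export CC=/opt/plan9/bin/pcc
export CXX=$CC        # reuse the C compiler so no other C++ compiler leaks in
export F77=fort77     # fort77 wrapper for the Fortran tests

phoronix-test-suite benchmark pts/compilation
```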

                      Extremely surprisingly, the first 3 of the 5 tests in pts/compilation fail, whereas the MPlayer and Linux kernel compilations give values... (if those were real, it would be really cool, since a future Glendix could be self-hosting with a Plan 9 userspace, but I doubt it).


                      One other problem I have right now: I cannot register on openbenchmarking.org; the confirmation link only leads to the login page, and logging in fails...

                      Comment
