
Benchmarking LLVM & Clang Against GCC 4.5


  • Benchmarking LLVM & Clang Against GCC 4.5

    Phoronix: Benchmarking LLVM & Clang Against GCC 4.5

    With the recent release of GCC 4.5 and the release of LLVM 2.7 expected in the coming days, we decided to see how the performance of GCC compares to that of LLVM. For this testing we have results from GCC 4.3/4.4/4.5 compared to LLVM using its GCC front-end (LLVM-GCC) and to Clang, the native C/C++ compiler front-end for the Low-Level Virtual Machine.


  • #2
    Is this with or without Graphite, -flto, -fwhole-program?
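    (For context: these are optimisation features that GCC does not enable by default. Assuming GCC 4.5, turning them all on would look something like

        gcc -O3 -floop-interchange -floop-strip-mine -floop-block \
            -flto -fwhole-program -o bench bench.c

    where the -floop-* switches are the Graphite loop transformations. None of them are implied by plain -O2/-O3, so a default build won't use them.)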



  • #3
    Originally posted by Smorg View Post
    Is this with or without Graphite, -flto, -fwhole-program?
    It's got to be without, or Michael would have mentioned it. I'm guessing he just typed make without altering any of the default options, since he doesn't talk about that at all.
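    (To illustrate the difference - hypothetical invocations, since the exact variable names depend on each project's build system:

        make                                  # whatever default CC/CFLAGS the project ships
        make CC=gcc-4.5 CFLAGS="-O3 -flto"    # explicitly chosen compiler and flags

    A plain make like the first line would indeed leave every non-default optimisation off.)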



  • #4
    As I mentioned in the "request for benchmarks" thread, I think it would be really cool to see how an entire system built from the ground up compares between the two compilers. Since ClangBSD is now self-hosting, it should be possible to benchmark ClangBSD against stock FreeBSD and get a rough idea of how feasible a compiler change will be in the near future (FreeBSD 9 or later). Importantly, FreeBSD still uses an old GCC from before the licence change from GPLv2 to GPLv3, due to its strict no-GPLv3 policy. This might make LLVM/Clang more competitive there.
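    (A sketch of what that comparison could look like - hypothetical commands, since the exact make knobs varied during the ClangBSD effort:

        time make buildworld CC=gcc CXX=g++          # stock FreeBSD toolchain
        time make buildworld CC=clang CXX=clang++    # ClangBSD-style build

    The same tree built both ways could then be benchmarked under identical workloads.)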



  • #5
    staalmannen, I don't think recompiling the full system would be as relevant as you think. The only time you're going to have code running that isn't from the program in question is when there's overhead: system calls, etc. But the point of CPU-based benchmarks is to avoid that. [Citation Needed] :-)
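    (One quick way to check where a benchmark's time actually goes - the numbers here are made up for illustration:

        time ./some-benchmark
        #   real    1m0.4s
        #   user    1m0.1s   <- the program's own compiled code
        #   sys     0m0.2s   <- kernel work on its behalf

    When user time dominates like this, recompiling the rest of the system changes almost nothing.)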



  • #6
    Michael, why did you use the 4.x.0 versions when there have been many bug fixes since those releases?



  • #7
    Originally posted by FireBurn View Post
    Michael, why did you use the 4.x.0 versions when there have been many bug fixes since those releases?
    I would imagine most (all?) of the bug fixes would be part of each subsequent release; he can't very well do each minor version number, but using the initial releases can show overall improvements.



  • #8
    Originally posted by mirv View Post
    I would imagine most (all?) of the bug fixes would be part of each subsequent release; he can't very well do each minor version number, but using the initial releases can show overall improvements.
    I didn't mean testing each minor release; I meant using the latest point release for each major version, i.e. 4.3.4, 4.4.3, and 4.5.0.

    In the case of 4.3, that's a year and a half's worth of bugfixes!



  • #9
    "Point" taken (sorry, couldn't resist).
    Still, I guess the reason was to use the initial releases as a baseline for comparison, though now that I think of it, the latest point releases might have been more useful - especially as Michael did mention somewhere that GCC 4.4.3 was used to compile all of the GCC versions.

    On a side note, I wonder what will happen in the future as developers code with various compilers in mind - this isn't usually something you would worry about, but it might well show up in some of the hand-coded optimisations.
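    (A minimal C example of the kind of compiler-specific paths that could creep in - note that Clang also defines __GNUC__, so the order of the checks matters:

        #include <stdio.h>

        int main(void)
        {
        #if defined(__clang__)    /* must come before the __GNUC__ check */
            puts("compiled with clang");
        #elif defined(__GNUC__)
            puts("compiled with gcc (or a GCC-compatible compiler)");
        #else
            puts("compiled with something else");
        #endif
            return 0;
        }

    Once projects start keying hand-tuned code off checks like these, benchmark results partly reflect which path each compiler gets, not just its code generation.)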



  • #10
    Time to compile

    I find it funny that the first benchmark result in these kinds of articles is always the 'time to compile'.

    Sure, it's nice if the compiler can build things faster, but I personally care more about whether the resulting binary performs better, and I don't mind double compile times!

    Then again, thinking of those OpenOffice emerges... *shiver*. But even there, if the app performed 10% better, a doubled compile time would be acceptable to me. Three hours or six hours doesn't really make a difference any more anyway.

    I remember compiling kernels on a 486SX that easily took two hours!
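    (Both sides of that trade-off are easy to time - hypothetical commands, with the workload name made up:

        time make -j2               # compile-time cost, paid once per build
        time ./office-workload      # run-time cost, paid on every run

    A one-off doubling of the first number can be worth a lasting 10% cut in the second.)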

