
Thread: Benchmarking LLVM & Clang Against GCC 4.5

  1. #1
    Join Date
    Jan 2007
    Posts
    14,829

    Default Benchmarking LLVM & Clang Against GCC 4.5

    Phoronix: Benchmarking LLVM & Clang Against GCC 4.5

    With the recent release of GCC 4.5 and the forthcoming release of LLVM 2.7 that is expected in the coming days, we have decided to see how the performance of GCC compares to that of LLVM. For this testing we have results from GCC 4.3/4.4/4.5 compared to LLVM with its GCC front-end (LLVM-GCC) and against Clang, which is the native C/C++ compiler front-end for the Low-Level Virtual Machine.

    http://www.phoronix.com/vr.php?view=14820

  2. #2
    Join Date
    Mar 2009
    Posts
    141

    Default

    Is this with or without Graphite, -flto, -fwhole-program?

  3. #3
    Join Date
    Oct 2008
    Posts
    3,137

    Default

    Quote Originally Posted by Smorg View Post
    Is this with or without Graphite, -flto, -fwhole-program?
    It's got to be without, or Michael would have mentioned it. I'm guessing he just typed make without altering any of the default options, since he doesn't talk about that at all. (A rough sketch of what enabling those options would look like is included at the end of the thread.)

  4. #4
    Join Date
    Jan 2010
    Location
    Ghent
    Posts
    208

    Default

    As I mentioned in the "request for benchmarks" thread, I think it would be really cool to see how an entire system built from the ground up compares between the two compilers. Since ClangBSD is now self-hosting, it should be possible to benchmark ClangBSD against FreeBSD and get a rough idea of how feasible a compiler switch would be in the near future (FreeBSD 9 or later). Importantly, FreeBSD still uses an old GCC from before the licence change from GPLv2 to GPLv3, due to a strict no-GPLv3 policy. This might make LLVM/Clang more competitive there.

  5. #5
    Join Date
    Jan 2009
    Posts
    88

    Default

    staalmannen, I don't think recompiling the full system would be as relevant as you think. The only time you're running code that isn't from the program in question is overhead: system calls and the like. But the point of CPU-based benchmarks is to avoid exactly that. [Citation Needed] :-)

  6. #6
    Join Date
    Dec 2007
    Location
    Edinburgh, Scotland
    Posts
    579

    Default

    Michael, why did you use the 4.x.0 versions when there have been many bug fixes since those releases?

  7. #7
    Join Date
    Oct 2007
    Posts
    912

    Default

    Quote Originally Posted by FireBurn View Post
    Michael, why did you use the 4.x.0 versions when there have been many bug fixes since those releases?
    I would imagine most (all?) of the bug fixes would be a part of each next release; he can't very well do each minor version number, but using the initial releases can show overall improvements.

  8. #8
    Join Date
    Dec 2007
    Location
    Edinburgh, Scotland
    Posts
    579

    Default

    Quote Originally Posted by mirv View Post
    I would imagine most (all?) of the bug fixes would be a part of each next release; he can't very well do each minor version number, but using the initial releases can show overall improvements.
    I wasn't suggesting testing each minor release; I meant using the latest point release for each major version, i.e. 4.3.4, 4.4.3 and 4.5.0.

    In the case of 4.3, that's a year and a half's worth of bugfixes!

  9. #9
    Join Date
    Oct 2007
    Posts
    912

    Default

    "point" taken (sorry, couldn't resist).
    Still, I guess the reason was to use the initial releases as a common baseline, though now that I think of it, the latest point releases might have been more useful - especially as Michael did mention somewhere that GCC 4.4.3 was used to compile all of the GCC versions.

    On a side note, I wonder what will happen in future as developers code with various compilers in mind - this isn't usually something you would worry about, but it might well play out in some of the hand-coded optimisations.

  10. #10
    Join Date
    Jan 2007
    Posts
    418

    Default Time to compile

    I find it funny that the first benchmark result in these kinds of articles is always the 'time to compile'.

    Sure, it's nice if the compiler can build things faster, but I personally care more about whether the resulting binary performs better, and I don't mind doubled compile times!

    Then again, thinking of those OpenOffice emerges... *shiver*. But even there, if the app performed 10% better, doubling the compile time would be acceptable to me. Three hours or six hours doesn't really make a difference anymore anyway.

    I remember compiling kernels on a 486SX which took 2 hours easy!
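
For reference on the Graphite, -flto and -fwhole-program question raised earlier in the thread: the snippet below is a minimal C sketch, not taken from the article, of the kind of loop nest that Graphite's polyhedral passes and GCC 4.5's new link-time optimisation are aimed at. The file name and the exact flag combination in the leading comment are assumptions for illustration; as guessed above, a stock build would normally not pass any of them.

    /*
     * A minimal sketch (not from the article) of the kind of loop nest that
     * Graphite and link-time optimisation target.  A hypothetical build with
     * those options enabled might look like:
     *
     *   gcc -O3 -fgraphite-identity -floop-block -flto -fwhole-program \
     *       matmul.c -o matmul
     *
     * whereas a plain "just type make" build would typically stick to -O2.
     */
    #include <stdio.h>

    #define N 256

    static double a[N][N], b[N][N], c[N][N];

    int main(void)
    {
        int i, j, k;

        /* Fill the inputs with something deterministic. */
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++) {
                a[i][j] = (double)(i + j);
                b[i][j] = (double)(i - j);
            }

        /* Naive triple loop: the polyhedral (Graphite) passes can tile or
         * interchange nests like this one, while -flto/-fwhole-program let
         * the optimiser see across translation units in larger programs. */
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                for (k = 0; k < N; k++)
                    c[i][j] += a[i][k] * b[k][j];

        printf("c[0][0] = %f\n", c[0][0]);
        return 0;
    }

In a larger multi-file project the same flags would also have to be passed at link time for -flto to actually take effect.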
