
Compiler Benchmarks Of GCC, LLVM-GCC, DragonEgg, Clang


  • Ranguvar
    replied
    Originally posted by XorEaxEax View Post
    Speaking of x264, in order to really compare the differences between the compilers on this package you really should compile it without the hand-optimized assembly (which I'm assuming you haven't since the results are so similar between all versions of gcc).
    Very true. There's a ./configure parameter, --no-asm or something similar, that accomplishes that, and should be used in Phoronix testing. Michael, please use it when testing compilers. Nearly everything in x264 that can be optimized by using handwritten ASM has been, so you need to fall back to regular C in order to actually test the compiler on anything other than very basic code.
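For reference, the build the poster describes might look roughly like this (a sketch only: in current x264 trees the switch is spelled --disable-asm, but check ./configure --help on your own checkout, and the x264/ path here is just an assumed checkout location):

```shell
# From an x264 source checkout (assumed to be in ./x264), configure
# with the hand-written assembly disabled so benchmarks measure the
# compiler's generated code rather than the asm routines.
cd x264/
./configure --disable-asm
make -j"$(nproc)"
```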



  • energyman
    replied
    Originally posted by nanonyme View Post
    -mtune=native is redundant if you're using -march=native.
    -fomit-frame-pointer breaks debuggability in x86.
    -O3 has bugs and might slow down run-time in many cases.
And since none of the systems is a debugging system, that is fine.
-O3 might or might not have bugs, and might slow things down or speed them up; it depends on the software.

Oh, and setting -mtune after -march is just stupid.



  • energyman
    replied
I hope Michael checked for every compilation that the right flags were used. Some parts of PTS did not honour CFLAGS exports and compiled unoptimized crap on AMD platforms.



  • XorEaxEax
    replied
    Originally posted by yotambien View Post
    You have the performance benchmarks in the next page of that article.
Well, in some tests -O3 loses to -O2, but very slightly. But this is a test from a year ago, and I can't even find which version of GCC was used, nor whether it was done on 32-bit or 64-bit. I test a lot of packages routinely (Blender, p7zip, HandBrake, DOSBox, MAME etc.) with -O2 and -O3, and -O3 comes out on top.



  • yotambien
    replied
    You have the performance benchmarks in the next page of that article.



  • XorEaxEax
    replied
    Originally posted by yotambien View Post
    These benchmarks disagree:

    http://www.linux-mag.com/id/7574/2/
You are confused: these are 'time to compile' numbers, not performance benchmarks. Obviously it will take longer to compile with more optimizations than with fewer, but the resulting binary should be at least as fast, and most likely faster.



  • Yezu
    replied
    Originally posted by Drago View Post
I wish there was a latest Intel C++ compiler benchmark alongside these.
I agree with that; it would also be worth comparing some other proprietary compilers (IBM, HP, CodeWarrior).

    Also what about some ARM compiler benchmarks?



  • XorEaxEax
    replied
While these tests are great (kudos Phoronix!), it's unfortunate that they don't test some of the more advanced optimizations that have come in the later releases. While testing PGO (profile-guided optimization) would be a bit unfair since Clang/LLVM doesn't have it, LTO (link-time optimization) exists in both compilers and would be an interesting comparison. But I can understand that for practical reasons these more advanced optimizations have to be omitted, and since most people stick to -O3 I guess it's overall a fair comparison. Optimizations like PGO are mainly used by projects like Firefox, x264 and emulators, where the added performance really makes a difference.

    Speaking of x264, in order to really compare the differences between the compilers on this package you really should compile it without the hand-optimized assembly (which I'm assuming you haven't since the results are so similar between all versions of gcc).



  • yotambien
    replied
    Originally posted by XorEaxEax View Post
    -O3 has been stable to compile with for ages, I can't recall having encountered any program that compiles with -O2 which has problems with -O3 in years. Also I haven't encountered any cases where -O3 is slower than -O2 in ages, so obviously these tests should be done with -O3, especially since that's where most of the new optimizations will end up.
    These benchmarks disagree:

    http://www.linux-mag.com/id/7574/2/



  • XorEaxEax
    replied
    Originally posted by Drago View Post
I wish there was a latest Intel C++ compiler benchmark alongside these.
Yes, that would be really interesting; sadly, from my past experience it has a lot of compatibility problems.

