Compiler Benchmarks Of GCC, LLVM-GCC, DragonEgg, Clang


  • energyman
    replied
    So you have no evidence at all?

    Anecdotal evidence does not count.

    Besides, which CPU manufactured in the last 12 years does not have MMX?


  • Ranguvar
    replied
    Originally posted by energyman:
    Bold claims. Evidence?
    Years of encoding videos with AviSynth and x264, reading Doom9 forums, and sitting in #x264 and #x264-dev.

    If you want to do a benchmark with and without --disable-asm, be my guest. I already know what the answer will be, so I'm not too motivated to do so.

    The testcase would not be "completely detached from reality" -- as I said, their C is tuned as well, and used when a CPU does not have the features that their assembly needs (sometimes only MMX, sometimes SSE4, sometimes a whole different architecture than x86).
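
    To make the asm-versus-C-fallback point concrete, here is a minimal sketch. It is not x264's actual code: the function names, the use of SSE2 intrinsics standing in for hand-written assembly, and GCC's __builtin_cpu_supports are my own illustration, and it assumes an x86 target where SSE2 intrinsics are available. The idea is that the library picks a SIMD routine at runtime and falls back to tuned C when the CPU lacks the needed instructions; building with --disable-asm simply forces the C path everywhere.

    #include <stddef.h>
    #include <stdint.h>
    #include <emmintrin.h>  /* SSE2 intrinsics, standing in for hand-written asm */

    /* Portable C fallback: sum of absolute differences over n bytes. */
    static uint32_t sad_c(const uint8_t *a, const uint8_t *b, size_t n)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += (a[i] > b[i]) ? (uint32_t)(a[i] - b[i]) : (uint32_t)(b[i] - a[i]);
        return sum;
    }

    /* SIMD version; in x264 this role is played by hand-written assembly. */
    static uint32_t sad_sse2(const uint8_t *a, const uint8_t *b, size_t n)
    {
        __m128i acc = _mm_setzero_si128();
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
            __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
            acc = _mm_add_epi64(acc, _mm_sad_epu8(va, vb)); /* PSADBW */
        }
        uint32_t sum = (uint32_t)_mm_cvtsi128_si32(acc)
                     + (uint32_t)_mm_cvtsi128_si32(_mm_srli_si128(acc, 8));
        for (; i < n; i++) /* scalar tail */
            sum += (a[i] > b[i]) ? (uint32_t)(a[i] - b[i]) : (uint32_t)(b[i] - a[i]);
        return sum;
    }

    /* Chosen once at init; a --disable-asm style build keeps the C fallback. */
    static uint32_t (*sad)(const uint8_t *, const uint8_t *, size_t) = sad_c;

    void init_dispatch(int enable_asm)
    {
        if (enable_asm && __builtin_cpu_supports("sse2"))
            sad = sad_sse2;
    }

    A real encoder then calls through the sad pointer in its hot loops; the point for the benchmark question is that a --disable-asm build only ever measures the sad_c path, i.e. what the compiler alone can do.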


  • energyman
    replied
    Bold claims. Evidence?


  • Ranguvar
    replied
    Originally posted by energyman:
    It would be more interesting to compare that hand-written asm with GCC-generated code. Unless that is done, there is no reason to turn off assembly just to create a test case that is completely detached from reality.
    x264's hand-written asm absolutely destroys GCC's. It's not even close. At least half of all semi-recent x264 development has gone into the assembly, and the developers are well known to trash GCC's (and most other compilers') generated assembly. They tune their C as well, but no compiler could compare to what they've done in asm.


  • ssam
    replied
    I tested GCC 4.3, 4.4, and 4.5 (shortly before its release) on a Fortran code I use. -O3 beats -O2, and there is a trend of improvement between releases.
    http://www.hep.man.ac.uk/u/sam/zgoubi-optimise/


  • XorEaxEax
    replied
    Originally posted by smitty3268:
    I was also wondering if that was the case; those compile times just seem too out of whack otherwise.
    Well, it's obviously either a lot of debug code or a massive regression, or both. AFAIK this snapshot was the last one before the feature freeze, so I guess anything could have been thrown in at the last minute.


  • smitty3268
    replied
    Originally posted by redi:
    Maybe I missed it, but I didn't see any mention of using --enable-checking=release or --disable-checking for the GCC 4.6 snapshot build.

    By default snapshots have lots of checks, which make compile time MUCH slower. Those checks are disabled for releases. That's presumably the equivalent of Clang's --disable-assertions.

    If you didn't build GCC 4.6 without checking, that would definitely explain the slow compile times for 4.6.
    I was also wondering if that was the case; those compile times just seem too out of whack otherwise.


  • redi
    replied
    GCC 4.6 compile times

    Maybe I missed it, but I didn't see any mention of using --enable-checking=release or --disable-checking for the GCC 4.6 snapshot build.

    By default snapshots have lots of checks, which make compile time MUCH slower. Those checks are disabled for releases. That's presumably the equivalent of Clang's --disable-assertions.

    If you didn't build GCC 4.6 without checking, that would definitely explain the slow compile times for 4.6.
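
    As a rough illustration of why this matters, here is a minimal sketch (my own example, not GCC's actual source) of the kind of internal consistency check a development snapshot compiles in and a release-checking build compiles out. When the compiler runs thousands of such checks for every function it compiles, the difference adds up across a big build.

    #include <stdio.h>
    #include <stdlib.h>

    #ifdef ENABLE_CHECKING                 /* development/snapshot-style build */
    #  define CHECK(expr)                                                  \
          do {                                                             \
              if (!(expr)) {                                               \
                  fprintf(stderr, "internal check failed: %s\n", #expr);   \
                  abort();                                                 \
              }                                                            \
          } while (0)
    #else                                  /* release-style build: checks vanish */
    #  define CHECK(expr) ((void)0)
    #endif

    /* Toy stand-in for a hot compiler routine that sanity-checks its input. */
    static int fold_add(int a, int b)
    {
        CHECK(b < 0 || a <= 2147483647 - b);   /* no signed overflow */
        return a + b;
    }

    int main(void)
    {
        /* Build with -DENABLE_CHECKING to keep the checks, or without it to
         * compile them away, analogous to the --enable-checking settings. */
        printf("%d\n", fold_add(40, 2));
        return 0;
    }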


  • Rob72
    replied
    Originally posted by Yezu:
    I agree with that; also, some other proprietary compilers might be compared (IBM, HP, CodeWarrior).

    Also what about some ARM compiler benchmarks?
    I agree, that would be very interesting. The problem, though, is that they target different architectures.

    I have no experience with CodeWarrior, but they seem to target embedded platforms.

    I would add PathScale to the list; they have x86 compilers and used to be our favourite with AMD systems. But lately it is all Intel.

    Also, the IBM compilers should be good, but they are not available for x86. So assuming you have access to a POWER or PowerPC machine, you can only compare them to GCC on the same machine.


  • Shining Arcanine
    replied
    Originally posted by XorEaxEax:
    Well, in some tests -O3 loses to -O2, but only very slightly. But this is a test from a year ago and I can't even find which version of GCC was used, nor can I see if it was done on 32-bit or 64-bit. I test a lot of packages routinely (Blender, p7zip, HandBrake, DOSBox, MAME, etc.) with -O2 and -O3, and -O3 comes out on top.
    Usually -O3 will lose to -O2 when there is only a megabyte or two of L2 and L3 cache. If the L2 and L3 cache are, say, 128KB, then not only will -O3 lose to -O2, but -O2 will lose to -Os.
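
    For a concrete way to see the size effect behind this, here is a tiny sketch of my own (the file name and the exact outcome are assumptions, not measured results): comparing the object size at each optimization level with binutils' size tool shows how the code grows, and larger code is what starts missing a small instruction cache.

    #include <stddef.h>

    /* Summing an int array: simple enough that every level compiles it,
     * but -O3's extra passes (auto-vectorization, unrolling, and in larger
     * programs more aggressive inlining) tend to emit more code than -O2,
     * while -Os favours the smallest code. A rough way to compare,
     * assuming this file is saved as sum.c:
     *   gcc -O2 -c sum.c && size sum.o
     *   gcc -O3 -c sum.c && size sum.o
     *   gcc -Os -c sum.c && size sum.o
     * On CPUs with small caches, the bigger -O3 output can miss the
     * instruction cache more often, which is the effect described above. */
    int sum(const int *a, size_t n)
    {
        int acc = 0;
        for (size_t i = 0; i < n; i++)
            acc += a[i];
        return acc;
    }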
