The Performance Between GCC Optimization Levels


  • Cyborg16
    replied
    Good benchmarks!

    Could you please include binary size too, if you do more like this? Also, I'm guessing -Ofast doesn't work everywhere? Hence no result in the PHP benchmark?
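
    A minimal way to check the size side of this, assuming GCC and binutils on Linux (the file and function below are just placeholders): build the same translation unit at each level and compare the .text column reported by size.

        /* bench.c - placeholder file for comparing code size per level.
         *   gcc -O2 -c bench.c -o bench_O2.o
         *   gcc -O3 -c bench.c -o bench_O3.o
         *   gcc -Os -c bench.c -o bench_Os.o
         *   size bench_O2.o bench_O3.o bench_Os.o   # .text = code size
         */
        #include <stddef.h>

        /* A loop that -O3 may unroll/vectorise, typically growing .text. */
        void saxpy(float *y, const float *x, float a, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }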



  • alpha_one_x86
    replied
    GCC vs. LLVM at -Os, and especially compilation times, could be very, very interesting.



  • bobwya
    replied
    Originally posted by mayankleoboy1 View Post
    Is it OK to use -O3 to build the Linux kernel?
    I doubt a combination of carefully written C and handcrafted assembler is going to benefit very much from additional pseudo-smart compiler heuristics...

    Bob



  • bobwya
    replied
    @Michael,

    Hmmm, fairly interesting benchmarks - but a bit predictable in the outcomes... Although I had heard that -Os only took a 10% hit...

    I was hoping that you would have tested more esoteric stuff like the so-called "Graphite" optimisations (-floop-interchange -ftree-loop-distribution -floop-strip-mine -floop-block). I've always been too scared to try these on any applications on my Gentoo install (they are commented out in make.conf).
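
    For reference, the Graphite passes target loop nests like the sketch below (a hypothetical example; whether GCC actually transforms it depends on version and flags):

        /* rowsum.c - as written, the inner loop walks down a column of a
         * row-major array, so every access is a large stride; loop
         * interchange (one of the Graphite passes) can swap the loops so
         * the inner loop becomes stride-1.  Assumed build with a
         * Graphite-enabled GCC:
         *   gcc -O2 -floop-interchange -floop-block -floop-strip-mine \
         *       -ftree-loop-distribution -c rowsum.c
         */
        #define N 1024

        /* out[] is assumed zero-initialised by the caller. */
        void row_sum(double *out, const double in[N][N])
        {
            for (int j = 0; j < N; j++)
                for (int i = 0; i < N; i++)
                    out[i] += in[i][j];
        }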

    I have used LTO (link-time optimisation) with GCC 4.7.1/2 - while I have a list of stuff that falls back to no-LTO, it's not unmanageable. Naturally it doesn't appear to make much difference in day-to-day usage.
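
    A minimal -flto sketch (made-up files; the point is that the call in main.c can only be inlined once the link step sees both bodies):

        /* util.c */
        int clamp(int v, int lo, int hi)
        {
            return v < lo ? lo : (v > hi ? hi : v);
        }

        /* main.c - without LTO, clamp() is an opaque external call; with
         * -flto the link-time pass can inline it across the file boundary.
         *   gcc -O2 -flto -c util.c
         *   gcc -O2 -flto -c main.c
         *   gcc -O2 -flto util.o main.o -o demo
         */
        extern int clamp(int v, int lo, int hi);

        int main(void)
        {
            return clamp(42, 0, 10);
        }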

    Bob



  • 4d4c47
    replied
    Originally posted by mayankleoboy1 View Post
    The GCC 4.7 optimisation guide specifically says that using -O3 is not recommended over -O2, and that -O3 was faster 'in the past' but is now not faster than -O2.

    Is it OK to use -O3 to build the Linux kernel?
    scriptkernel-x.x.x.sh = BFS + BFQ + CFLAGS -march=native -Ofast

    http://sourceforge.net/projects/scriptkernel/files/

    scriptgcc-4.7.2_UBUNTU12_64BITS.sh = script that compiles GCC 4.7.2 from source automatically on Ubuntu 12.04+

    http://sourceforge.net/projects/scri...TS.sh/download


    ...




  • mayankleoboy1
    replied
    The GCC 4.7 optimisation guide specifically says that using -O3 is not recommended over -O2, and that -O3 was faster 'in the past' but is now not faster than -O2.

    Is it OK to use -O3 to build the Linux kernel?
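
    For context, the main thing -O3 adds on top of -O2 in the GCC 4.7 era is -ftree-vectorize plus more aggressive inlining, so it is loops like this made-up one where -O3 can pull ahead, at the cost of larger code:

        /* scale.c - scalar at -O2 (on GCC 4.7), auto-vectorised at -O3.
         *   gcc -O2 -c scale.c
         *   gcc -O3 -c scale.c -ftree-vectorizer-verbose=1
         *   (newer GCC reports the same via -fopt-info-vec)
         */
        void scale(float *restrict dst, const float *restrict src,
                   float k, int n)
        {
            for (int i = 0; i < n; i++)
                dst[i] = src[i] * k;
        }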



  • ryao
    replied
    Originally posted by DaemonFC View Post
    -O2 and -O3 can actually produce massive binary output size increases for not much gain over -Os.

    When you compile Mozilla software with -O3, you will get a much larger binary size, which can actually make it take longer to load and make the resulting program take up more space in RAM. I think Mozilla recommends -O2, but I've seen some distributions use -Os, which doesn't make the binaries much smaller but can hurt Firefox's score on things like SunSpider or Google's V8 benchmark. (-O3 doesn't help it enough to be worth the cost in load times and additional RAM usage.)

    Obviously some things benefit so much from -O3 that it becomes worth the tradeoff in longer load times and higher RAM consumption. You can't take that for granted, though.

    Yes, there is such a thing as being too aggressive with optimization level. Unfortunately, it's hard to always know when you've gone too far because it varies from program to program. Just use Fedora and be happy. They usually do OK with things like this.
    -O2 -march=native is generally considered to be optimal outside of special cases.
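
    As a small illustration of what the -march=native half buys (made-up function; the exact outcome depends on the host CPU):

        /* popcnt.c - a plain -O2 build lowers this builtin to a bit-twiddling
         * sequence or a libgcc call; -O2 -march=native on a CPU with POPCNT
         * turns it into a single instruction.
         *   gcc -O2 -S popcnt.c -o generic.s
         *   gcc -O2 -march=native -S popcnt.c -o native.s
         */
        unsigned bits_set(unsigned long x)
        {
            return (unsigned)__builtin_popcountl(x);
        }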



  • DaemonFC
    replied
    -O2 and -O3 can actually produce massive binary output size increases for not much gain over -Os.

    When you compile Mozilla software with -O3, you will get a much larger binary size, which can actually make it take longer to load and make the resulting program take up more space in RAM. I think Mozilla recommends -O2, but I've seen some distributions use -Os, which doesn't make the binaries much smaller but can hurt Firefox's score on things like SunSpider or Google's V8 benchmark. (-O3 doesn't help it enough to be worth the cost in load times and additional RAM usage.)

    Obviously some things benefit so much from -O3 that it becomes worth the tradeoff in longer load times and higher RAM consumption. You can't take that for granted, though.

    Yes, there is such a thing as being too aggressive with optimization level. Unfortunately, it's hard to always know when you've gone too far because it varies from program to program. Just use Fedora and be happy. They usually do OK with things like this.



  • chithanh
    replied
    Indeed, some optimizations will work better in combination with -march settings. The cache issue will show in benchmarks with highly parallelized workloads (such as web serving or databases with many clients).

    Regarding the article, it is interesting how the selection of benchmarks emphasizes floating-point-heavy code, since this is what benefits a lot from -O3 (and -Ofast, though that may cause calculation results to differ from what you expect).
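
    A tiny illustration of that -Ofast caveat, with made-up numbers (whether the compiler actually reassociates here varies by version): -Ofast implies -ffast-math, which permits reassociation and assumes no NaNs, so both results below are allowed to change.

        /* fastmath.c
         *   gcc -O2    fastmath.c -o strict && ./strict
         *   gcc -Ofast fastmath.c -o fast   && ./fast
         */
        #include <stdio.h>

        int main(void)
        {
            volatile double big = 1e20;      /* keep the inputs opaque */
            double c = (big - 1e20) + 1.0;   /* 1.0 in IEEE order; may become
                                                0.0 if the sum is reassociated */
            volatile double z = 0.0;
            double n = z / z;                /* NaN at run time */
            /* -ffinite-math-only (part of -ffast-math) lets the compiler
               fold this self-comparison to false. */
            printf("%g %d\n", c, n != n);
            return 0;
        }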

    For some historic reference, Linux Magazine already ran a comparison of -Os, -O2 and -O3 on Gentoo vs. Ubuntu a while back: http://www.linux-mag.com/id/7574/

