GCC 4.7 Link-Time Optimization Performance
Phoronix: GCC 4.7 Link-Time Optimization Performance
With the recent interest regarding Link-Time Optimization support within the Linux kernel by GCC, here are some benchmarks of the latest stable release of GCC (v4.7.1) when benchmarking several common open-source projects with and without the performance-enhancing LTO compiler support.
The GCC developers always mention that building Firefox with LTO now uses less RAM, etc.
But, what differences are there with large and complicated software like Firefox?
PHP takes three times as long to compile? Someone made a bad assumption there, or else something's wrong.
But I thought the kernel generally had pretty low CPU usage anyway (I guess the BYTE benchmark being an exception). I'm unsure why big speed-ups in general software usage would be expected from optimisations the compiler can do to the kernel.
some PGO benchmarks
Along with LTO, I am interested in seeing some PGO benchmarks.
Where did you get the kernel stuff from? The test is about the effect of the -flto compile flag on application performance (and compile time).
Originally Posted by Cyborg16
I wish the Apache benchmark had more information. It may be good enough for gauging relative performance gain, but it says very little on its own.
On one of my machines I went from 5000 req/s to 26000 req/s on a 6KB file by changing options like keepalive and the concurrency level.
Moreover, there is little LTO can do if Apache uses dynamic modules. All the hooks and code paths still need to be there in case a module needs them.
It would be more interesting to benchmark Apache compiled with the static-modules option.
Ah. Speed-reading and noticing the first link in the article. Thanks for pointing out my mistake!
Originally Posted by orome
PGO is trickier. For any given app that you want to compile with PGO, you must write a script that runs the program on some representative task in order to generate a profile. If the profiling tasks are not representative, you may end up optimising the wrong paths.
Originally Posted by mayankleoboy1
I could be wrong, but...
If you are going to be using LTO, I don't think you are supposed to care about how long it takes to compile. Of course it's going to take longer, because you are optimizing the code on a global rather than local scale. If you use LTO, you are already saying you care more about the speed of the binary than about how long it takes to build. It is good to know the compile-time hit that LTO takes, but I don't think it's really that relevant when using LTO. It is definitely a finished-build option, not a debugging/development option.
I'd be very interested in how the binary size changes when using LTO. While there is more inlining, there may also be some dead-code elimination and other GCC magic. Just curious whether you can say: "when using LTO, the binary size will in general (in/de)crease".
Agreed. I am impressed that it's only 3 times longer. If there are 100 source files, then the optimiser has 100 times as much to think about at once. I guess the real slowdown comes when that makes you hit swap.
Originally Posted by FourDMusic
Well, that's of course only valid if you're building the source yourself. If LTO goes into pre-compiled binaries, compile time becomes irrelevant to the user.
Originally Posted by ssam