Originally posted by mtippett
The point here is that supplying optimization-related compiler flags is normal practice with gcc, and it is essentially required to get sensibly performing code. We could argue over '-O2' versus '-O3', but sticking with the "standard configuration" is still a choice, and by no means an objective one, all the more so since building without flags is not common practice among either packagers or package maintainers.
What if the gcc developers suddenly decided, "hey, let's enable -fomg-awesome-optimization by default," and it had benefits but made code take 20x as long to compile? What if they made gcc emit executables stuffed with debugging information? Would you have tested those configurations?
Now, we can argue about defaults, but it's really common knowledge what you should and shouldn't use for building packages, and what to expect when you pass no additional options. Quoting the manpage: "-O0 Reduce compilation time and make debugging produce the expected results. This is the default." Besides, a compiler is not a tool aimed at the average user.
I propose including compile-time measurements as well. That could dispel concerns that some compiler option is unfair (e.g. an overly aggressive, brute-force optimization).
But apart from that, it's probably best to ask around among packagers and application maintainers about which flags they recommend for global use. You'll find the answer isn't far from a simple "-O2" in gcc's case, and I'm sure something similar can be settled on for the other compilers.