Optimizing Mesa Performance With Compiler Flags
Compiler tuning via the CFLAGS/CXXFLAGS can lead to performance improvements for many computational benchmarks, but is there much to gain from optimizing your Mesa build? Here are some benchmark results.
On Thursday, Intel's Eric Anholt posted a patch for the Intel Mesa DRI driver that attempted to default the driver compilation process to using -march=core2. This change is mainly to benefit 32-bit systems, where SSE support can't be assumed by default; with the i965 driver, more often than not it can be assumed an Intel Core 2 processor or newer is in use. (The older Intel processors are generally using the i915 driver.) By setting the -march=core2 flag, i386 builds would now use SSE for floating-point math and emit cmov instructions, plus other performance optimizations.
Eric noted that for 32-bit builds, the GLbenchmark offscreen performance went up by about 0.76% by defaulting to the Core 2 optimizations. The patch was ultimately rejected since it turns out there are still some old Pentium 4 systems that can be found in an i965 driver configuration, where things might break.
Curious to see what other performance changes could come from some basic compiler tuning, I ran some Mesa benchmarks this weekend, inspired by seeing Eric's patch earlier in the week. Testing was done on a high-end Ivy Bridge system running x86_64 software. This is just some weekend benchmarking out of curiosity; in the ideal case the drivers are not CPU-bottlenecked and compiler flags would matter little, but past testing has shown this isn't always the case for the Mesa/Gallium3D drivers.
By default, Mesa doesn't set the -march switch and uses the -O2 optimization level. In this article are benchmarks of -O1, -O2, -O3, and then -O3 while also passing -march=native. The -march=native option applies the CPU micro-architecture optimizations for the Intel Core i7 "Ivy Bridge" processor being used. GCC Link-Time Optimization (LTO) was also tested. While GCC 4.7.2 built out Mesa with LTO and glxinfo was still querying information correctly from the newly-built Mesa stack, when it came to running any OpenGL games/applications, they failed with the GCC LTO build.
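For reference, overriding the flags on a Mesa build of this era looks roughly like the following. This is a minimal sketch assuming the autotools (autogen.sh/configure) workflow used by Mesa 9.1-devel; the --with-dri-drivers value is illustrative, not a record of the exact configure line used for these tests:

```shell
# Build Mesa with custom optimization flags (sketch; assumes the
# autotools build system of the Mesa 9.1-devel era).
export CFLAGS="-O3 -march=native"
export CXXFLAGS="-O3 -march=native"
./autogen.sh --with-dri-drivers=i965   # driver list is illustrative
make -j"$(nproc)"
```

Swapping the exported flags for -O1, -O2, or adding -flto (for the LTO run) and rebuilding is all that changes between the configurations compared below.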
The Intel HD 4000 graphics were then used with these different Mesa 9.1-devel builds for a variety of benchmarks from an Ubuntu 13.04 desktop paired with the Linux 3.8 kernel. Results in full, system logs, and other details are on OpenBenchmarking.org in the 1301279-FO-MESACOMPI30 result file.
Overall, though, there wasn't much in the way of exciting performance change... For the Intel HD 4000 graphics on Mesa 9.1-devel with the different CFLAGS/CXXFLAGS, the compile-time changes didn't yield much difference in the end for the OpenGL frame-rate.
Nexuiz was the only case where there was a significant difference as a result of the optimization level, and even that amounted to just a couple of frames.
The Prey game, which is powered by id Tech 4, showed no gain at all from the Ivy Bridge optimizations or the -O3 optimization level with GCC 4.7.2.
For those interested, more of these Mesa 9.1-devel benchmark results can be found on OpenBenchmarking.org.