Testing NVIDIA's Linux Threaded OpenGL Optimizations
With the NVIDIA 310.14 Beta driver introduced at the beginning of this week come general OpenGL performance improvements plus an experimental threaded OpenGL implementation that can be easily enabled. In this article are benchmarks of the NVIDIA GeForce GTX 680 with this new Linux driver release.
The 310.14 driver's release highlights describe the new OpenGL threaded optimizations simply as "Added experimental support for OpenGL threaded optimizations, available through the __GL_THREADED_OPTIMIZATIONS environment variable." The HTML documentation bundled with the driver binary goes on to explain:
"The NVIDIA OpenGL driver supports offloading its CPU computation to a worker thread. These optimizations typically benefit CPU-intensive applications, but might cause a decrease of performance in applications that heavily rely on synchronous OpenGL calls such as glGet*. Because of this, they are currently disabled by default.
This experimental OpenGL support can be enabled by setting LD_PRELOAD="libpthread.so.0 libGL.so.1" __GL_THREADED_OPTIMIZATIONS=1 in the environment before launching the application.
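As a concrete illustration, the two variables can be set for a single run without polluting the wider environment. In this sketch, `env` piped to `grep` stands in for the OpenGL application so the effect is visible; in practice you would substitute the game or benchmark binary:

```shell
# Enable the experimental threaded optimizations for one process only.
# `env` here is just a placeholder for the actual OpenGL application;
# it prints the environment so the setting can be confirmed.
LD_PRELOAD="libpthread.so.0 libGL.so.1" \
__GL_THREADED_OPTIMIZATIONS=1 \
env | grep __GL_THREADED_OPTIMIZATIONS
```

Preloading libpthread alongside libGL ensures the driver's worker thread can be created even in applications that were not linked against the pthreads library themselves.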
In this article are benchmarks comparing the NVIDIA 304.51 driver to the NVIDIA 310.14 driver, first with both drivers in their stock configuration and then with the 310.14 driver's GL threaded optimizations enabled as described above. Aside from this threading work, the 310.14 driver also supports OpenGL 4.3 and brings other features. As these results will show, at least for the NVIDIA GeForce GTX 680 "Kepler" graphics card, there are performance improvements beyond just enabling the threaded optimizations.
All benchmarking was handled in a fully automated and reproducible way using the Phoronix Test Suite.