Testing NVIDIA's Linux Threaded OpenGL Optimizations

Written by Michael Larabel in Display Drivers on 18 October 2012 at 01:30 AM EDT. Page 1 of 5.

The NVIDIA 310.14 Beta driver introduced at the beginning of this week brings general OpenGL performance improvements plus an experimental threaded OpenGL implementation that can be easily enabled. In this article are benchmarks from the NVIDIA GeForce GTX 680 with this new Linux driver release.

The 310.14 driver's release highlights describe the feature tersely: "Added experimental support for OpenGL threaded optimizations, available through the __GL_THREADED_OPTIMIZATIONS environment variable." The HTML documentation bundled with the driver binary goes on to explain:

"The NVIDIA OpenGL driver supports offloading its CPU computation to a worker thread. These optimizations typically benefit CPU-intensive applications, but might cause a decrease of performance in applications that heavily rely on synchronous OpenGL calls such as glGet*. Because of this, they are currently disabled by default.

Setting the __GL_THREADED_OPTIMIZATIONS environment variable to "1" before loading the NVIDIA OpenGL driver library will enable these optimizations for the lifetime of the application.

Please note that these optimizations can currently only be enabled if the target application dynamically links against pthreads. If this isn't the case, the dynamic loader can be instructed to do so at runtime by setting the LD_PRELOAD environment variable to include the pthreads library.

Additionally, these optimizations require Xlib to function in thread-safe mode. The NVIDIA OpenGL driver will automatically attempt to enable Xlib thread-safe mode if needed. However, it might not be possible in some situations, such as when the NVIDIA OpenGL driver library is dynamically loaded after Xlib has been loaded and initialized. If that is the case, threaded optimizations will stay disabled unless the application is modified to call XInitThreads() before initializing Xlib or to link directly against the NVIDIA OpenGL driver library. Alternatively, using the LD_PRELOAD environment variable to include the NVIDIA OpenGL driver library should also achieve the desired result."

In this case, taking advantage of the experimental OpenGL support is as easy as setting LD_PRELOAD="libpthread.so.0 libGL.so.1" __GL_THREADED_OPTIMIZATIONS=1 in the environment before launching the application.
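For those wanting to try it themselves, the environment setup above can be wrapped in a small launcher. This is just a sketch based on the documentation quoted earlier; the run_gl_threaded function name and the glxgears example are this article's own illustration, and library sonames can differ between distributions.

```shell
#!/bin/sh
# Launcher sketch for NVIDIA's experimental threaded OpenGL
# optimizations in the 310.14 beta driver.
run_gl_threaded() {
    # Preload pthreads (required for apps that don't link it directly)
    # alongside the NVIDIA libGL, and set the opt-in variable before
    # the driver library gets loaded:
    LD_PRELOAD="libpthread.so.0 libGL.so.1" \
    __GL_THREADED_OPTIMIZATIONS=1 \
    "$@"
}

# Example (glxgears stands in for any OpenGL application):
# run_gl_threaded glxgears
```

Preloading libGL.so.1 itself covers the corner case mentioned in NVIDIA's documentation where the driver is dynamically loaded after Xlib has already been initialized.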

NVIDIA GeForce Linux Threaded OpenGL Optimizations

In this article are benchmarks comparing the NVIDIA 304.51 driver to the NVIDIA 310.14 driver, with both tested in their stock configuration and then with the 310.14 driver's GL threaded optimizations enabled as described above. Aside from this threaded optimization work, the 310.14 driver also supports OpenGL 4.3 and brings other features. As these results will show, at least for the NVIDIA GeForce GTX 680 "Kepler" graphics card, there are performance improvements beyond just the threading optimizations.

All benchmarking was handled in a fully automated and reproducible way using the Phoronix Test Suite.
