How To Overclock New NVIDIA GPUs On Linux
Overclocking support for the GeForce 400/500/600/700 series graphics cards has been one of the missing Linux features compared to NVIDIA's Windows driver. When the GeForce 400 series hardware initially launched, we were told that Linux overclocking support wasn't implemented since GPU clocking on the newer hardware is much more complicated than on earlier GPUs. With older NVIDIA GPUs, it's simply a matter of setting Option "CoolBits" "1" within the NVIDIA device section of the xorg.conf, rebooting, and then launching nvidia-settings to find GPU core and video memory sliders within a "Clock Frequencies" tab where you can easily manipulate the GPU core / vRAM frequencies. With the new overclocking code for modern GPUs, it's slightly different.
At first I tried the same CoolBits-based overclocking approach used for older GeForce hardware on the NVIDIA Linux driver, but it didn't work. After reading the driver's documentation, it turns out they've changed the parameters for Fermi and newer overclocking. Now Option "Coolbits" "8" needs to be set within the device section of the xorg.conf to enable the new overclocking support. (If you also want to enable manual GPU fan controls, you need to set Option "Coolbits" "12", since the CoolBits value is a bitmask and "12" combines the fan-control bit "4" with the new overclocking bit "8".) Per NVIDIA's driver documentation about the CoolBits value:
When "8" (Bit 3) is set in the "Coolbits" option value, the PowerMizer page in the nvidia-settings control panel will display a table that allows setting per-clock domain and per-performance level offsets to apply to clock values. This is allowed on certain GeForce GPUs in the GeForce GTX 400 series and later. Not all clock domains or performance levels may be modified.
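Putting that into practice, the relevant Device section of /etc/X11/xorg.conf might look like the following sketch (the Identifier string is just an example; adapt it to your existing configuration, and restart X afterwards for the option to take effect):

```
Section "Device"
    Identifier "NVIDIA GPU"
    Driver     "nvidia"
    # Bit 3 (value 8) exposes the per-performance-level clock offset
    # table on the PowerMizer page; use "12" to additionally enable
    # manual fan control (bit 2, value 4).
    Option     "Coolbits" "8"
EndSection
```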
When enabling the "Performance Level Editing", there's the usual NVIDIA disclaimer that needs to be accepted. Overclocking your GPU generally goes against the manufacturer warranty, can cause system instability, and can potentially damage your hardware.
Compared to the older CoolBits overclocking for the GeForce 300 series and earlier, the new CoolBits over/under-clocking is all based upon offsets rather than absolute values. With the old CoolBits there were two simple sliders for setting the absolute GPU and vRAM frequencies, but now it's a +/- difference off the current graphics/memory clocks. With a GeForce GTX 770, the GPU core could be underclocked by 105MHz or overclocked by up to a theoretical maximum of 1001MHz. The memory transfer rate could be reduced by up to 5390MHz or raised by up to 7010MHz. These are just the maximums exposed by the driver/firmware and aren't necessarily achievable. There's no support for manually manipulating the GPU voltage.
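For those who prefer the command line over the nvidia-settings GUI, the same offsets can be queried and set via nvidia-settings attributes. The commands below are an illustrative sketch: the performance level index in brackets (here [3], assumed to be the highest level on a GTX 770) and the offset values are examples to adapt to your own GPU:

```
# List the clock offset attributes the driver exposes for your GPU
nvidia-settings -q GPUGraphicsClockOffset -q GPUMemoryTransferRateOffset

# Apply a +50MHz GPU core offset on performance level 3 of GPU 0
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=50"

# Apply a +200MHz memory transfer rate offset on the same level
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=200"
```

Negative values underclock, mirroring the +/- sliders in the PowerMizer table.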
For those not well experienced in overclocking, this new layout can be more confusing than the old design.
With the old CoolBits there was also an "auto detect" button that would try to find the highest stable maximum frequencies for your particular graphics card. Unfortunately, with this new CoolBits there isn't an auto-detect option; you need to manually set the frequency offsets and then run 3D workloads/benchmarks to verify stability and measure any performance improvements.
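That manual process can be scripted in a rough way. Below is a minimal sketch of stepping the core clock offset upward until a stress test fails, then backing off one step; the step size, performance level index, and run-benchmark.sh stress-test script are all hypothetical placeholders you would replace with your own:

```
#!/bin/sh
# Raise the GPU core clock offset until the benchmark fails, then
# settle on the last stable value. All numbers are illustrative.
STEP=15
OFFSET=0
while nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=$OFFSET" \
      && ./run-benchmark.sh; do   # must exit non-zero on artifacts/crash
    OFFSET=$((OFFSET + STEP))
done
OFFSET=$((OFFSET - STEP))
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=$OFFSET"
echo "Highest stable core offset: ${OFFSET}MHz"
```

Even with such a script, watch the runs yourself: a benchmark can complete while still showing rendering artifacts that only a human will notice.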
Overall though it's great to see NVIDIA finally add overclocking support to their Linux driver for modern GPUs. The addition of overclocking shows NVIDIA paying more attention to Linux gamers and enthusiasts amid pressure from Valve with SteamOS, Steam Machines, etc. I'm currently working on an NVIDIA Linux overclocking article looking at the performance of various Fermi, Kepler, and Maxwell GPUs on Ubuntu 14.04 LTS when overclocked. Stay tuned for those new NVIDIA Linux benchmarks from the 337.12 Beta driver in the next few days.