Unless you want your graphics card to keep you warm this winter, here's a big comparison of AMD Radeon and NVIDIA GeForce graphics cards under Linux, looking at their performance-per-Watt using the latest OpenGL Linux drivers as of the end of 2016. A few days back I posted a 31-way GeForce/Radeon Linux comparison covering the raw performance of each company's latest Linux drivers on GPUs going back to the Fermi and R700 days; this article looks at the system power consumption and power efficiency for that same mass assortment of GPUs.
Served up as some fun Christmas weekend benchmarks are these new Linux gaming results for those concerned about the power efficiency of new GPUs, or just curious how efficiency has evolved going back to the Fermi and R700 days. All tests were done on Ubuntu 16.04 LTS. The NVIDIA driver used was the 375.26 release while the AMD stack consisted of AMDGPU with the Linux 4.9 kernel and Mesa 13.1-devel built against LLVM 4.0 SVN via the Padoka PPA.
The AMD Radeon cards I had for testing were the HD 4890, HD 5830, HD 6870, HD 6950, HD 7750, HD 7950, R9 270X, R9 285, R7 370, RX 460, RX 480, and R9 Fury. On the NVIDIA side was the GeForce GTX 460, GTX 650, GTX 680, GTX 760, GTX 780 Ti, GTX 950, GTX 960, GTX 970, GTX 980, GTX 980 Ti, GTX 1050, GTX 1050 Ti, GTX 1060, GTX 1070, and GTX 1080.
The AC system power consumption was monitored during testing with a WattsUp USB power meter. The Phoronix Test Suite automatically polled this power data to track the system power usage and calculate performance-per-Watt while running all of our Linux OpenGL benchmarks in a fully-automated and reproducible manner.
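For readers curious how such a perf-per-Watt figure is derived, the idea is simple: average the power readings sampled during a benchmark run and divide the benchmark score by that average. The sketch below is a minimal illustration of that calculation, not the actual Phoronix Test Suite code; the function name and sample values are hypothetical.

```python
# Illustrative sketch (not actual Phoronix Test Suite code): derive
# performance-per-Watt from a benchmark score and the AC power
# readings polled during the run.

def perf_per_watt(score, power_samples_w):
    """Return performance-per-Watt given a benchmark score (e.g. FPS)
    and a list of system power readings in Watts taken during the run."""
    if not power_samples_w:
        raise ValueError("need at least one power sample")
    avg_power = sum(power_samples_w) / len(power_samples_w)
    return score / avg_power

# Hypothetical example: 120 FPS with readings averaging 200 Watts.
samples = [195.0, 200.0, 205.0]
print(perf_per_watt(120.0, samples))  # -> 0.6 FPS per Watt
```

A higher result means the card delivers more frames for each Watt the whole system draws, which is why an efficient mid-range GPU can beat a faster but hungrier flagship on this metric.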