Yesterday I posted some benchmarks showing how AMDGPU / R9 Fury performance has jumped in just the few months since the April release of Ubuntu 16.04 LTS. For those wondering how the open-source AMD OpenGL performance has evolved over a longer term, I took a Radeon R9 270X graphics card and re-ran tests going back to Ubuntu 15.04 to look at RadeonSI Gallium3D performance over the past year and a half.
This Friday's Linux benchmarking fun was re-testing Ubuntu 15.04, 15.10, 16.04, and 16.04 + Mesa Git + Linux 4.8 to see how performance has evolved on the same system going back to early 2015. The same Intel Xeon E3-1280 v5 Skylake system with a Radeon R9 270X graphics card was used for all of the testing. The R9 270X was chosen since it has worked fine on the open-source driver stack going back to early 2015, unlike the late-arriving AMDGPU support for Tonga and Fiji, or the Radeon R9 290, which has recently been in a regressed state on the open-source driver code. I couldn't go back any further in my testing, as Ubuntu 14.10 was running into a DRM error from dpm_set_power_state with this graphics card on the 14.10 stock kernel.
Each Ubuntu release was tested out-of-the-box with the Linux kernel, Mesa, and X.Org Server from its time period. All of the system/hardware settings were kept the same; Ubuntu 15.04 used the CPUFreq driver rather than P-State, which is why its CPU frequency is reported differently. Following the Ubuntu 16.04 LTS tests, additional runs were done on 16.04 using the Linux 4.8 Git kernel along with Mesa Git + LLVM SVN via the Padoka PPA.
All of the OpenGL tests in this article were conducted in a fully-automated manner using the open-source Phoronix Test Suite. Since only Ubuntu 16.04 has OpenGL 4.1+ support, the selection was limited to OpenGL 3 compliant tests so they could run on all of the distributions going back to Ubuntu 15.04.