NVIDIA GeForce RTX 2080 Ti Shows Very Strong Compute Performance Potential

Written by Michael Larabel in Graphics Cards on 21 September 2018.

Besides being attractive to developers wanting to make use of new technologies like RTX ray-tracing, mesh shaders, and DLSS (Deep Learning Super Sampling), the new GeForce RTX 2080 series is yielding impressive CUDA and OpenCL benchmark results so far on the GeForce RTX 2080 Ti, even outside the obvious AI / deep learning workloads suited to the Turing tensor cores. Here are some benchmarks looking at the OpenCL/CUDA performance of the high-end Maxwell, Pascal, and Turing cards as well as an AMD Radeon RX Vega 64 for reference. System power consumption, performance-per-Watt, and performance-per-dollar metrics round out this latest Ubuntu Linux GPU compute comparison.

Today's NVIDIA GeForce RTX 2080 Ti benchmarking covers various OpenCL and CUDA compute workloads. For this initial look I used some of the readily available tests via the Phoronix Test Suite and OpenBenchmarking.org. I have been working on updating the TensorFlow and Caffe tests along with some other new AI / deep learning benchmarks, but they unfortunately were not ready in time for today's comparison due to running into some library issues with CUDA 10, so stay tuned for more Turing CUDA/compute tests in the days ahead. So far I only have access to the RTX 2080 Ti, so no benchmarks yet on the RTX 2080.

The GeForce GTX 980 Ti, GTX 1080 Ti, and RTX 2080 Ti were benchmarked with the NVIDIA 410.57 Linux driver and CUDA 10.0 as the main cards for this comparison, to look at the generational differences and overall performance. On the AMD side, the Radeon RX Vega 64 was tested atop the newest ROCm 1.9 open-source compute stack while using the Linux 4.18 mainline kernel, the same kernel revision used for the NVIDIA benchmarks in this article. This also happens to be my first time running benchmarks on the new ROCm 1.9.0; I will have more tests of this open-source compute stack with various Radeon GPUs shortly.

GeForce RTX 2080 Ti Linux GPU Compute Perf + Dollars

During the benchmarking process, the Phoronix Test Suite was monitoring the overall AC system power consumption using a WattsUp Pro power meter to also generate accurate performance-per-Watt statistics. The performance-per-dollar metrics are based on current pricing.
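For readers curious how those derived metrics fall out of the raw data, they are simple ratios: a test's score divided by the average AC system power draw, and by the card's current price. The sketch below illustrates the arithmetic only; the score, wattage, and price figures are made-up placeholders, not measurements or prices from this article.

```python
def perf_per_watt(score: float, avg_system_watts: float) -> float:
    """Benchmark score divided by average AC system power draw (higher is better)."""
    return score / avg_system_watts

def perf_per_dollar(score: float, price_usd: float) -> float:
    """Benchmark score divided by the card's current price (higher is better)."""
    return score / price_usd

# Placeholder inputs for illustration only -- not results from this comparison.
score = 5000.0   # hypothetical benchmark score where higher is better
watts = 380.0    # hypothetical average AC system power draw during the run
price = 1199.0   # hypothetical current card price in USD

print(round(perf_per_watt(score, watts), 2))    # score per Watt
print(round(perf_per_dollar(score, price), 2))  # score per dollar
```

For tests where a lower result is better (e.g. render time in seconds), the Phoronix Test Suite inverts the ratio accordingly so that a higher perf-per-Watt or perf-per-dollar number always means better value.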
