NVIDIA GeForce RTX 2080 Ti Shows Very Strong Compute Performance Potential

Written by Michael Larabel in Graphics Cards on 21 September 2018 at 11:55 AM EDT.
[Graph: GeForce RTX 2080 Ti Linux GPU Compute Perf + Dollars]

Here is a look at the AC system power consumption over the course of all of the CUDA/OpenCL benchmarks run for this testing. The Radeon RX Vega 64 was left out of this overall look due to differences in the test selection: it cannot run the CUDA benchmarks, and ROCm 1.9 was failing in some of the OpenCL tests. The RTX 2080 Ti was consuming about 20 Watts more on average than the GTX 1080 Ti, but as the performance-per-Watt graphs showed, the Turing GPU still easily delivers better performance-per-Watt. The peak power consumption of this Intel Core i7 8086K system with the RTX 2080 Ti Founders Edition graphics card was 344 Watts.
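For readers curious how the efficiency figures behind those graphs are derived, here is a minimal sketch of the performance-per-Watt calculation: a higher-is-better benchmark score divided by the average AC power draw logged during the run. The scores and wattages below are hypothetical placeholders for illustration only, not the measured results from this article; only the roughly 20 Watt average gap is taken from the text.

```python
# Minimal sketch of a performance-per-Watt calculation.
# The numbers are hypothetical placeholders, not measured results.

def perf_per_watt(score: float, avg_watts: float) -> float:
    """Higher-is-better benchmark score divided by average AC power draw."""
    return score / avg_watts

# A card drawing ~20 W more can still win on efficiency
# if its score grows faster than its power draw.
gtx_1080_ti = perf_per_watt(score=100.0, avg_watts=300.0)
rtx_2080_ti = perf_per_watt(score=140.0, avg_watts=320.0)

print(f"GTX 1080 Ti: {gtx_1080_ti:.3f} score/Watt")
print(f"RTX 2080 Ti: {rtx_2080_ti:.3f} score/Watt")
```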

[Graphs: GeForce RTX 2080 Ti Linux GPU Compute Perf + Dollars]

While the $1199 USD price tag of the GeForce RTX 2080 Ti Founders Edition is prohibitive for many users, its OpenCL/CUDA performance-per-dollar is not nearly as bad as its gaming value... The $1199 cost is much more justifiable if you will constantly be loading the system with OpenCL/CUDA workloads, not to mention the value to developers of RTX ray-tracing, DLSS, mesh shaders, and the Turing tensor cores.
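The performance-per-dollar graphs follow the same idea, just dividing by the card's price rather than its power draw. A minimal sketch, assuming the $1199 Founders Edition price from the article and the commonly cited $699 launch price for the GTX 1080 Ti; the benchmark scores are hypothetical placeholders:

```python
# Minimal sketch of a performance-per-dollar comparison.
# Prices: $1199 RTX 2080 Ti FE (from the article), $699 GTX 1080 Ti launch price.
# Scores are hypothetical placeholders, not measured results.

def perf_per_dollar(score: float, price_usd: float) -> float:
    """Higher-is-better benchmark score divided by card price in USD."""
    return score / price_usd

cards = {
    "RTX 2080 Ti": perf_per_dollar(score=140.0, price_usd=1199.0),
    "GTX 1080 Ti": perf_per_dollar(score=100.0, price_usd=699.0),
}

# Rank the cards by value, best first.
for name, value in sorted(cards.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {value:.4f} score/$")
```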

[Graphs: GeForce RTX 2080 Ti Linux GPU Compute Perf + Dollars]

In the OpenCL tests where the RX Vega 64 was playing nicely with the new ROCm 1.9 compute stack, that AMD GPU tended to deliver better performance-per-dollar. Unfortunately, I don't yet have my hands on a GeForce RTX 2080 to see how it would compare in performance-per-dollar.

That's our initial look at the GeForce RTX 2080 Ti compute performance under Ubuntu Linux with CUDA 10.0 and the NVIDIA 410.57 driver. Stay tuned for more AI / deep learning benchmarks and other interesting Turing tests soon on Phoronix. If you appreciate all of my Linux benchmarking, consider showing your support by joining Phoronix Premium.

If you want to see how your own Linux GPU system performs against the hardware under test in this article, simply install the Phoronix Test Suite and run phoronix-test-suite benchmark 1809218-RA-GEFORCERT12 for your own side-by-side, fully-automated benchmarking comparison.


