NVIDIA GeForce RTX 30 Series OpenCL / CUDA / OptiX Compute + Rendering Benchmarks
Written by Michael Larabel in Graphics Cards on 15 April 2021. Page 6 of 6. 12 Comments

That's the quick look at the current RTX 30 line-up compared to the RTX 20 series across a wide variety of GPU compute/rendering workloads. It's unfortunate that AMD's ROCm Linux compute stack doesn't yet fully support the RDNA/RDNA2 cards, but once it does there will be a larger comparison including those cards and the compatible workloads. In the meantime, at least, more of these benchmarks are seeing ROCm support and maturing.

23 distinct test profiles and 65 tests in total were run for this comparison across the 13 tested graphics cards. Here is the geometric mean of all that data:
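For those curious how a single summary number is distilled from 65 tests, a geometric mean is the usual choice since it averages ratios fairly across tests with very different score scales. A minimal sketch (the values shown are hypothetical, not from this article's data):

```python
import math

def geomean(values):
    # nth root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test scores purely for illustration
print(round(geomean([2.0, 8.0]), 2))  # -> 4.0
```

Note that a real harness such as the Phoronix Test Suite also has to normalize "lower is better" results (e.g. render times) before folding them into the mean; this sketch assumes all inputs are higher-is-better scores.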

From the RTX 2080 to the RTX 3080 was a 70% jump (or 65% compared to the RTX 2080 SUPER), the RTX 2080 Ti to the RTX 3090 was a 50% improvement, and the RTX 2070 SUPER to the RTX 3070 was a 33% improvement. Even at the lower end of the stack there was still a 20% improvement from the RTX 2060 to the RTX 3060 across the range of compute-focused benchmarks carried out.
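Those headline percentages fall straight out of the geometric-mean scores as new/old − 1. A quick sketch of the arithmetic (the scores below are hypothetical, chosen only to show the calculation):

```python
def pct_improvement(new_score, old_score):
    # Percent jump of the new score over the old, for higher-is-better scores
    return (new_score / old_score - 1.0) * 100.0

# Hypothetical geomean scores purely for illustration
print(round(pct_improvement(170.0, 100.0)))  # -> 70
```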

Here is a look at the GPU power consumption for the entire duration of the benchmarks carried out for this article.

Those wanting to see all 65 tests carried out in full can head on over to this OpenBenchmarking.org result file. I didn't include any performance-per-dollar metrics in this article given the current mess in the marketplace and the difficulty of finding these cards at reasonable prices, but via the OpenBenchmarking.org link above you can fill in the performance-per-dollar fields at the top of the page to dynamically generate your own pricing graphs based upon local/available pricing in your area. Those performance-per-dollar graphs will then appear as tabs alongside each of the performance graphs on that OpenBenchmarking.org result page.

Or if you want to evaluate an upgrade against your own system, with the Phoronix Test Suite installed simply run phoronix-test-suite benchmark 2104107-IB-GPUCOMPUT97 for your own fully-automated, side-by-side benchmark comparison. Toss in PERFORMANCE_PER_SENSOR=gpu.power as an environment variable if also wanting to see GPU power consumption and performance-per-Watt graphs for your hardware during testing.
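If you'd rather drive that invocation from a script, the same command and environment variable can be wrapped like so. A minimal sketch, assuming phoronix-test-suite is on your PATH (the guard simply prints a message when it isn't):

```python
import os
import shutil
import subprocess

RESULT_ID = "2104107-IB-GPUCOMPUT97"  # the result file referenced in this article

# Pass the extra sensor setting through the environment so PTS also
# records GPU power and generates performance-per-Watt graphs
env = dict(os.environ, PERFORMANCE_PER_SENSOR="gpu.power")

if shutil.which("phoronix-test-suite"):
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID],
                   env=env, check=True)
else:
    print("phoronix-test-suite not found; install it first")
```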

Stay tuned for the interesting AMD Radeon vs. NVIDIA GeForce Linux gaming showdown next week on Phoronix and thanks again to NVIDIA for following through and getting the rest of the RTX 30 Ampere parts over for benchmarking.

If you enjoyed this article consider joining Phoronix Premium to view this site ad-free, multi-page articles on a single page, and other benefits. PayPal tips are also graciously accepted. Thanks for your support.


About The Author

Michael Larabel is the principal author of Phoronix.com and founded the site in 2004 with a focus on enriching the Linux hardware experience. Michael has written more than 20,000 articles covering the state of Linux hardware support, Linux performance, graphics drivers, and other topics. Michael is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated benchmarking software. He can be followed via Twitter or contacted via MichaelLarabel.com.
