NVIDIA GeForce RTX 3080 Offers Up Incredible Linux GPU Compute Performance

Written by Michael Larabel in Graphics Cards on 6 October 2020 at 11:00 AM EDT.

Yesterday I finally received a GeForce RTX 3080 graphics card from NVIDIA, allowing the first of our Linux benchmarks on the new RTX 30 Ampere series. What is immediately clear is the huge performance uplift for OpenCL and CUDA workloads with the RTX 3080 compared to its predecessors. The raw performance and even the performance-per-dollar out of the GeForce RTX 3080 are staggering in these initial tests carried out on Ubuntu Linux. Linux gaming benchmarks will be out in the days ahead, but for now here is a look at the RTX 3080 compute performance across dozens of benchmarks, going as far back as the GeForce GTX 980 series for comparison.

For those not paying attention in recent weeks, the GeForce RTX 3080 is NVIDIA's new offering coming in at the $699 USD price point. The GeForce RTX 3080 features 8,704 CUDA cores with a 1.44GHz base clock and 1.71GHz boost clock, paired with 10GB of GDDR6X memory on a 320-bit interface. Ampere brings big improvements to the ray-tracing and tensor cores while also featuring PCI Express 4.0 connectivity, improved video encode/decode capabilities including AV1 decode, and an assortment of other architectural advancements.
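For a rough sense of what those specifications mean on paper, the quick calculation below derives the card's theoretical peak FP32 throughput and memory bandwidth. The 19 Gbps per-pin GDDR6X data rate is an assumption taken from NVIDIA's published spec sheet rather than anything stated or measured here:

```cpp
#include <cstdio>

int main() {
    // Figures from the article: 8,704 CUDA cores, 1.71GHz boost clock,
    // 10GB of GDDR6X on a 320-bit bus.
    const double cuda_cores    = 8704.0;
    const double boost_clock   = 1.71e9;   // Hz
    const double flops_per_fma = 2.0;      // one fused multiply-add = 2 FLOPs

    // Assumption: 19 Gbps per-pin GDDR6X data rate per NVIDIA's spec sheet.
    const double bus_width_bits = 320.0;
    const double data_rate_gbps = 19.0;

    const double peak_tflops   = cuda_cores * boost_clock * flops_per_fma / 1e12;
    const double bandwidth_gbs = (bus_width_bits / 8.0) * data_rate_gbps;

    std::printf("Theoretical peak FP32: %.1f TFLOPS\n", peak_tflops);  // ~29.8
    std::printf("Memory bandwidth: %.0f GB/s\n", bandwidth_gbs);       // 760
    return 0;
}
```

That works out to roughly 29.8 TFLOPS of FP32 compute and 760 GB/s of memory bandwidth on paper, which frames the generational uplift seen in the benchmarks ahead.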

The GeForce RTX 3080 features a new 12-pin power connector; for existing power supplies, an adapter for 2 x 8-pin PCI Express connectors is included. The RTX 3080 is quite power hungry with a rated graphics card power of 320 Watts.

Yesterday the RTX 3080 arrived courtesy of NVIDIA and was immediately loaded up with Linux tests. Unfortunately we do not yet have an RTX 3090 or any ETA on when we may have that higher-tier card, but the RTX 3080 performance is already mighty impressive. During the weeks of waiting for a GeForce RTX 3080 review sample, I completed re-testing the prior graphics cards and thus already have the initial batch of compute numbers to share today. As said earlier, the Linux gaming benchmarks will be out later this week, with other RTX 30 Linux benchmarks to follow in the weeks ahead.

Given the many CUDA/NVIDIA-API-focused benchmarks, this article is simply a look at NVIDIA GPU compute performance without any Radeon metrics -- the forthcoming Linux gaming results will obviously include the relevant Radeon figures. With the Radeon Open eCosystem (ROCm) still not playing properly on the latest-generation Navi graphics cards, it wasn't possible to provide RX 5700 series comparison points even for the OpenCL benchmarks in this article. In any case, once the ROCm support is in better shape for Navi/RDNA, such a compute comparison will follow.

The time spent waiting for the RTX 3080 also allowed re-testing NVIDIA GPUs going back to Maxwell. For today's testing the graphics cards benchmarked included:

- GTX 980
- GTX 980 Ti
- GTX TITAN X
- GTX 1060
- GTX 1070
- GTX 1070 Ti
- GTX 1080
- GTX 1650
- GTX 1650 SUPER
- GTX 1660
- GTX 1660 SUPER
- GTX 1660 Ti
- RTX 2060
- RTX 2060 SUPER
- RTX 2070
- RTX 2070 SUPER
- RTX 2080
- RTX 2080 SUPER
- RTX 2080 Ti
- TITAN RTX
- RTX 3080

The test system used was powered by an AMD Ryzen 9 3950X with an ASUS ROG CROSSHAIR VIII HERO motherboard, 16GB of DDR4-3600 Corsair system memory, 2TB Corsair Force MP600 NVMe storage, and the various graphics cards under test. On the software side was Ubuntu 20.04.1 LTS with the Linux 5.4 kernel and the P-State performance CPU frequency scaling governor.
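For those wanting to replicate the CPU frequency scaling setup, below is a minimal sketch of forcing the performance governor through sysfs. This is just one way of doing it (tools such as cpupower via `cpupower frequency-set -g performance` achieve the same thing) and it must be run as root:

```cpp
// perf_governor.cpp: set the "performance" scaling governor on every CPU
// by writing to sysfs, mirroring the configuration used for this testing.
// Build with: g++ -std=c++17 perf_governor.cpp. Run as root.
#include <filesystem>
#include <fstream>
#include <iostream>

namespace fs = std::filesystem;

int main() {
    for (const auto& entry : fs::directory_iterator("/sys/devices/system/cpu")) {
        const fs::path governor = entry.path() / "cpufreq" / "scaling_governor";
        if (!fs::exists(governor))
            continue;  // skips non-cpuN entries such as cpuidle/ and hotplug/
        std::ofstream out(governor);
        out << "performance\n";
        std::cout << "Set " << governor << " to performance\n";
    }
    return 0;
}
```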

The GeForce RTX 3080 was happily running with the NVIDIA 455.23.05 beta Linux graphics driver and CUDA 11.1. During these GPU compute tests at least, the current NVIDIA Linux driver was running well with the GeForce RTX 3080 and we did not encounter any Linux GPU driver issues. Of course, any driver issues are more likely to come up during our gaming tests in the days ahead, so stay tuned.
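For verifying that the driver and CUDA stack actually see a new card like this, a minimal CUDA runtime device query along these lines is handy; on Ampere GA102 parts such as the RTX 3080 it should report compute capability 8.6:

```cpp
// devquery.cu: confirm the CUDA stack enumerates the GPU.
// Build with: nvcc -o devquery devquery.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // The RTX 3080 (GA102) reports compute capability 8.6 under CUDA 11.1.
    std::printf("%s: compute capability %d.%d, %zu MiB of memory\n",
                prop.name, prop.major, prop.minor,
                prop.totalGlobalMem / (1024 * 1024));
    return 0;
}
```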

Tested workloads for this article via the Phoronix Test Suite included ArrayFire, Caffe, cl-mem, clpeak, FAHBench, FinanceBench, Geekbench, GROMACS, Leela Chess Zero, LuxCoreRender, Mixbench, NAMD, NCNN, PlaidML, RealSR-NCNN, RedShift, and VkFFT. The Phoronix Test Suite also monitored the GPU power consumption and GPU core temperature on a per-test basis and in turn generated performance-per-Watt metrics.
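For clarity on how those performance-per-Watt figures come together, here is a minimal sketch of the calculation: poll the GPU power draw while a benchmark runs, average the samples, and divide the benchmark result by that average. The sample values below are made up for illustration, not actual RTX 3080 readings:

```cpp
// perf_per_watt.cpp: sketch of deriving a performance-per-Watt metric
// from polled power readings. All numbers are hypothetical, not measured.
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical GPU power samples (Watts) polled during a benchmark run.
    const std::vector<double> samples = {304.0, 317.0, 321.0, 313.0, 298.0};

    double sum = 0.0;
    for (double w : samples)
        sum += w;
    const double avg_watts = sum / samples.size();

    // Hypothetical benchmark score where higher is better.
    const double score = 125.0;

    std::printf("Average power draw: %.1f Watts\n", avg_watts);
    std::printf("Performance-per-Watt: %.3f\n", score / avg_watts);
    return 0;
}
```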
