NVIDIA GeForce RTX 2080 Ti To GTX 980 Ti TensorFlow Benchmarks With ResNet-50, AlexNet, GoogLeNet, Inception, VGG-16

Written by Michael Larabel in Graphics Cards on 8 October 2018 at 08:00 AM EDT.

For those curious about TensorFlow performance on the newly-released GeForce RTX 2080 series, kicking off this week of Linux benchmarking is a look at the Maxwell, Pascal, and Turing graphics cards in my possession when running the NGC TensorFlow container on CUDA 10.0 with the 410.57 Linux driver atop Ubuntu while exploring the performance of various models. Besides the raw performance, performance-per-Watt and performance-per-dollar figures are also provided.

Apologies for the delay in getting these latest Turing Linux benchmarks out there; it comes down to benchmarking a lot of interesting hardware this month, from the RTX 2080 Ti to a new dual EPYC server to some upcoming CPUs. The initial Vulkan/OpenGL Linux gaming tests on the RTX 2080 Ti turned out very strong and the OpenCL/CUDA performance has been really great for this mighty powerful but expensive ($1,199+ USD) graphics card. The Windows vs. Linux performance is overall in line with expectations, and the latest focus of the benchmarking has been TensorFlow with various models.

Due to time constraints, this initial article is just looking at the NVIDIA GPU performance, while AMD reference points and CPU-based results may be coming in the days ahead. The higher-end graphics cards available for this comparison were the GeForce GTX 980 Ti, GTX 1060, GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti, and RTX 2080 Ti. Unfortunately I don't have an RTX 2080, Titan Xp, or any Volta cards, which is why they aren't included in this otherwise interesting comparison. For this benchmarking, TensorFlow was used via NGC on Docker with the NVIDIA 410.57 driver and CUDA 10.0 running on an Ubuntu 18.04.1 LTS stack. The TensorFlow models were tested at both FP16 and FP32 to see the performance impact of the Tensor Cores on the new RTX 2080 Ti.
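For readers wanting to run something similar on their own hardware, below is a minimal Python sketch of the general idea: timing the same convnet at FP32 and FP16 and comparing images per second. It is not the benchmark harness from the NGC container that produced the article's numbers; the TensorFlow 2.x Keras APIs, batch size, and step count here are assumptions for illustration only.

```python
# Illustrative sketch only: the article's results come from the NGC TensorFlow
# container's own benchmarks, not this script. Assumes a recent TensorFlow 2.x
# install with a CUDA-capable GPU available.
import time
import tensorflow as tf

def benchmark_resnet50(precision="float32", batch_size=64, steps=20):
    """Time training steps of ResNet-50 on synthetic data at a given precision."""
    if precision == "float16":
        # Mixed precision keeps variables in FP32 but runs compute in FP16,
        # which is what engages the Tensor Cores on Turing GPUs.
        tf.keras.mixed_precision.set_global_policy("mixed_float16")
    else:
        tf.keras.mixed_precision.set_global_policy("float32")

    model = tf.keras.applications.ResNet50(weights=None, classes=1000)
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

    # Synthetic ImageNet-sized inputs; the batch size of 64 is an assumption.
    images = tf.random.uniform((batch_size, 224, 224, 3))
    labels = tf.random.uniform((batch_size,), maxval=1000, dtype=tf.int32)

    model.train_on_batch(images, labels)  # warm-up / graph build
    start = time.time()
    for _ in range(steps):
        model.train_on_batch(images, labels)
    elapsed = time.time() - start
    return steps * batch_size / elapsed  # images per second

for prec in ("float32", "float16"):
    print(prec, f"{benchmark_resnet50(prec):.1f} images/sec")
```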

NGC TensorFlow NVIDIA GeForce RTX 2080 Ti

The AC system power consumption was monitored by the Phoronix Test Suite during benchmarking via a WattsUp Pro power meter polled over USB. Our benchmarking software also monitored the GPU core temperature, and performance-per-dollar results were generated based on current NVIDIA retail pricing. This testing is straightforward, so let's get right to these numbers.
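As a rough illustration of how those derived metrics fall out of the raw data, here is a small sketch of the arithmetic: throughput divided by average AC wattage for performance-per-Watt, and throughput divided by retail price for performance-per-dollar. The numbers below are placeholders, not the article's measured values.

```python
# Placeholder figures only: images/sec, average AC watts, and USD retail price
# are hypothetical, shown just to demonstrate the derived-metric arithmetic.
cards = {
    "GTX 1080 Ti": (220.0, 320.0, 700.0),
    "RTX 2080 Ti": (330.0, 360.0, 1199.0),
}

for name, (images_per_sec, watts, price) in cards.items():
    perf_per_watt = images_per_sec / watts     # images/sec per Watt of AC power
    perf_per_dollar = images_per_sec / price   # images/sec per USD
    print(f"{name}: {perf_per_watt:.3f} img/s/W, {perf_per_dollar:.3f} img/s/$")
```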
