Deep Learning & CUDA Benchmarks On The GeForce GTX 1080 Under Linux

Written by Michael Larabel in Graphics Cards on 11 June 2016 at 08:01 PM EDT. Page 1 of 3.

Last week I published the first Linux review of the GeForce GTX 1080, followed by performance-per-Watt figures and OpenGL comparisons stretching as far back as the 9800GTX, among other interesting OpenGL/Vulkan/OpenCL follow-up tests. Since then, one of the most popular requests has been for deep learning benchmarks on the GTX 1080 along with some CUDA benchmarks, for those not relying upon OpenCL for their GPGPU computing. Here are some raw performance numbers as well as performance-per-Watt results in the CUDA space.

This week I updated OpenBenchmarking.org's Caffe AlexNet test profile for some deep learning benchmarking on the GTX 1080. For the GTX 1080 and the Maxwell comparison hardware, I used the CUDA 8.0 Release Candidate along with the updated cuDNN library. CUDA 8.0 RC1 was also used for the other CUDA compute benchmarks in this article, in conjunction with the NVIDIA 367.18 beta Linux graphics driver. The other CUDA tests used were SHOC and cuda-mini-nbody.
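For those unfamiliar with it, cuda-mini-nbody stresses the GPU with an all-pairs gravitational n-body computation. Below is a minimal sketch of that style of kernel, not the benchmark's actual source; the body count, time-step, and initialization values here are arbitrary placeholders for illustration.

// Minimal all-pairs n-body force kernel in the spirit of cuda-mini-nbody
// (not the benchmark's actual source). Each thread accumulates the
// softened gravitational acceleration on one body from every other body.
#include <cstdio>
#include <cuda_runtime.h>

#define SOFTENING 1e-9f

struct Body { float x, y, z, vx, vy, vz; };

__global__ void bodyForce(Body *p, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float fx = 0.0f, fy = 0.0f, fz = 0.0f;
    for (int j = 0; j < n; j++) {
        float dx = p[j].x - p[i].x;
        float dy = p[j].y - p[i].y;
        float dz = p[j].z - p[i].z;
        float distSqr = dx*dx + dy*dy + dz*dz + SOFTENING;
        float invDist = rsqrtf(distSqr);
        float invDist3 = invDist * invDist * invDist;
        fx += dx * invDist3;
        fy += dy * invDist3;
        fz += dz * invDist3;
    }
    // Integrate the resulting acceleration into the body's velocity.
    p[i].vx += dt * fx;
    p[i].vy += dt * fy;
    p[i].vz += dt * fz;
}

int main()
{
    const int n = 4096;       // placeholder body count
    const float dt = 0.01f;   // placeholder time step
    Body *bodies;
    cudaMallocManaged(&bodies, n * sizeof(Body));
    for (int i = 0; i < n; i++) {
        bodies[i].x = (float)i; bodies[i].y = (float)i; bodies[i].z = (float)i;
        bodies[i].vx = bodies[i].vy = bodies[i].vz = 0.0f;
    }

    const int block = 256;
    bodyForce<<<(n + block - 1) / block, block>>>(bodies, dt, n);
    cudaDeviceSynchronize();

    printf("body 0 velocity: %f %f %f\n", bodies[0].vx, bodies[0].vy, bodies[0].vz);
    cudaFree(bodies);
    return 0;
}

The real benchmark iterates kernels like this over many time steps and reports interactions per second, which is why it scales so directly with raw FP32 throughput across these GPUs.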

For this weekend's benchmarking, the GTX 1080 was compared to the GeForce GTX 960, GTX 970, GTX 980, GTX 980 Ti, and GTX TITAN X. In addition to the raw performance results, there are also automated performance-per-Watt figures based upon the overall AC system power consumption as measured by a WattsUp Pro USB power meter interfacing with our open-source Phoronix Test Suite benchmarking software.

Coming up next week will be my GeForce GTX 1070 results under Linux, including these CUDA tests plus a wide assortment of other OpenGL/OpenCL/Vulkan benchmarks across a broader range of graphics processors running atop Ubuntu.

