NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute
Yesterday NVIDIA kicked off their week at CES by announcing the GeForce RTX 2060, the lowest-cost Turing GPU to date at just $349 USD, yet one that aims to deliver roughly the performance of the previous-generation GeForce GTX 1080. I only received my RTX 2060 yesterday for testing, but I have been putting it through its paces since and have the initial benchmark results to share, ranging from OpenGL/Vulkan Linux gaming performance to various interesting GPU compute workloads. Also included in this testing are graphics cards going back to the Maxwell-era GeForce GTX 960, for an interesting look at how NVIDIA's Linux GPU performance has evolved.
The GeForce RTX 2060 features 1920 CUDA cores, a 1365MHz base clock and 1680MHz boost clock, 6GB of GDDR6 video memory, and is rated for 37T RTX-OPS and 5 Giga-Rays/s. In comparison, the GeForce RTX 2070 Founder's Edition has 2304 CUDA cores and a 1710MHz boost clock, and is rated for 45T RTX-OPS and 6 Giga-Rays/s; but the RTX 2060 has a launch price of just $349 USD compared to $599 USD for the Founder's Edition model of the RTX 2070. The pricing of the RTX 2060 is certainly competitive and the best value we've seen out of the Turing hardware to date, though NVIDIA is reportedly working on some new lower-end GTX/RTX graphics cards as well; no announcements have been made at this time.
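To put that value claim in rough numbers, a quick back-of-the-envelope comparison using only the specs above shows the RTX 2060 retaining most of the RTX 2070's shader resources at well under two-thirds of its launch price:

```python
# Rough value comparison from the launch specs quoted above.
cores_2060, cores_2070 = 1920, 2304
price_2060, price_2070 = 349, 599

core_ratio = cores_2060 / cores_2070    # fraction of the 2070's CUDA cores
price_ratio = price_2060 / price_2070   # fraction of the 2070 FE launch price

print(f"CUDA cores: {core_ratio:.0%} of the RTX 2070")   # ~83%
print(f"Price:      {price_ratio:.0%} of the RTX 2070")  # ~58%
```

Of course, CUDA core counts alone don't capture clock, memory, or RT/Tensor differences, so this is only a coarse indicator ahead of the actual benchmark results.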
The GeForce RTX 2060 has DisplayPort, HDMI, DVI, and USB Type-C connections, the latter providing VirtualLink for use with future VR headsets.
The GeForce RTX 2060 has a 160 Watt TDP and as such requires an 8-pin PCI Express power connector for sufficient power.
The appearance of the RTX 2060 Founder's Edition is quite similar to that of the other Turing RTX graphics cards. Thanks to NVIDIA for sending over this GeForce RTX 2060 for Linux testing.