Last week NVIDIA unveiled the GeForce GTX TITAN X during its annual GPU Technology Conference. Of course, all of the major launch reviews were done under Windows and thus focused largely on Direct3D performance. Now that our review sample arrived this week, I've spent the past few days hitting the TITAN X hard under Linux with various OpenGL and OpenCL workloads, comparing it against other NVIDIA and AMD hardware on their binary Linux drivers.
The TITAN X is out of reach for most Phoronix readers given its $1000 USD price, but its performance is fantastic. The GeForce GTX TITAN X is home to 3072 CUDA cores across 24 streaming multiprocessors and boasts a 1000MHz base clock and 1075MHz boost clock. There's 12GB of GDDR5 video memory running at 7.0 Gbps, good for 336.5 GB/s of bandwidth. In comparison, the GeForce GTX 980 has just 16 streaming multiprocessors, 2048 CUDA cores, and a more modest 4GB of GDDR5 memory.
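The quoted bandwidth figure is easy to sanity-check. As a rough sketch, assuming the TITAN X's 384-bit memory bus (a figure from NVIDIA's published specifications, not stated above), peak bandwidth is the per-pin data rate times the bus width:

```python
# Back-of-the-envelope check of the quoted memory bandwidth.
# Assumption: a 384-bit memory bus, per NVIDIA's TITAN X spec sheet.
def memory_bandwidth_gb_s(effective_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin rate (Gbps) * bus width (bits) / 8 bits-per-byte."""
    return effective_rate_gbps * bus_width_bits / 8

print(memory_bandwidth_gb_s(7.0, 384))  # 336.0
```

The exact 7.0 Gbps figure gives 336 GB/s; NVIDIA's slightly higher 336.5 GB/s number comes from the memory's precise effective clock being a touch above 7.0 Gbps.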
All of the features one would expect from a graphics card that costs a grand and is based on NVIDIA's latest-generation Maxwell architecture are present: 4-way SLI support, G-SYNC, GPU Boost 2.0, OpenGL 4.5 / DirectX 12 (feature level 12_1) compliance, and the other features common to the rest of the GeForce GTX 900 series line-up.
The GTX TITAN X can drive up to four displays at once via its dual-link DVI connection, HDMI 2.0 port, and three DisplayPort 1.2 connections. Displays up to 5K resolution can be driven by this graphics card.
The GeForce GTX TITAN X can consume up to 250 Watts under full load, and NVIDIA thus recommends at least a 600 Watt power supply for the system. Beyond the power provided by the PCI Express 3.0 slot, there are 6-pin and 8-pin power connectors to feed this hungry GPU. Like the rest of the Maxwell line-up, the TITAN X GPU is manufactured on a 28nm process.
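Those connectors comfortably cover the 250 Watt TDP. A quick sketch of the budget, using the per-source limits from the PCI Express specification (75 W from the slot, 75 W from a 6-pin connector, 150 W from an 8-pin connector):

```python
# Power-delivery headroom for the TITAN X's 250 W TDP.
# Per-source limits below come from the PCI Express specification.
PCIE_SLOT_W = 75   # x16 slot
SIX_PIN_W = 75     # 6-pin PCIe power connector
EIGHT_PIN_W = 150  # 8-pin PCIe power connector
TDP_W = 250

available = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print(available)            # 300
print(available - TDP_W)    # 50 W of headroom over the rated TDP
```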