Graphics Cards Archives
NVIDIA GeForce GTX 1660 SUPER Linux Gaming Performance

Last week NVIDIA announced the GeForce GTX 1660 SUPER as their newest Turing "SUPER" graphics card, coming in at $229+ USD and delivering around 1.5x the performance of the GeForce GTX 1060. For those wondering about this card's Linux gaming potential, here are our initial tests using the EVGA GeForce GTX 1660 SUPER.

4 November 2019 - 9 Comments
Intel Icelake "Gen11" Graphics Are A Huge Upgrade Over Gen9 With Good Linux Support

Earlier this week I delivered our initial look at the Core i7-1065G7 Icelake Linux performance compared to Whiskey Lake and Kabylake-R. The CPU performance and performance-per-Watt improvements for this 10nm+ CPU are a big upgrade over those earlier notebooks, and now here is our first look at how the Icelake "Gen11" graphics compare to the aging Gen9 graphics.

24 October 2019 - 18 Comments
NVIDIA RTX 2060 / 2070 / 2080 SUPER Linux Gaming Performance - 26 GPUs Benchmarked

We finally have our hands on NVIDIA's current RTX 20 SUPER graphics card line-up and have been putting the RTX 2060/2070/2080 SUPER cards through their paces under Linux. For the first of our long-awaited NVIDIA RTX SUPER Linux benchmarks, here is a look at the Linux gaming performance in a variety of native OpenGL/Vulkan games as well as Steam Play (DXVK+Proton) titles, testing a total of 26 graphics cards this round on the very latest AMD Radeon and NVIDIA GeForce drivers.

26 September 2019 - 28 Comments
AMD Radeon RX 5700 / RX 5700 XT Linux Gaming Benchmarks

While last month we could only talk about the specifications for the Radeon RX 5700 series, today the embargo has lifted on the Radeon RX 5700 / RX 5700 XT graphics cards so we can finally talk about the actual (Linux) performance. The road is a bit rougher than we had hoped, but it's possible to drive these new Navi graphics cards today using the open-source graphics driver stack, at least for OpenGL games/applications. Over the weeks ahead, the Linux driver support for Navi will continue to improve.

7 July 2019 - 90 Comments
AMD Radeon VII Linux Performance vs. NVIDIA Gaming On Ubuntu For Q2'2019

It's been three months now since the AMD Radeon VII 7nm "Vega 20" graphics card was released and while we hopefully won't be waiting much longer for Navi to make its debut, for the time being this is the latest and greatest AMD Radeon consumer graphics card -- priced at around $700 USD. Here are some fresh benchmarks of the Radeon VII on Linux compared to various high-end NVIDIA graphics cards, with all testing done on Ubuntu 19.04.

22 May 2019 - 40 Comments
GeForce GTX 650 vs. GTX 1650 Performance For Linux Gaming, Performance-Per-Watt

The latest in our benchmarking with the new GeForce GTX 1650 is some "fun" testing to see how its performance compares to that of the Kepler-based GeForce GTX 650. Various OpenGL and Vulkan Linux gaming tests were carried out along with some compute tests, all while monitoring the AC power consumption to yield performance-per-Watt metrics.

15 May 2019 - 4 Comments
Radeon RX 560/570/580 vs. GeForce GTX 1060/1650/1660 Linux Gaming Performance

If you are looking to soon upgrade your graphics card for Linux gaming -- especially with the increasing number of titles running well under Steam Play -- but only have a budget of around $200 USD for the graphics card, this comparison is for you. In this article we're looking at the AMD Radeon RX 560 / RX 570 / RX 580 against the NVIDIA GeForce GTX 1060 / GTX 1650 / GTX 1660 graphics cards. Not only are we looking at the OpenGL/Vulkan Linux gaming performance both for native titles and Steam Play but also the GPU power consumption and performance-per-dollar metrics to help guide your next budget GPU purchasing decision.

6 May 2019 - 44 Comments
NVIDIA GeForce GTX 1650 Linux Gaming Performance & Benchmarks

This week NVIDIA introduced the $149 USD Turing-powered GTX 1650 graphics card. On launch day I picked up the ASUS GeForce GTX 1650 4GB Dual-Fan Edition (Dual-GTX1650-O4G) for Linux testing and now have the initial GTX 1650 Linux performance benchmarks under Ubuntu, compared to an assortment of lower-end and older AMD Radeon and NVIDIA GeForce graphics cards.

26 April 2019 - 19 Comments
NVIDIA GeForce GTX 1660 Linux Benchmarks

Last week NVIDIA announced the GeForce GTX 1660 as the newest RTX-less Turing GPU, costing only $219+ USD. The GTX 1660 is a further trimmed-down version of the GeForce GTX 1660 Ti that launched several weeks prior. After picking up an ASUS GeForce GTX 1660 Phoenix Edition, here are Linux OpenGL/Vulkan gaming benchmarks compared to a wide assortment of AMD Radeon and NVIDIA GeForce graphics cards under Ubuntu.

20 March 2019 - 18 Comments
NVIDIA GeForce GTX 1660 Ti OpenCL Benchmarks, 14-Way NVIDIA/AMD GPU Compute Tests

On Monday we published the initial GeForce GTX 1660 Ti Linux benchmarks focused on gaming, but due to having only a limited amount of time with that new Turing GPU, CUDA/OpenCL benchmarks were yet to be completed. Our initial GPU compute tests with that "TU116" graphics card are now complete and we have those Ubuntu Linux benchmark results to share.

1 March 2019 - 13 Comments
NVIDIA GeForce GTX 1660 Ti Linux Gaming Benchmarks

Last week NVIDIA unveiled the GeForce GTX 1660 Ti as their first Turing graphics card shipping without the RTX/tensor cores enabled, allowing the company to introduce their first sub-$300 graphics card of this new generation. I bought an EVGA GeForce GTX 1660 Ti XC Black graphics card to deliver Linux OpenGL/Vulkan gaming benchmarks of this TU116 GPU and today have the initial results to share, compared to a total of 16 different NVIDIA GeForce / AMD Radeon graphics cards on the latest Linux graphics drivers.

25 February 2019 - 73 Comments
AMD Radeon VII Linux Benchmarks - Powerful Open-Source Graphics For Compute & Gaming

Today we can finally reveal the Linux performance details for the AMD Radeon VII graphics card... Especially if you are an open-source driver fan, it's quite a treat thanks to having fully open-source and fairly mature driver support, but can this $699 USD graphics card dance with the likes of the GeForce RTX 2080? Here is our initial look at the Radeon VII performance on Linux using fifteen different AMD Radeon and NVIDIA GeForce graphics cards for both OpenCL compute and Vulkan/OpenGL gaming on Ubuntu Linux.

7 February 2019 - 80 Comments
NVIDIA GeForce GTX 760/960/1060 / RTX 2060 Linux Gaming & Compute Performance

The NVIDIA GeForce RTX 2060 is shipping today as the most affordable Turing GPU option to date at $349 USD. Last week we posted our initial GeForce RTX 2060 Linux review and followed-up with more 1080p and 1440p Linux gaming benchmarks after having more time with the card. In this article is a side-by-side performance comparison of the GeForce RTX 2060 up against the GTX 1060 Pascal, GTX 960 Maxwell, and GTX 760 Kepler graphics cards. Not only are we looking at the raw OpenGL, Vulkan, and OpenCL/CUDA compute performance between these four generations, but also the power consumption and performance-per-Watt.

15 January 2019 - 13 Comments
PlaidML Deep Learning Framework Benchmarks With OpenCL On NVIDIA & AMD GPUs

Pointed out by a Phoronix reader a few days ago and added to the Phoronix Test Suite is the PlaidML deep learning framework, which can run on CPUs using BLAS or on GPUs and other accelerators via OpenCL. Here are our initial benchmarks of this OpenCL-based deep learning framework, now being developed as part of Intel's AI Group, tested across a variety of AMD Radeon and NVIDIA GeForce graphics cards.

14 January 2019 - 5 Comments
NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute

Yesterday NVIDIA kicked off their week at CES by announcing the GeForce RTX 2060, the lowest-cost Turing GPU to date at just $349 USD while aiming to deliver around the performance of the previous-generation GeForce GTX 1080. I only received my RTX 2060 yesterday but have been putting it through its paces since and have the initial benchmark results to deliver, ranging from the OpenGL/Vulkan Linux gaming performance through various interesting GPU compute workloads. Also tested are graphics cards going back to the GeForce GTX 960 Maxwell for an interesting look at how the NVIDIA Linux GPU performance has evolved.

8 January 2019 - 35 Comments
Linux Gaming Benchmarks For The ASUS TURBO-RTX2070-8G

After having an EVGA GeForce RTX 2070 XC GAMING retail graphics card fail on me, I ended up buying an ASUS TURBO-RTX2070-8G. The benefit of this ASUS GeForce RTX 2070 graphics card is that it can at times be found for as low as $499 USD, in line with the cheapest RTX 2070 options and lower than many of the other RTX 2070 AIB models and certainly the RTX 2070 Founders Edition at $599 USD. Should you be considering the ASUS TURBO-RTX2070-8G, here are some benchmarks on Ubuntu Linux.

5 January 2019 - 6 Comments
The GPU Compute Performance From The NVIDIA GeForce GTX 680 To TITAN RTX

A few days back we posted initial Linux benchmarks of the NVIDIA TITAN RTX graphics card, the company's newest flagship Titan card shipping as of a few days ago. That initial performance review included a look at the TensorFlow performance and other compute tests along with some Vulkan Linux gaming benchmarks. In this article is a look at a more diverse range of GPU compute benchmarks while testing thirteen NVIDIA graphics cards going back to the GTX 680 Kepler days.

25 December 2018 - 9 Comments
Initial Linux Benchmarks Of The NVIDIA TITAN RTX Graphics Card For Compute & Gaming

Yesterday I unexpectedly got my hands on a NVIDIA TITAN RTX graphics card, the company's newest Titan graphics card built upon the Turing architecture and now available via retail channels at $2499 USD. Here is an initial look at the NVIDIA TITAN RTX performance under Ubuntu Linux with a variety of compute workloads (including TensorFlow) as well as, for entertainment, some Vulkan gaming benchmarks.

22 December 2018 - 18 Comments
NVIDIA GeForce RTX 2080 Linux Gaming Benchmarks

While we have delivered many Linux benchmarks over the past several weeks from the GeForce RTX 2070 and GeForce RTX 2080 Ti, up until recently we didn't have access to the RTX 2080, the card positioned between those two current consumer Turing graphics cards. In kicking off our RTX 2080 Linux benchmarking, here is a look at the Linux gaming performance compared to an assortment of AMD Radeon and NVIDIA GeForce graphics cards tested on Ubuntu Linux, with OpenCL/CUDA tests and more coming in the days ahead.

6 December 2018 - 10 Comments
NVIDIA GeForce RTX 2070 Linux Gaming Benchmarks

Last week following the launch of the RTX 2070 Turing graphics cards, I carried out some initial RTX 2070 compute benchmarks including TensorFlow and more common OpenCL/CUDA workloads. The GPU compute performance for this $499+ Turing GPU was quite good, especially for INT16 test cases where it often beat the GTX 1080 Ti. Available now are the Linux gaming benchmarks for the GeForce RTX 2070 compared to an assortment of other NVIDIA GeForce and AMD Radeon graphics cards on Ubuntu 18.10.

22 October 2018 - 17 Comments
NVIDIA GeForce RTX 2070 OpenCL, CUDA, TensorFlow GPU Compute Benchmarks

Here are the first of our benchmarks for the GeForce RTX 2070 graphics card that launched this week. In our inaugural Ubuntu Linux benchmarking with the GeForce RTX 2070 is a look at the OpenCL / CUDA GPU computing performance including with TensorFlow and various models being tested on the GPU. The benchmarks are compared to an assortment of available graphics cards and also include metrics for power consumption, performance-per-Watt, and performance-per-dollar.

18 October 2018 - 10 Comments
NVIDIA GeForce RTX 2080 Ti To GTX 980 Ti TensorFlow Benchmarks With ResNet-50, AlexNet, GoogLeNet, Inception, VGG-16

For those curious about the TensorFlow performance of the newly-released GeForce RTX 2080 series, kicking off this week of Linux benchmarking is a look at the Maxwell, Pascal, and Turing graphics cards in my possession when testing the NGC TensorFlow instance on CUDA 10.0 with the 410.57 Linux driver atop Ubuntu, exploring the performance of various models. Besides the raw performance, performance-per-Watt and performance-per-dollar metrics are also provided.

8 October 2018 - 32 Comments
Ethereum Crypto Mining Performance Benchmarks On The GeForce RTX 2080 Ti

Over the past few days since receiving the RTX 2080 Ti "Turing" graphics card I have been running many different Linux benchmarks on it, but one area I hadn't explored until this weekend was its cryptocurrency mining potential, which I tried out with the CUDA support in Ethereum's Ethminer.

23 September 2018 - 65 Comments
NVIDIA GeForce RTX 2080 Ti Shows Very Strong Compute Performance Potential

Besides the new GeForce RTX 2080 series being attractive for developers wanting to make use of new technologies like RTX/ray-tracing, mesh shaders, and DLSS (Deep Learning Super Sampling), CUDA and OpenCL benchmarking so far on the GeForce RTX 2080 Ti is yielding impressive performance -- even outside of the obvious AI / deep learning potential workloads with the Turing tensor cores. Here are some benchmarks looking at the OpenCL/CUDA performance on the high-end Maxwell, Pascal, and Turing cards as well as an AMD Radeon RX Vega 64 for reference. System power consumption, performance-per-Watt, and performance-per-dollar metrics also round out this latest Ubuntu Linux GPU compute comparison.

21 September 2018 - 20 Comments
NVIDIA GeForce GTX 680 To RTX 2080 Ti Graphics/Compute Performance

Yesterday we published the initial NVIDIA GeForce RTX 2080 Ti Linux benchmarks based upon my early testing of this high-end Turing graphics card paired with the new 410 Linux graphics driver. Today is a look at how the RTX 2080 Ti compares to the top-end cards going back to Kepler... Or, simply put, the GeForce GTX 680 vs. GTX 780 Ti vs. 980 Ti vs. 1080 Ti vs. 2080 Ti comparison with OpenGL and Vulkan graphics tests as well as some initial OpenCL / CUDA tests, with more Turing GPU compute tests currently underway. Making this historical comparison more interesting are power consumption and performance-per-Watt metrics.

20 September 2018 - 38 Comments

298 graphics card articles published on Phoronix.