$699 USD is a lot to spend on a graphics card, but damn she is a beauty. Last month NVIDIA launched the GeForce GTX 1080 as the current top-end Pascal card, and it looked great under Windows. Now, having finally gotten my hands on the card, over the past few days I've been putting it through its paces under Ubuntu Linux with the major open APIs: OpenGL, OpenCL, Vulkan, and VDPAU. Not only is the raw performance of the GeForce GTX 1080 on Linux fantastic, but the performance-per-Watt improvements made my jaw drop more than a few times. Here are my initial Linux results of the Gigabyte GeForce GTX 1080 Founder's Edition.
In part due to the Phoronix 12th birthday this week with its various historical performance comparisons and other interesting benchmarks, and in part to prep some long-term comparison data for the Radeon RX 480 launch later this month, for your viewing pleasure this morning are benchmarks of a variety of graphics cards going back to the Radeon HD 3000 (R600) series up through the Radeon R9 Fury (Fiji) graphics cards. Enjoy this fun article focusing primarily on the OpenGL performance under Linux over several generations of ATI/AMD GPUs, along with calculated performance-per-Watt figures.
Last week I published a 16-way NVIDIA GeForce performance comparison on Linux looking at the OpenGL performance evolution from the GeForce 9800GTX to the GeForce GTX 980 Ti / TITAN X, as I get ready to compare the long-term NVIDIA Linux performance to Pascal. This week I've done similar tests on the AMD Radeon side and compared these OpenGL performance and power consumption / performance-per-Watt numbers to NVIDIA.
Similar to this week's article looking at the OpenGL performance from the GeForce 9800GTX through the GeForce GTX 980 Ti and TITAN X in preparation for Pascal Linux testing ahead, today I am doing a similar comparison looking at the OpenCL compute performance. For thirteen NVIDIA GeForce graphics cards from Fermi to Maxwell I ran a popular OpenCL benchmark while comparing not only the raw performance but also the performance-per-Watt.
In preparing to hopefully test the GeForce GTX 1070/1080 "Pascal" graphics cards under Linux in the days ahead, I've been re-testing my collection of available NVIDIA GeForce graphics cards going back to the GeForce 9800GTX up through the Maxwell-based GeForce GTX 980 Ti and GTX TITAN X. Besides looking at the OpenGL performance at 1080p and 4K, I've also been recording the power metrics and performance-per-Watt data.
At the end of January NVIDIA rolled out the GeForce GT 710. This isn't some shiny new low-end Maxwell card, but rather from the Kepler lineage and retails for under $50 USD as a discrete solution to compete with integrated Intel and AMD graphics. Here are some initial benchmarks of a passively-cooled ASUS GeForce GT 710 under Linux.
Earlier this week I carried out an OpenGL performance comparison of NVIDIA GPUs going back 10 years that included 27 different graphics cards from the GeForce 8 series through the latest-generation GeForce 900 Maxwell graphics cards. In this weekend's article are some complementary tests from that comparison, with the OpenGL benchmarks run at 1920 x 1080.
With most of my NVIDIA graphics cards out earlier this week for the 27-way OpenGL and performance-per-Watt comparison on NVIDIA graphics cards going back a decade, I took the opportunity to also run a smaller, fresh OpenCL/CUDA GPU compute comparison on various recent NVIDIA GPUs.
Curious how the raw OpenGL performance and power efficiency have improved going back a decade to the GeForce 8 days? In this article is a 27-way graphics card comparison testing graphics cards from each generation going from the GeForce 8 series through the GeForce GTX 900 series and ending with the $999 GeForce GTX TITAN X. If you are interested in how graphics card performance has evolved, this is a fun must-read article.
What's the best way to beat the winter blues? Benchmarking, of course! For starting off our 2016 of graphics card benchmarking under Linux, I've been working on a large round-up of re-testing AMD Radeon graphics cards from the HD 2900XT (R600) graphics card through the latest R9 Fury (Fiji) graphics card while running Ubuntu and using the very latest open-source graphics driver stack. Here's an interesting look at how the OpenGL graphics performance has evolved on the AMD side over the past decade while also looking at the performance-per-Watt.
If you are wanting to buy an AMD Radeon or NVIDIA GeForce graphics card this holiday season, here is a fresh round-up of thirteen different graphics cards using the latest AMD/NVIDIA drivers. Beyond just running several Linux OpenGL game tests -- including some Steam tests -- these results also have the performance-per-dollar benchmark results computed too for finding the best value for 1080p Linux gaming this season.
With having just added some new OpenCL/CUDA benchmarks to the Phoronix Test Suite and OpenBenchmarking.org, I took this opportunity to run a variety of OpenCL/CUDA GPGPU tests on a wide range of NVIDIA GeForce graphics cards.
Earlier this week I posted a graphics card comparison using the open-source drivers and looking at the best value and power efficiency. In today's article, a larger range of AMD Radeon and NVIDIA GeForce graphics cards are tested under a variety of modern Linux OpenGL games/demos using the proprietary AMD/NVIDIA Linux graphics drivers, to see how not only the raw performance compares but also the performance-per-Watt, overall power consumption, and performance-per-dollar metrics.
While we routinely run performance comparisons at Phoronix looking at the OpenGL performance on the latest open-source Linux drivers with a variety of different graphics cards, in this article we're not focusing only on the raw performance but also what graphics cards on the latest Radeon/Nouveau drivers deliver the best power efficiency and value (performance-per-dollar). Here's a look at a mixture of modern AMD Radeon and NVIDIA GeForce graphics cards with Mesa 11.1-devel, LLVM 3.8 SVN, and the Linux 4.3 development kernel.
Following last week's NVIDIA GeForce GTX 950 launch I took the current complete NVIDIA desktop line-up of Maxwell GPUs and ran a second set of Linux OpenGL gaming tests on each of them, this time looking closely at the performance-per-dollar and performance-per-Watt metrics. Here's a look at these NVIDIA Linux results if you're wanting to find the graphics processor delivering the best value as a Linux gamer.
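The performance-per-dollar and performance-per-Watt figures referenced throughout these articles boil down to simple ratios of frame-rate over price and over average power draw. A minimal sketch of that calculation, using hypothetical FPS, pricing, and wattage numbers (not the measured figures from the actual articles):

```python
# Hypothetical benchmark figures for illustration only; the real
# articles use measured frame-rates and logged power-meter data.
cards = {
    "GTX 950": {"fps": 60.0, "price_usd": 160, "avg_watts": 90.0},
    "GTX 960": {"fps": 75.0, "price_usd": 200, "avg_watts": 120.0},
}

for name, d in cards.items():
    perf_per_dollar = d["fps"] / d["price_usd"]  # FPS per dollar spent
    perf_per_watt = d["fps"] / d["avg_watts"]    # FPS per Watt consumed
    print(f"{name}: {perf_per_dollar:.3f} FPS/$, {perf_per_watt:.3f} FPS/W")
```

Higher is better for both ratios; a cheap, efficient card can lead on value even while trailing on raw frame-rate.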
NVIDIA this morning is announcing the GeForce GTX 950, which they are advertising as the successor to the GeForce GTX 650 that's still one of the most commonly used graphics cards by gamers. The GeForce GTX 950 is going to retail for less than $200 while claiming to deliver three times the performance of the GTX 650 and twice the performance efficiency of this former mid-range Kepler graphics card. The past few days I've been testing out the EVGA GeForce GTX 950 to great success under Linux.
Intel's Core i5 6600K and i7 6700K processors released earlier this month feature HD Graphics 530 as the first Skylake graphics processor. Given that Intel's Open-Source Technology Center has been working on open-source Linux graphics driver support for Skylake for over a year, I've been quite excited to see how the Linux performance compares to Haswell and Broadwell as well as AMD's APUs. In this article is the first of these OpenGL benchmarks comparing the Core i5 6600K to other offerings from Intel and AMD.
For the past few weeks I've been extensively testing the NVIDIA GeForce GTX 980 Ti on Linux and it's been a rather pleasant experience. Compared to the troubles with the R9 Fury on Catalyst Linux, the GTX 980 Ti has been trouble-free and yielded terrific results, assuming you're okay with using NVIDIA's proprietary driver.
When AMD announced the Radeon R9 Fury line-up powered by the "Fiji" GPU with High Bandwidth Memory, I was genuinely very excited to get my hands on this graphics card. The tech sounded great and offered up a lot of potential, and once finally finding an R9 Fury in stock, I shelled out nearly $600 for this graphics card. Unfortunately though, thanks to the current state of the Catalyst Linux driver, the R9 Fury on Linux is a gigantic waste for OpenGL workloads. The R9 Fury results only exemplify the hideous state of AMD's OpenGL support in their Catalyst Linux driver, with an NVIDIA graphics card costing $200 less consistently delivering better gaming performance.
Being in the middle of working on Linux reviews for the NVIDIA GeForce GTX 980 Ti and AMD Radeon R9 Fury, there have been a lot of fresh graphics processor benchmarks running this week at Phoronix. As the first of these updated large Linux comparisons on the very latest public drivers, here is a 15-way NVIDIA GeForce and AMD Radeon graphics card comparison when running various Linux games at a 4K resolution.
The latest graphics card we've been testing the past few weeks under Linux is the MSI Radeon R7 370 GAMING 4G. This mid-range graphics card is equipped with a very quiet heatsink fan and will work on both the latest open and closed-source AMD Linux graphics drivers. Of interest to many Linux enthusiasts who are concerned about noise is that with MSI's ZERO FROZR feature, the fans will stop completely while the system is idling or just engaging in light gaming or multimedia tasks.
Earlier this week I posted some interesting Linux graphics benchmarks comparing the open-source Mesa/Gallium3D drivers for the Iris Pro 6200 Graphics on the Intel Core i7-5775C "Broadwell" CPU compared to several discrete graphics cards. Those results were quite interesting with this new socketed Intel CPU able to blow discrete mid-range AMD Radeon graphics cards out of the water on the open-source Linux drivers. Here's the next part of the testing in showing how the Iris Pro 6200 graphics compare to Haswell HD Graphics 4600 and the current top-end APU, the AMD A10-7870K Godavari.
The Intel Iris Pro Graphics 6200 (GT3e) as the fastest Broadwell GPU boasting an eDRAM cache and 48 execution units is a dream for open-source fans. Backed by a fully open-source Linux graphics driver, the Iris Pro Graphics 6200 found on the socketed Core i7 5775C can compete with mid-range Radeon graphics cards on their open-source driver.
Last year for the 10th Phoronix birthday I did a 60+ GPU comparison with the open-source drivers and a 30-way graphics card comparison with the binary AMD/NVIDIA Linux drivers. With Phoronix turning eleven this week, I did another large graphics card comparison under Linux... The results today aren't as large as last year, but represent most of the latest-generation AMD and NVIDIA hardware while running Ubuntu 15.04. With more games coming to Linux, there's new titles covered in this year's massive comparison including Civilization: Beyond Earth, Metro 2033 Redux, and many others.
Last week NVIDIA unveiled the GeForce GTX TITAN X during their annual GPU Tech Conference. Of course, all of the major reviews at launch were under Windows and thus largely focused on the Direct3D performance. Now that our review sample arrived this week, I've spent the past few days hitting the TITAN X hard under Linux with various OpenGL and OpenCL workloads compared to other NVIDIA and AMD hardware on the binary Linux drivers.
Last week BioShock: Infinite was finally released for Steam on Linux. Following the premiere of that game for Linux, I posted the AMD Catalyst Linux results for this game followed by a preview of the AMD vs. NVIDIA graphics card results. For those curious about the performance of this game across a broader graphics card selection, here are the BioShock: Infinite results from eighteen different graphics cards running this new Linux game.
Last week NVIDIA released the GeForce GTX 960, a great $200 GPU for Linux gamers that is based on their new power-efficient Maxwell architecture. On launch day I delivered some initial performance figures of the full GeForce GTX 900 series line-up along with other graphics cards, and following that I did many new NVIDIA Linux GPU tests going back to the GeForce GTX 400 (Fermi) series. Those tests didn't include any AMD Radeon graphics cards, but this article supplies such numbers, making for a new 18-way graphics card comparison with the latest Linux graphics drivers.
This morning NVIDIA is formally announcing the GeForce GTX 960 as the latest Maxwell GPU. The GeForce GTX 960 is a mid-range GPU priced starting at $200 and comes with a compelling set of features. The past few days I've been testing out the EVGA GeForce GTX 960 2GB graphics card and in this article are some initial performance figures under Linux.
The latest massive set of Linux test data we have to share with Linux gamers and enthusiasts is a look at Counter-Strike: Global Offensive and Team Fortress 2 when using the very newest open-source Radeon graphics driver code. The very latest open-source Radeon driver code tested with these popular Valve Linux games were the Linux 3.18 Git kernel, Mesa 10.4-devel, LLVM 3.6 SVN, and xf86-video-ati 7.5.99. With this bleeding edge code there were sixteen AMD Radeon graphics cards tested from low to high-end and spanning several generations. Beyond looking at the frame-rate results, there's also power consumption, performance-per-Watt, GPU core temperature, and CPU usage to go along with all of these results. Enjoy!
Since last month's Linux review of the GeForce GTX 980 as NVIDIA's newest high-end GPU powered by their Maxwell architecture, many Phoronix readers have been requesting Ubuntu Linux tests of the GTX 970 too. I've now got my hands on an EVGA GeForce GTX 970 and am putting it through its paces today.