After weeks of anticipation, AMD's Radeon RX 480 "Polaris" graphics card is officially launching today! This graphics card starts at just $199 USD ($239 USD for the 8GB version) and has day-one Linux support! There is open-source driver support available as well as an AMDGPU-PRO update expected today for those wanting to make use of this newer hybrid Linux driver stack. I've been testing the Radeon RX 480 under Linux for the past week with both driver stacks and have my initial results to share this morning.
If you were amazed by the GeForce GTX 1080 performance under Linux but its ~$699 USD price-tag is too much to handle, the GeForce GTX 1070 is now shipping for $399 to $449 USD. NVIDIA sent over a GeForce GTX 1070 and I've been putting it through its paces under Linux with a variety of OpenGL, OpenCL, and Vulkan benchmarks along with CUDA and deep learning tests. Here's the first look at the GeForce GTX 1070's performance under Ubuntu Linux.
Last week I published the first Linux review of the GeForce GTX 1080, followed by performance-per-Watt results and an OpenGL comparison reaching as far back as the 9800GTX, among other interesting follow-up tests with OpenGL/Vulkan/OpenCL. Since then one of the most popular requests has been for deep learning benchmarks on the GTX 1080 along with some CUDA benchmarks, for those not relying upon OpenCL for open GPGPU computing. Here are some raw performance numbers as well as performance-per-Watt results in the CUDA space.
Last week when posting my initial NVIDIA GeForce GTX 1080 Linux review, the Radeon Linux numbers I included came from the latest open-source driver stack, since that is what most Phoronix readers seem interested in as of late given the rapid progress of OpenGL 4.x support inside Mesa, the hybrid driver stack also using the AMDGPU kernel driver, and so on. But some readers expressed curiosity about how AMDGPU-PRO performs relative to NVIDIA, particularly against the new GTX 1080 graphics processor. So here is a fresh NVIDIA vs. AMDGPU-PRO graphics card comparison on Linux.
Now that my initial GeForce GTX 1080 Linux review is out the door, I spent this weekend on a "fun" comparison out of curiosity to see how raw OpenGL and OpenCL performance has improved over the generations, going back to the once-powerful GeForce 9800GTX and including the top-end cards of the Kepler-based GeForce 600/700 series and the Maxwell-based 900 series too.
$699 USD is a lot to spend on a graphics card, but damn she is a beauty. NVIDIA launched the GeForce GTX 1080 last month as the current top-end Pascal card and it looked great under Windows; now that I've finally had my hands on the card for the past few days, I've been putting it through its paces under Ubuntu Linux with the major open APIs of OpenGL, OpenCL, Vulkan, and VDPAU. Not only is the raw performance of the GeForce GTX 1080 on Linux fantastic, but the performance-per-Watt improvements made my jaw drop more than a few times. Here are my initial Linux results for the Gigabyte GeForce GTX 1080 Founders Edition.
In part due to the Phoronix 12th birthday this week, with various historical performance comparisons and other interesting benchmarks being run, and in part due to prepping some long-term comparison data ahead of the Radeon RX 480 launch later this month, for your viewing pleasure this morning are benchmarks of a variety of graphics cards going back to the Radeon HD 3000 (R600 family) series up through the Radeon R9 Fury (Fiji) graphics cards. Enjoy this fun article focusing primarily on the OpenGL performance under Linux over several generations of ATI/AMD GPUs, along with calculated performance-per-Watt.
Last week I published a 16-way NVIDIA GeForce performance comparison on Linux looking at the OpenGL performance evolution from the GeForce 9800GTX to the GeForce GTX 980 Ti / TITAN X, in preparation for comparing NVIDIA's long-term Linux performance to Pascal. This week I've done similar tests on the AMD Radeon side and compared those OpenGL performance and power consumption / performance-per-Watt numbers to NVIDIA's.
Similar to this week's article looking at the OpenGL performance from the GeForce 9800GTX through the GeForce GTX 980 Ti and TITAN X in preparation for Pascal Linux testing ahead, today I am running the same comparison for OpenCL compute performance. For thirteen NVIDIA GeForce graphics cards from Fermi to Maxwell, I ran a popular OpenCL benchmark while comparing not only the raw performance but also the performance-per-Watt.
In preparing to hopefully test the GeForce GTX 1070/1080 "Pascal" graphics cards under Linux in the days ahead, I've been re-testing my collection of available NVIDIA GeForce graphics cards going back to the GeForce 9800GTX up through the Maxwell-based GeForce GTX 980 Ti and GTX TITAN X. Besides looking at the OpenGL performance at 1080p and 4K, I've also been recording the power metrics and performance-per-Watt data.
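For readers curious how performance-per-Watt figures like these are typically derived, here is a minimal Python sketch assuming an average frame rate and a series of overall system power readings sampled during each benchmark run; the helper name and all numbers are hypothetical illustration values, not Phoronix Test Suite internals:

```python
# Minimal performance-per-Watt sketch: average frame rate divided by
# the average overall system power draw sampled during the run.
# All values below are hypothetical illustration numbers.

def perf_per_watt(avg_fps: float, watt_samples: list[float]) -> float:
    """Return frames-per-second per Watt for one benchmark run."""
    avg_watts = sum(watt_samples) / len(watt_samples)
    return avg_fps / avg_watts

# e.g. a card averaging 95 FPS while the system draws ~240 Watts
samples = [238.0, 241.5, 240.2, 239.8]
print(f"{perf_per_watt(95.0, samples):.3f} FPS per Watt")
```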
At the end of January NVIDIA rolled out the GeForce GT 710. This isn't some shiny new low-end Maxwell card, but rather from the Kepler lineage and retails for under $50 USD as a discrete solution to compete with integrated Intel and AMD graphics. Here are some initial benchmarks of a passively-cooled ASUS GeForce GT 710 under Linux.
Earlier this week I carried out an OpenGL performance comparison of NVIDIA GPUs going back 10 years that included 27 different graphics cards from the GeForce 8 series through the latest-generation GeForce 900 Maxwell graphics cards. In this weekend article are some complementary results from that comparison, with the OpenGL benchmarks run at 1920 x 1080.
With most of my NVIDIA graphics cards out earlier this week for the 27-way OpenGL and performance-per-Watt comparison on NVIDIA graphics cards going back a decade, I took the opportunity to also run a smaller, fresh OpenCL/CUDA GPU compute comparison on various recent NVIDIA GPUs.
Curious how raw OpenGL performance and power efficiency have improved going back a decade to the GeForce 8 days? This article is a 27-way graphics card comparison testing cards from each generation, from the GeForce 8 series through the GeForce GTX 900 series and ending with the $999 GeForce GTX TITAN X. If you are interested in how graphics card performance has evolved, this is a fun must-read article.
What's the best way to beat the winter blues? Benchmarking, of course! To start off our 2016 graphics card benchmarking under Linux, I've been working on a large round-up re-testing AMD Radeon graphics cards from the HD 2900XT (R600) through the latest R9 Fury (Fiji) while running Ubuntu with the very latest open-source graphics driver stack. Here's an interesting look at how OpenGL graphics performance has evolved on the AMD side over the past decade, along with the performance-per-Watt.
If you are looking to buy an AMD Radeon or NVIDIA GeForce graphics card this holiday season, here is a fresh round-up of thirteen different graphics cards using the latest AMD/NVIDIA drivers. Beyond running several Linux OpenGL game tests -- including some Steam tests -- these results also include computed performance-per-dollar metrics for finding the best value for 1080p Linux gaming this season.
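As a rough illustration of how such a performance-per-dollar ranking can be computed, here is a small Python sketch; the card names, frame rates, and prices below are hypothetical placeholders rather than results from this round-up:

```python
# Performance-per-dollar sketch: rank cards by average FPS divided by
# retail price. All frame rates and prices are hypothetical placeholders.

cards = {
    # name: (average_fps, price_usd)
    "Card A": (62.0, 159.0),
    "Card B": (58.0, 179.0),
    "Card C": (71.0, 199.0),
}

ranked = sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (fps, price) in ranked:
    print(f"{name:8s} {fps / price:.3f} FPS per USD")
```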
Having just added some new OpenCL/CUDA benchmarks to the Phoronix Test Suite and OpenBenchmarking.org, I took the opportunity to run a variety of OpenCL/CUDA GPGPU tests on a wide range of NVIDIA GeForce graphics cards.
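For those wanting to reproduce this sort of GPGPU run themselves, the tests can be scripted around the Phoronix Test Suite; below is a minimal Python sketch, assuming phoronix-test-suite is installed on the PATH and using pts/juliagpu as one example OpenCL test profile (any profile from OpenBenchmarking.org could be substituted):

```python
# Sketch of driving the Phoronix Test Suite from Python to run an
# OpenCL test profile; assumes phoronix-test-suite is on the PATH.
import subprocess

# "benchmark" installs the test profile if needed and then runs it.
subprocess.run(["phoronix-test-suite", "benchmark", "pts/juliagpu"], check=True)
```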
Earlier this week I posted a graphics card comparison using the open-source drivers that looked at the best value and power efficiency. Today's article tests a larger range of AMD Radeon and NVIDIA GeForce graphics cards under a variety of modern Linux OpenGL games/demos while using the proprietary AMD/NVIDIA Linux graphics drivers, to see how not only the raw performance compares but also the performance-per-Watt, overall power consumption, and performance-per-dollar metrics.
While we routinely run performance comparisons at Phoronix looking at OpenGL performance with the latest open-source Linux drivers across a variety of graphics cards, in this article we're not focusing only on raw performance but also on which graphics cards deliver the best power efficiency and value (performance-per-dollar) on the latest Radeon/Nouveau drivers. Here's a look at a mixture of modern AMD Radeon and NVIDIA GeForce graphics cards with Mesa 11.1-devel, LLVM 3.8 SVN, and the Linux 4.3 development kernel.
Following last week's NVIDIA GeForce GTX 950 launch, I took the current complete NVIDIA desktop line-up of Maxwell GPUs and ran a second set of Linux OpenGL gaming tests on each of them, this time looking closely at the performance-per-dollar and performance-per-Watt metrics. Here's a look at these NVIDIA Linux results if you want to find the graphics processor delivering the best value as a Linux gamer.
NVIDIA this morning is announcing the GeForce GTX 950, which they are advertising as the successor to the GeForce GTX 650, still one of the most commonly used graphics cards among gamers. The GeForce GTX 950 will retail for less than $200 while claiming to deliver three times the performance of the GTX 650 and twice the performance efficiency of that former mid-range Kepler graphics card. The past few days I've been testing out the EVGA GeForce GTX 950 to great success under Linux.
Intel's Core i5 6600K and i7 6700K processors released earlier this month feature HD Graphics 530, the first Skylake graphics processor. Given that Intel's Open-Source Technology Center has been working on open-source Linux graphics driver support for Skylake for over a year, I've been quite excited to see how its Linux performance compares against Haswell, Broadwell, and AMD's APUs. In this article is the first of these OpenGL benchmarks comparing the Core i5 6600K to other offerings from Intel and AMD.
For the past few weeks I've been extensively testing the NVIDIA GeForce GTX 980 Ti on Linux and it's been a rather pleasant experience. Compared to the troubles with the R9 Fury on Catalyst Linux, the GTX 980 Ti has been trouble-free and yielded terrific results, assuming you're okay with using NVIDIA's proprietary driver.
When AMD announced the Radeon R9 Fury line-up powered by the "Fiji" GPU with High Bandwidth Memory, I was genuinely very excited to get my hands on this graphics card. The tech sounded great and offered a lot of potential, and once I finally found an R9 Fury in stock, I shelled out nearly $600 for it. Unfortunately, thanks to the current state of the Catalyst Linux driver, the R9 Fury on Linux is a gigantic waste for OpenGL workloads. The R9 Fury results only exemplify the hideous state of AMD's OpenGL support in their Catalyst Linux driver, with an NVIDIA graphics card costing $200 less consistently delivering better gaming performance.
Being in the middle of working on Linux reviews for the NVIDIA GeForce GTX 980 Ti and AMD Radeon R9 Fury, there have been a lot of fresh graphics processor benchmarks running this week at Phoronix. As the first of these updated large Linux comparisons on the very latest public drivers, here is a 15-way NVIDIA GeForce and AMD Radeon graphics card comparison running various Linux games at a 4K resolution.
The latest graphics card we've been testing the past few weeks under Linux is the MSI Radeon R7 370 GAMING 4G. This mid-range graphics card is equipped with a very quiet heatsink fan and works with both the latest open and closed-source AMD Linux graphics drivers. Of interest to many Linux enthusiasts concerned about noise, with MSI's ZERO FROZR feature the fans stop completely while the system is idling or just engaging in light gaming or multimedia tasks.
Earlier this week I posted some interesting Linux graphics benchmarks comparing the open-source Mesa/Gallium3D drivers for the Iris Pro 6200 Graphics on the Intel Core i7-5775C "Broadwell" CPU against several discrete graphics cards. Those results were quite interesting, with this new socketed Intel CPU able to blow discrete mid-range AMD Radeon graphics cards out of the water on the open-source Linux drivers. Here's the next part of the testing, showing how the Iris Pro 6200 graphics compare to Haswell's HD Graphics 4600 and the current top-end APU, the AMD A10-7870K Godavari.
The Intel Iris Pro Graphics 6200 (GT3e), the fastest Broadwell GPU boasting an eDRAM cache and 48 execution units, is a dream for open-source fans. Backed by a fully open-source Linux graphics driver, the Iris Pro Graphics 6200 found on the socketed Core i7-5775C can compete with mid-range Radeon graphics cards on their open-source driver.
Last year for the 10th Phoronix birthday I did a 60+ GPU comparison with the open-source drivers and a 30-way graphics card comparison with the binary AMD/NVIDIA Linux drivers. With Phoronix turning eleven this week, I did another large graphics card comparison under Linux... Today's results aren't as large as last year's, but represent most of the latest-generation AMD and NVIDIA hardware while running Ubuntu 15.04. With more games coming to Linux, there are new titles covered in this year's massive comparison, including Civilization: Beyond Earth, Metro 2033 Redux, and many others.
Last week NVIDIA unveiled the GeForce GTX TITAN X during their annual GPU Tech Conference. Of course, all of the major reviews at launch were under Windows and thus largely focused on the Direct3D performance. Now that our review sample arrived this week, I've spent the past few days hitting the TITAN X hard under Linux with various OpenGL and OpenCL workloads compared to other NVIDIA and AMD hardware on the binary Linux drivers.