Last month we carried out our fourth annual Linux Graphics Survey, in which we sought feedback from the Linux community about the most common graphics drivers and hardware in use and the display/GPU-related features desktop users are most interested in, while collecting other metrics to aid developers. Here are the results from this year's survey.
With LLVM 2.8 just released, we have been curious to see how the latest Low-Level Virtual Machine compiler code affects the performance of the LLVMpipe driver. This is the Gallium3D graphics driver that lives in Mesa and leverages the modular LLVM compiler to efficiently handle the graphics rendering workload on a modern CPU, serving as a much faster alternative to Mesa's legacy software rasterizer. To see how much of a performance impact - for better or worse - LLVM 2.8 has on this open-source software driver, we tested it when built against LLVM 2.6, 2.7, and the 2.8 SVN code.
Not only have we been busy testing Mesa 7.9 with the Intel and ATI/AMD drivers along with the Gallium3D drivers (including LLVMpipe), but the Nouveau driver, which continues to be developed by the open-source community for NVIDIA GPUs, has received a fresh round of tests too. Our first published benchmarks of the Nouveau Gallium3D driver were back in February, when it was nearing a decent state in terms of supported features and stability. Its DRM also finally entered the mainline Linux kernel earlier this year, thereby allowing many Linux distributions to use the Nouveau KMS driver, even though not many have yet adopted the Gallium3D driver for OpenGL acceleration. We delivered updated Gallium3D benchmarks in June with the latest Mesa code at that point, but since then a new GLSL compiler has been integrated into Mesa and many Nouveau changes have landed, so here are our most recent OpenGL benchmarks from this open-source NVIDIA driver.
As we have discussed in numerous articles and shown in various benchmarks, covering graphics processors that use a classic Mesa DRI driver through to newer NVIDIA/ATI hardware with Gallium3D support, Mesa 7.9 brings a lot to the table. There are many new features in Mesa 7.9 for all drivers, but in this article we are specifically looking at how the OpenGL performance of the classic R600 driver has changed compared to Mesa 7.7 and Mesa 7.8.
With Mesa 7.9 just around the corner, sporting a new GLSL compiler, support for new OpenGL extensions, and months' worth of other changes to core Mesa and its drivers, we decided to run some benchmarks of the latest Intel Arrandale graphics processor with the past few Mesa releases to see how the performance compares. We also have ATI and Nouveau Mesa benchmarks on the way.
Over the next few weeks there are a number of new Phoronix benchmarks to be published concerning the performance of Mesa 7.9 for both the Mesa classic and Gallium3D drivers from the different GPU vendors. Included in those tests will be new Intel Mesa benchmarks of the company's only officially supported 3D driver using one of the Arrandale processors, but for those currently missing out on the X Developers' Summit in Toulouse or PhoronixFest at Oktoberfest, here's a bonus article. For this extra round of benchmarking, we took one of the original Intel Atom test configurations with i945 graphics and ran it with every major Mesa release since Mesa 7.4.
For the past three years we have hosted an annual Linux Graphics Survey, in which we ask tens of thousands of users about their video card preferences, their driver information, and other questions about their view of the Linux graphics stack. This year we are hosting the survey once again so the development community can get a better understanding of the video hardware in use, which open-source and closed-source drivers are being used, and other relevant information that will help both developers and the Linux community.
AMD continues to abide by their commitment to provide open-source support for their graphics cards, and as proof of that, this afternoon they released their initial hardware acceleration code supporting the ATI Radeon HD 5000 "Evergreen" family of consumer-grade graphics processors. While this Evergreen support isn't yet finished and for the time being is targeted at Linux developers and enthusiasts, you can now play around with your ATI Radeon HD 5000 graphics processor on an open-source driver with 2D EXA, X-Video, and OpenGL acceleration.
With the imminent release of X.Org Server 1.9, last week we delivered benchmarks of Intel's 2D driver performance with X.Org Server 1.9. In those tests we found Intel's UXA (UMA Acceleration Architecture) performance changed only a bit -- for either better or worse -- with the updated X Server, but today we are looking at the 2D EXA performance of ATI Radeon hardware with this soon-to-be-released X Server.
X.Org Server 1.9 is set to be released as soon as next week, has already been pulled into Ubuntu 10.10, and is part of the X.Org 7.6 katamari. While X.Org Server 1.9 does not bring many exciting end-user changes like previous releases that introduced RandR 1.2, Multi-Pointer X / X Input 2.0, and other new technologies, there are plenty of bug fixes and other minor improvements throughout the X Server. In this article, we are looking at how the Intel DDX driver performance changes when upgrading from X.Org Server 1.8.2 to the latest X.Org Server 1.9 development code.
Last month we tested Intel's new GLSL compiler for Mesa with the ATI Radeon classic Mesa and Gallium3D drivers to see how this GL Shading Language compiler, designed by Intel employees for their hardware and open-source driver, works for the other open-source drivers, since all of the Mesa drivers will be affected once this "GLSL2" compiler is merged into the Mesa code-base by month's end. Intel's new shader compiler worked well with the ATI Radeon graphics driver except in Warsow, where serious regressions were visible; in the other games capable of running on Mesa, the experience was fine. What we have been curious to test since then with this new OpenGL shader compiler is the LLVMpipe driver -- a Gallium3D driver we have been very excited about, as it finally provides a better software rasterizer for Linux by leveraging Gallium3D and the Low-Level Virtual Machine (LLVM) compiler infrastructure to accelerate the Mesa state tracker atop a modern multi-core CPU that supports SSE4 instructions. We have now finished running tests of Intel's GLSL2 branch with the most recent LLVMpipe driver code.
Whether you own an ATI FirePro V3800 that retails for just over $100 USD, are the proud owner of an ATI FirePro V8800 that goes for over $1,300 USD, or have any of the FirePro products in between, you will want to update your graphics driver when AMD puts out its next stable software update. Back in March AMD put out an amazing FirePro Linux driver that increased the performance of their workstation graphics cards already on the market (and the other Evergreen-based workstation cards that entered the market soon after) by an astonishing amount. Our independent tests of that proprietary Linux driver update found that performance in some workstation applications had increased by up to 59% simply from installing the updated driver, while other OpenGL tests improved more modestly with 20%+ gains. AMD, though, is preparing to release another driver update for Microsoft Windows and Linux that boosts their workstation graphics performance even more. We have run tests of this new beta driver against the older driver with both their low-end and ultra-high-end FirePro products and have again found the improvements to be astonishing.
With Intel developers earlier this week expressing their plans to merge their new GLSL compiler into Mesa by the end of next month, a compiler that, besides providing various shader compiler optimizations and being a better framework going forward, is already set to correct 50+ bugs, we decided to try out this Mesa "GLSL2" compiler. However, as Intel explicitly stated that they have not tested this new GL Shading Language compiler, which has been in development for months, with any hardware drivers (or even Gallium3D) besides their own Intel DRI driver, we decided to see how well it works with the open-source Radeon classic and Gallium3D drivers. It ended up being both good and bad.
Last quarter we compared the Catalyst and Mesa driver performance using an ATI Radeon HD 4830 graphics card, compared the Gallium3D and classic Mesa drivers for ATI Radeon X1000 series hardware, and ultimately found that even with the ATI R500 class graphics cards the open-source driver is still playing catch-up to AMD's proprietary Catalyst Linux driver. In this article we have similar tests showing the performance disparity with ATI's much older R300 class hardware. Even with Radeon hardware that has had open-source support for much longer, the open-source drivers are not nearly as mature as an outdated Catalyst driver in the same configuration.
Two months ago we published our initial benchmarks of LLVMpipe, the Gallium3D driver that accelerates commands on the CPU rather than any GPU and, unlike other Linux software rasterizers, is much faster thanks to leveraging LLVM (the Low-Level Virtual Machine) on the back-end. Since then we have published new ATI Gallium3D driver benchmarks and yesterday put out Nouveau Gallium3D driver benchmarks, so today we are providing updated LLVMpipe driver results to show how well Gallium3D's LLVMpipe driver can accelerate your OpenGL games with a modern processor.
In recent weeks we have published a number of benchmarks showcasing the ATI Gallium3D driver that supports the R300-R500 graphics processors, as this open-source driver has been maturing at an exciting rate with impressive changes and measurable performance gains over a short period of time. This ATI Gallium3D driver in most instances is outperforming the classic Radeon Mesa driver that supports up through the ATI Radeon X1000 series graphics cards. However, how is the Nouveau driver, which supports NVIDIA's wide range of GeForce graphics cards, maturing? In February we published some Nouveau Gallium3D benchmarks, but now we have a fresh set of numbers from three different NVIDIA graphics cards, and we also compare the Nouveau Gallium3D driver to NVIDIA's proprietary Linux driver.
In this month's AMD Catalyst 10.6 driver update for Linux, AMD rolled out the ATI 2D Acceleration Architecture, which pleased many ATI Radeon customers, but they aren't the only ones working towards improved 2D support. Intel's open-source engineers have been working to optimize the 2D performance of their xf86-video-intel DDX driver, with much of this work clearly shown in the Intel 2.12 X.Org driver update. Here are some benchmarks showing the significant performance gains brought by this open-source Intel driver.
It has been two years since the ATI Radeon HD 4800 (RV770) series launched, so we have gone back and re-tested each Catalyst driver release since that monumental hardware launch to see how the performance has changed for the ATI Radeon HD 4850 graphics card. The Catalyst driver has certainly matured over the course of two years, speeding up the OpenGL performance of this hardware and bringing new features to the proprietary driver, but it has not exactly been smooth sailing.
Earlier this week AMD released the Catalyst 10.6 driver, which on the Linux side finally enabled their new 2D acceleration architecture by default, offered official support for Red Hat Enterprise Linux 5.5, and formalized their OpenGL 3.3/4.0 support. Since the release of the Catalyst 10.6 Linux driver, we have been running a new set of tests on this ATI 2D acceleration architecture, but the results are not what you might expect when compared to the open-source ATI Linux driver.
A number of weeks back, a set of benchmarks was published showing that even the latest open-source ATI 3D driver is still no match for an old Catalyst driver and that even Gallium3D lags behind the Catalyst driver for those interested in OpenGL gaming. However, in other areas the open-source ATI driver stack is beginning to win by measurable amounts.
Last week, prior to heading over to Germany for LinuxTag, I ran a new set of ATI R500 Gallium3D benchmarks with an ATI Radeon X1950PRO graphics card, comparing the latest Mesa/Gallium3D graphics driver performance in the Mesa 7.9-devel Git code, using both the Gallium3D and classic Mesa DRI drivers, to the older Mesa stack found in Ubuntu 10.04 LTS. The ATI "R300g" driver, as it's known, continues to advance, and over the past week this driver has pushed forward even more. Here is another set of ATI Gallium3D tests.
There was a talk last week at LinuxTag in Berlin by Egbert Eich about kernel mode-setting and the DRM (Direct Rendering Manager) graphics stack on Linux. Egbert is, of course, a long-time X developer and openSUSE developer at Novell who was one of the masterminds behind the RadeonHD graphics driver and has worked on various pieces of X over the years. In his KMS talk, Egbert briefly covered the history of the Linux graphics stack, the user- and kernel-space APIs for DRM mode-setting, and related topics. For those who missed his talk, below are his slides.
The past several months have been very exciting in the world of Gallium3D, the new graphics driver architecture for Linux and other operating systems that has been in development for years. This year we have witnessed the emergence of LLVMpipe to accelerate OpenGL commands on the CPU, Nouveau's Gallium3D driver starting to work well, and many other advancements. Over the past few months we have also been pleased with how the "R300g" driver has taken shape with this Gallium3D driver for ATI Radeon R300/400/500 series hardware (up through the Radeon X1000 series) stabilizing, performing well, and advancing beyond the classic Mesa 3D R300 driver. Today we have a fresh set of benchmarks looking at this ATI Gallium3D driver that soon will become the default.
Last week NVIDIA released their first 256.xx proprietary beta Linux display driver that brought many VDPAU improvements, installer improvements, support for new GLX extensions, various bug-fixes, and other enhancements. However, some user reports have shown the 256.xx driver is actually slower than NVIDIA's current pre-200.xx series drivers and so we have carried out a set of tests to see what things are looking like from within our labs. Our preliminary tests do indeed illustrate a drop in performance when upgrading to this new driver.
Yesterday we reported on VIA's Linux dreams not materializing, with their GEM/TTM memory management support still missing even though we are halfway into 2010, more than two years after VIA announced its most recent open-source initiative. It turns out, however, that what VIA views as its memory management work is actually done. VIA has inconspicuously handed over some of its code to the OpenChrome developers in order to create a new driver that has been dubbed the "openvia" driver. VIA has supposedly provided the source code for an X driver plus a TTM/GEM DRM, but this new project largely remains a mystery.
Last week we reported that the open-source ATI Linux driver had picked up improved power management in the form of dynamic power management and power management profiles that can be defined by the end-user. With the ATI Linux power management finally coming to fruition within the Linux kernel for its kernel mode-setting / DRM driver, we have decided to take a close look at how this power management support is working in the real world.
The software rasterizer used in Mesa, which allows OpenGL to run on the CPU without any assistance from the graphics processor, has largely been useless. Even with a modern-day, multi-core processor, the performance of Mesa's software rasterizer has been abysmal. The performance of Mesa's classic DRI drivers has traditionally been poor anyway compared to the high-performance, proprietary NVIDIA/ATI graphics drivers, but when dealing with just the software rasterizer, there really aren't any games or applications that run well. Fortunately, software acceleration on Gallium3D is very much a different story thanks to LLVM.
We have already published a look at the Fedora 13 Beta, delivered ATI Radeon benchmarks atop the Fedora 13 Beta, and have other articles on the way covering this new Fedora release; in this article we are investigating Nouveau's power performance on this newest Fedora release. If you are a mobile user planning to use the Nouveau stack right now, or you care the least bit about energy savings on your desktop, its power consumption alone may rule this open-source driver out even as a current possibility.
With the release of Fedora 13 Beta earlier this week, we have been testing out this Red Hat update on a few of our test systems. One area of interest to us has been to see how the open-source graphics drivers are performing with Fedora 13, since, after all, Red Hat is known to always ship the very latest DRM/Mesa/DDX bits in Fedora due to all of their upstream involvement, and this week is also the Fedora 13 Graphics Test Week. We already looked at the direction of Intel graphics with Fedora 13, so our next target was the open-source ATI graphics with this Linux desktop release that is codenamed Goddard. In this article, we have ATI R500 tests using the open-source driver stack as we test out the OpenGL performance and the power consumption compared to Fedora 12.
Yesterday we delivered benchmarks showing how the open-source ATI Radeon graphics driver stack in Ubuntu 10.04 compares to older releases of the proprietary ATI Catalyst Linux driver. Sadly, the latest open-source ATI driver still is no match even for a two- or four-year-old proprietary driver from ATI/AMD, but that was with the classic Mesa DRI driver. To yesterday's results we have now added results from ATI's Gallium3D (R300g) driver using a Mesa 7.9-devel Git snapshot from yesterday to see how it runs against the older Catalyst drivers.