
NVIDIA's Working On A New Driver Architecture?


Published on 05 December 2010 04:58 PM EST
Written by Michael Larabel in NVIDIA

When going over the mailing list messages from the past few days regarding concerns over NVIDIA's fence sync patches for X.Org Server 1.10, one statement by NVIDIA's James Jones indicates that the company is working on a new driver architecture. What could this new driver architecture hold in store?

In response to Keith Packard's proposed compromise on handling the fence synchronization patches, James Jones was rightfully a bit annoyed that this technical feedback had not come until months after NVIDIA put out the original patches and just days before the xorg-server 1.10 merge window closes. James noted that NVIDIA's rendering synchronization problem has been around for years and that he would really like to get these patches into xorg-server 1.10. The documentation could then be properly cleaned up in a 1.10 point release or in the 1.11 release, along with a verdict for compositing window manager maintainers and others on whether the DamageSubtractAndTrigger functionality is safe to rely upon.

In the second paragraph of that message, however, James went on to say that NVIDIA's driver would regardless remain non-compliant with implicit synchronization for the "foreseeable future", as it would simply be too much work to implement with reasonable performance on their current architecture. That is when James revealed they are working on a new architecture in which it would be easier to implement the implicitly synchronized behavior of the extension.
"Our drivers are going to be non-compliant in regard to the implicitly synchronized behavior for the foreseeable future. It is truly a mountain of work to implement it with reasonable performance in our current architecture. We're slowly adapting to an architecture where it'd be easier, and we could fix it at that time, but I doubt I'll get time to before then. I can live with being non-compliant. Apps have grudgingly accepted the quasi-defined behavior of texture-from-pixmap "loose binding" mode for years to get the performance benefits it offers."

This is the first time we're hearing of a new driver architecture from NVIDIA. The mention that they are "slowly adapting" to it, however, dampens hopes of it coming soon (the NVIDIA 300.xx driver series?) and suggests it may be more of an evolutionary step in their driver architecture than a revolutionary one.

We have sent an email to NVIDIA to try to get more information on this new driver architecture. Seeing as NVIDIA's proprietary Linux driver shares a common code-base with its Windows driver as well as its FreeBSD/Solaris support, we believe such a new architecture would continue to be shared across all platforms.

The NVIDIA Linux driver right now is at near performance parity with the Windows build, again because the code-base is common across all supported platforms aside from the OS-specific bits, and there is near feature parity too. Recently, though, feature support on Linux has fallen behind: the GeForce GTX 400/500 "Fermi" GPUs currently lack overclocking support as well as multi-GPU SLI support on Linux, and NVIDIA's Optimus technology is also unsupported.

If this is a major advancement to NVIDIA's driver architecture, it would be ideal if this work provided support for kernel mode-setting and ultimately fostered Wayland support, although NVIDIA has no plans to support Wayland at this time. It would also be good to properly support the Resize and Rotate (RandR) extension, which should now be easier with RandR 1.4.

Besides that, it's becoming more difficult to think of what architectural changes their closed-source driver would need: it's already fast with both 2D and 3D acceleration, there's support for the latest OpenGL and OpenCL specifications, video acceleration is superb with VDPAU (though support for VP8 and other open formats would be nice), and there aren't too many driver problems. It was a year ago that we interviewed Andy Ritger and there were no major breakthroughs then, but perhaps it's time for another Linux interview (post questions to our forums if there is interest)?

Before someone thinks otherwise, however, this new architecture would not be a migration to the Gallium3D architecture (it's still too immature at this point, would be a regression compared to their current driver architecture, does not support all of NVIDIA's requirements, etc.), nor is it likely some other major open-source breakthrough.

What else would you like to see improved within NVIDIA's proprietary graphics driver? Share with us in the forums while we await any word from NVIDIA.

About The Author
Michael Larabel is the principal author of Phoronix.com and founded the web-site in 2004 with a focus on enriching the Linux hardware experience and being the largest web-site devoted to Linux hardware reviews, particularly for products relevant to Linux gamers and enthusiasts but also commonly reviewing servers/workstations and embedded Linux devices. Michael has written more than 10,000 articles covering the state of Linux hardware support, Linux performance, graphics hardware drivers, and other topics. Michael is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated testing software.