NVIDIA's Working On A New Driver Architecture?

Written by Michael Larabel in NVIDIA on 5 December 2010 at 04:58 PM EST.
While going over the past few days of mailing list messages concerning NVIDIA's fence sync patches for X.Org Server 1.10, one statement by NVIDIA's James Jones indicated that the company is working on a new driver architecture. What could this new driver architecture hold in store?

In response to Keith Packard's proposed compromise for how to handle the fence synchronization patches, James Jones was rightfully a bit annoyed that this technical feedback had not come until months after NVIDIA put out the original patches and just days before the xorg-server 1.10 merge window was set to close. James went on to say that NVIDIA's synchronization rendering problem has been around for years and that he would really like to land these patches in xorg-server 1.10, then use a 1.10 point release or the 1.11 release to properly clear up the documentation and tell compositing window manager maintainers and others whether the DamageSubtractAndTrigger functionality is safe to rely upon.
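For those unfamiliar with the fence synchronization work being discussed, the rough idea is that a compositor can use X Sync fence objects to know when the server's rendering into a drawable has actually finished before texturing from it. Below is a minimal, hypothetical sketch using the libXext Sync client API (XSyncCreateFence and friends); the wait_for_fence() helper and its self-triggering are purely our own illustration to keep the snippet self-contained, not code from the patches, and the proposed DamageSubtractAndTrigger request itself is not shown.

#include <X11/Xlib.h>
#include <X11/extensions/sync.h>

/* Hypothetical helper: create a fence on a drawable, trigger it, and
 * await it before continuing to use the drawable's contents. In the
 * real patches the trigger would come from a request such as
 * DamageSubtractAndTrigger rather than from the client itself. */
static void wait_for_fence(Display *dpy, Drawable drawable)
{
    int event_base, error_base, major = 3, minor = 1;

    if (!XSyncQueryExtension(dpy, &event_base, &error_base) ||
        !XSyncInitialize(dpy, &major, &minor))
        return;  /* Sync extension (3.1+ is needed for fences) unavailable */

    /* Create an initially untriggered fence object on the drawable. */
    XSyncFence fence = XSyncCreateFence(dpy, drawable, False);

    /* Trigger it ourselves purely to keep this sketch self-contained. */
    XSyncTriggerFence(dpy, fence);

    /* Ask the server to stall processing of this client's subsequent
     * requests until the fence has been triggered. */
    XSyncAwaitFence(dpy, &fence, 1);

    XSyncDestroyFence(dpy, fence);
    XFlush(dpy);
}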

The second paragraph of James' message went on to say that NVIDIA's driver will regardless remain non-compliant with the implicit synchronization behavior for the "foreseeable future," as it would simply be too much work to implement with reasonable performance on their current architecture. That is where James mentioned they are working towards a new architecture in which it would be easier to implement the implicitly synchronized behavior of the extension.
- Our drivers are going to be non-compliant in regard to the implicitly synchronized behavior for the foreseeable future. It is truly a mountain of work to implement it with reasonable performance in our current architecture. We're slowly adapting to an architecture where it'd be easier, and we could fix it at that time, but I doubt I'll get time to before then. I can live with being non-compliant. Apps have grudgingly accepted the quasi-defined behavior of texture-from-pixmap "loose binding" mode for years to get the performance benefits it offers.
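The "loose binding" mode James refers to comes from the GLX_EXT_texture_from_pixmap extension used by compositing window managers: a redirected window's pixmap is bound once as a GL texture and then sampled across many frames, with the specification only loosely defining when later pixmap updates become visible in the texture. A rough, hypothetical sketch of that pattern (our own illustration; the bind_window_pixmap() helper is not from any real compositor) might look like this:

#include <GL/glx.h>
#include <GL/glxext.h>

/* Bind a redirected window's pixmap as a GL texture for compositing.
 * Assumes a GLX context is current and that 'config' was chosen with
 * GLX_BIND_TO_TEXTURE_RGBA_EXT support. */
static GLXPixmap bind_window_pixmap(Display *dpy, GLXFBConfig config,
                                    Pixmap pixmap, GLuint texture)
{
    const int attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    GLXPixmap glx_pixmap = glXCreatePixmap(dpy, config, pixmap, attribs);

    /* Extension entry points are resolved at runtime. */
    PFNGLXBINDTEXIMAGEEXTPROC pglXBindTexImageEXT =
        (PFNGLXBINDTEXIMAGEEXTPROC)
        glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");

    /* "Loose" binding: bind once, then keep sampling the texture every
     * frame without rebinding. Whether later pixmap updates show up is
     * only loosely defined, which is the quasi-defined behavior apps
     * have accepted in exchange for performance. */
    glBindTexture(GL_TEXTURE_2D, texture);
    pglXBindTexImageEXT(dpy, glx_pixmap, GLX_FRONT_LEFT_EXT, NULL);

    return glx_pixmap;
}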

This is the first time we are hearing of a new driver architecture from NVIDIA, though the mention that they are "slowly adapting" to it dampens hope of it coming soon (the NVIDIA 300.xx driver series?) and suggests it may be more of an evolutionary step in their driver architecture than a revolutionary one.

We have sent an email to NVIDIA to try to get more information on this new driver architecture. Seeing as NVIDIA's proprietary Linux driver shares a common code-base with their Windows, FreeBSD, and Solaris drivers, we believe such a new architecture would continue to be shared across all platforms.

The NVIDIA Linux driver right now is at near performance parity with the Windows build, again because the code-base is common across all supported platforms aside from the OS-specific bits, and there is near feature parity too. Recently, though, the feature support on Linux has fallen behind: the GeForce GTX 400/500 "Fermi" GPUs currently lack overclocking support as well as multi-GPU SLI support on Linux, and NVIDIA's Optimus technology is also unsupported on Linux.

If this is a major advancement in NVIDIA's driver architecture, it would be ideal if the work provided support for kernel mode-setting and ultimately helped foster Wayland support, although NVIDIA has no plans to support Wayland at this time. Of course, it would also be good to properly support the Resize and Rotate (RandR) extension, seeing as it should now be easier to support with RandR 1.4.

Besides that, it is becoming more difficult to think of what architectural changes their closed-source driver would need: it is already fast with both 2D and 3D acceleration, it supports the latest OpenGL and OpenCL specifications, its video acceleration is superb with VDPAU (though support for VP8 and other open formats would be nice), and there are not too many driver problems. It was a year ago that we interviewed Andy Ritger and there were no major breakthroughs then, but perhaps it is time for another Linux interview (post questions to our forums if there is interest)?

Before someone thinks otherwise, however, this new architecture would not be a migration to the Gallium3D architecture (it is still too immature at this point, would be a regression compared to their current driver architecture, and does not support all of NVIDIA's requirements), nor is it likely to be some other major open-source breakthrough.

What else would you like to see improved within NVIDIA's proprietary graphics driver? Share with us in the forums while we await any word from NVIDIA.
