NVIDIA GeForce GTX 460 On Linux

Written by Michael Larabel in Graphics Cards on 23 August 2010 at 03:00 AM EDT.

To sum things up, the $200 GeForce GTX 460 ran the tested games well and in most of the tests it was able to pull a narrow lead over the similarly priced ATI Radeon HD 4890. When run at 2560 x 1600, however, the Radeon HD 4890 commonly won by a narrow margin, and in the most demanding OpenGL Linux test, Unigine Heaven, the Radeon HD 4890 was the clear winner. This is somewhat interesting since GeForce GTX 460 results under Microsoft Windows show the GF104 part having the upper hand over the Radeon HD 4890 by a small margin.

Next up is the video decoding performance of these graphics cards. Using the Phoronix Test Suite's video-cpu-usage test profile, the CPU usage was charted in real-time as a portion of the 1080p H.264 version of the "Big Buck Bunny" Blender movie was played back in MPlayer. First we looked at the CPU usage when using the X-Video interface, which is supported by the ATI and NVIDIA proprietary drivers as well as the open-source drivers, though X-Video does not do much these days to offload the video playback work onto the graphics processor.
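
For those wanting to roughly reproduce the X-Video run by hand, the commands below are a sketch rather than the exact test harness: the clip filename is just a placeholder, and the invocations assume MPlayer's stock xv video output plus the video-cpu-usage profile named above.

    # Play the clip through MPlayer's X-Video output (filename is a placeholder)
    mplayer -vo xv big_buck_bunny_1080p_h264.mov

    # Or let the Phoronix Test Suite drive the playback and chart the CPU usage
    phoronix-test-suite benchmark video-cpu-usage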

While X-Video is not too useful, with the Intel Core i7 920 overclocked to 3.60GHz the CPU usage was next to zero during the entire playback process. The chart is a mess to look at, but it is the average and peak numbers that matter most. The X-Video CPU usage appeared to be slightly higher with the GeForce GTX 460, though not by anything significant, and most NVIDIA users will not be using X-Video anyway but rather NVIDIA's flagship VDPAU, the Video Decode and Presentation API for Unix. In the next test we looked at the CPU usage when using VDPAU decoding in MPlayer on the NVIDIA hardware, while the ATI graphics cards were run using the VA-API library with the XvBA back-end, which is AMD's method for accelerating video playback on Linux with their proprietary driver via the UVD2 engine.
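
As a hedged example of how those two decode paths would be exercised in MPlayer at the time: the NVIDIA side uses the vdpau video output with the VDPAU codec entries, while the ATI side needs the vaapi-patched MPlayer build together with the third-party xvba-video VA-API back-end. The filename is again a placeholder.

    # VDPAU decode on the NVIDIA cards (H.264 codec entry in MPlayer)
    mplayer -vo vdpau -vc ffh264vdpau big_buck_bunny_1080p_h264.mov

    # VA-API/XvBA decode on the ATI cards (requires the vaapi-patched MPlayer)
    mplayer -vo vaapi -va vaapi big_buck_bunny_1080p_h264.mov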

In all cases the CPU usage averaged out to less than 1% when playing back the 1080p H.264 Blender movie, with no graphics card being particularly better -- with VDPAU even a $20 CPU and $30 GPU would be good enough for handling HD video playback. What the CPU usage results could not show, however, is that during playback using the XvBA back-end on the ATI Radeon HD 5000 series hardware with Catalyst 10.7 there were artifacts coating the screen, and the experience would be unusable to anyone interested in actually watching the film.

While the CPU usage was not vastly different between the VA-API/XvBA and VDPAU decoding methods, NVIDIA is the hands-down winner with VDPAU in its proprietary driver. VDPAU has been around much longer than XvBA, works extremely well, continues to be improved upon in NVIDIA's frequent driver updates, can be exposed through a VA-API front-end, works well on mobile devices, and is widely supported in Linux multimedia applications from MythTV to MPlayer. VDPAU supports offloading the motion compensation, iDCT, VLD, and deblocking operations for MPEG-1, MPEG-2, MPEG-4 ASP, H.264 / MPEG-4 AVC, VC-1, and WMV3/WMV9 encoded files. VDPAU is also not known for playback problems like the incorrect rendering we experienced with XvBA and the other ATI video playback problems commonly talked about in our forums.

AMD's XvBA implementation right now is really just a mess. It took them more than a year after we exposed X-Video Bitstream Acceleration to make it usable in the Catalyst driver, and even then they did not expose the XvBA API itself but instead require another binary blob to be loaded: a third-party VA-API library that in turn taps the UVD2 engine via XvBA. XvBA supports motion compensation, iDCT, and VLD for MPEG-2, H.264 / MPEG-4 AVC, and VC-1 encoded files.
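
A quick, hedged way to see what each driver actually exposes on a given system is to query the decode capabilities directly: vdpauinfo lists the VDPAU decoder profiles on the NVIDIA side, while vainfo lists the VA-API profiles provided by the xvba-video back-end on the ATI side.

    # List supported VDPAU decoder profiles (NVIDIA proprietary driver)
    vdpauinfo

    # List supported VA-API profiles (ATI via the xvba-video back-end)
    vainfo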

