With the Gallium3D status update
provided during XDS 2008
and other recent work for this advanced 3D graphics architecture (such as enhanced debugging capabilities
), we were left wondering where generic GPU video decoding stands in its implementation. Today we have an answer.
With a modern CPU, offloading the decoding of standard-definition video to the graphics processor can be somewhat irrelevant, but to facilitate this there is the X-Video Motion Compensation (XvMC)
API. XvMC is an extension to the X-Video
interface that adds support for offloading motion compensation and IDCT for MPEG streams to the graphics processor. Work is underway, though, to have XvMC support more video standards,
and when we talked with Keith Packard
earlier this month, he said he still sees this happening since VA-API
isn't ready for the limelight. Another problem with XvMC, though, is its limited adoption.
Intel has open-source XvMC support
for its newer hardware, and this X-Video Motion Compensation support continues to be refined
. On the NVIDIA side, their binary driver had supported XvMC, but that support was dropped with the GeForce 8 series
, leaving anyone with a newer GPU without this form of GPU video acceleration. The open-source Nouveau
driver developers, however, are working on XvMC support for their reverse-engineered driver.
Owners of ATI/AMD hardware using either the open- or closed-source drivers have been left without any form of XvMC support. However, AMD is working on UVD2 support for Linux
to accelerate standard and high-definition video playback. Through their newly-introduced XvBA library, it also looks like they are working on XvMC support. For now, though, the open-source ATI drivers are left in the dark.
Wouldn't it be nice though if XvMC would "just work" across all video hardware? As part of his 2008 Google Summer of Code project, Younes Manton has been trying to develop a universal implementation for GPU-accelerated video decoding. As we shared in an earlier update
, Younes has been developing a generic video decoding mechanism that will work with any driver using the Gallium3D framework
, or even with its softpipe software driver for that matter. He is writing an XvMC front-end for Gallium3D that handles much of the video decoding using the shaders on the graphics card.
The last project update was over a month ago, and since then the Google Summer of Code has officially ended without an accompanying update. However, we've heard back from Younes Manton on where he is at with his generic GPU-accelerated video decoding work and what's left to be accomplished. In an email to Phoronix, Younes shares that his project was successful and passed GSoC. He has fully implemented the XvMC API, with the exception of interlaced video and sub-pictures. While some low-level optimizations have also been made to the decoding process, rendering is still slow. Younes explains that the triangle counts, fill-rate, and texture-fetch numbers are all within the limits of the hardware being used, but the problem may originate with driver or memory-management issues.
Younes is also exploring hardware-based IDCT decoding where possible. Stephane Marchesin has been working on this project too, and they hope to push some NV40 code into the Gallium3D driver. Finally, this generic GPU video code is ready to be moved from Nouveau to Mesa. While the Google Summer of Code has ended, Younes Manton is continuing to work on this code.
It's certainly nice to see this project progressing, but it's unfortunate that the rendering process still isn't at a sufficient speed. We hope that will change, though, as all of this has come about in just the past couple of months.