Intel Sandy Bridge VA-API Video Acceleration Performance

Written by Michael Larabel in Display Drivers on 7 March 2011 at 12:05 PM EST. Page 2 of 5.

Here is all of the CPU usage data from the video playback tests with VA-API, X-Video, and VDPAU across the different graphics adapters.

There are many lines there, but as you can see, the CPU usage with VA-API and VDPAU is far lower than with X-Video. This is no surprise, since this common X extension does not offload much work to the GPU, so the CPU is left with a much greater burden. Even so, with the wonderfully fast quad-core Core i5 2500K processor, the CPU usage with X-Video is still only 7~9%. Between the different drivers / GPUs, the X-Video performance does not vary much.

Breaking the Phoronix Test Suite data down to just Sandy Bridge VA-API and NVIDIA VDPAU on the GeForce 9500GT, we clearly see that the CPU usage with Sandy Bridge's onboard graphics is actually lower than with the Video Decode and Presentation API for Unix on the discrete NVIDIA card, though the average difference is small: 2% versus 3.2%. What is more noticeable in this data is that there are more dramatic CPU usage spikes over the course of playing the "Big Buck Bunny" 1080p H.264 clip with VDPAU. With VA-API, the CPU usage topped out at 4.9% (no other CPU work was going on in the background during any of this video testing), while the NVIDIA driver spiked to nearly 13%, and these spikes were quite frequent with NVIDIA VDPAU. Too bad we cannot see how AMD's XvBA implementation in the Catalyst driver performs through the VA-API front-end, but it is too buggy right now. Ideally, now that AMD has opened up the XvBA API, we will see XvBA improvements and applications supporting it directly rather than relying upon the closed-source VA-API front-end.
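The per-second CPU readings charted here come from sampling system load during playback. As a rough sketch of how such numbers can be derived on Linux (this is an illustrative approximation, not the Phoronix Test Suite's actual monitoring code), utilization can be computed from successive snapshots of the aggregate `cpu` line in /proc/stat:

```python
def parse_proc_stat(line):
    """Parse an aggregate 'cpu ...' line from /proc/stat into
    (idle_ticks, total_ticks). The fields after the 'cpu' label are
    user, nice, system, idle, iowait, irq, softirq, ..."""
    values = [int(v) for v in line.split()[1:]]
    idle = values[3] + values[4]  # idle + iowait count as not busy
    return idle, sum(values)

def cpu_utilization(prev, curr):
    """Percent of CPU time spent busy between two snapshots."""
    idle = curr[0] - prev[0]
    total = curr[1] - prev[1]
    return 100.0 * (1.0 - idle / total) if total else 0.0

# Example with two hypothetical snapshots taken one second apart:
prev = parse_proc_stat("cpu 100 0 100 800 0 0 0 0 0 0")
curr = parse_proc_stat("cpu 150 0 150 900 0 0 0 0 0 0")
print(cpu_utilization(prev, curr))  # 50.0% busy over the interval
```

Polling this once per second while the player runs yields the kind of time series plotted above, including the short-lived spikes that averages alone would hide.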

