Phoronix: AMD's X-Video Bitstream Acceleration
In early September we shared that UVD2 and XvMC support were coming to Linux and that two new library files had begun shipping with the ATI Catalyst driver: AMDXvBA and XvBAW. Earlier this month, Unified Video Decoder 2 (UVD2) support was then enabled by default in the Catalyst 8.10 driver. These video acceleration improvements to the ATI Linux driver aren't exactly end-user friendly yet, but today we have information on how those interested can begin using the X-Video Motion Compensation extension with their ATI hardware, along with what the XvBA extension will provide users in terms of advanced video acceleration that is very similar to Microsoft's DirectX Video Acceleration.
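For readers who want to try the XvMC route the article mentions, a typical MPlayer setup looks like the fragment below. The two option names are real MPlayer settings, but whether they actually work depends on your driver exposing XvMC, so treat this as an illustrative sketch rather than confirmed instructions from the article:

```
# ~/.mplayer/config -- route video output through XvMC and prefer the
# XvMC-accelerated MPEG-2 decoder (trailing comma allows fallback to
# software decoding when XvMC is unavailable)
vo=xvmc
vc=ffmpeg12mc,
```

Note that XvMC only covers MPEG-2 motion compensation; offloading H.264/VC-1 bitstream decode is what XvBA is meant to add.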
Well, it sounds interesting to say the least. By how much will it reduce CPU usage if I'm playing an HD video? (For example, a 1280x720 H.264 video.)
It still wouldn't matter to me, as I don't have any HD 4000 series card.
If it does bitstream acceleration, it will take virtually the entire video decode load off your CPU. In other words, the only work your CPU will be doing is formatting the bitstream to send it to the GPU and doing audio decode.
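The division of labor described above can be sketched in a few lines. This is a toy illustration only (all function names are hypothetical; real APIs such as XvBA or VA-API look nothing like this) showing that with bitstream acceleration the per-frame CPU work shrinks to demuxing and submission:

```python
# Toy sketch of GPU bitstream acceleration: the CPU never touches the
# compressed video data beyond pulling it out of the container and
# handing it to the GPU. All names here are hypothetical.

def demux(container_packet):
    """CPU side: extract the compressed bitstream from the container."""
    return container_packet["bitstream"]

def submit_to_gpu(bitstream):
    """CPU side: queue the untouched bitstream for the GPU decoder.
    Entropy decode, motion compensation, and deblocking would all
    run on the GPU, not here."""
    return {"gpu_job": bitstream}

packets = [{"bitstream": b"frame%d" % i} for i in range(3)]
jobs = [submit_to_gpu(demux(p)) for p in packets]
print(len(jobs))  # 3 frames queued with near-zero CPU decode work
```

Audio decode stays on the CPU in this model, which is why the post above singles it out as the remaining work.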
Good news. At this tempo, the official AMD driver for Linux will soon match its Windows performance ;-)
I have one question about DirectX/OpenGL... As I see it, both are high-level APIs, and their functions in turn perform some other call into the video driver.
[App] -> D3D call -> [D3D Lib] -> another call, acceptable by driver -> [DRIVER] -> final bits, sent to HW -> [Video card]
Is that a correct understanding?
What I would like to know is whether the format of that "another call, acceptable by driver" is common to all video cards. (I believe so, because nobody requires me to reinstall DirectX if I buy another card released _after_ the installed version of Direct3D...)
Another question: how much abstraction is there in that "another call, acceptable by driver" layer? Is it close to the raw hardware calls?
I'm also wondering why Wine emulates Direct3D via OpenGL. Wouldn't it be simpler to send the driver the same calls that MS's DirectX sends?
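The layered model in the diagram above can be sketched as follows. This is a hypothetical illustration (class and method names are made up, not any real API): the application talks to a stable high-level library, and the library forwards each call through a common driver interface that every vendor implements, which is why swapping cards does not require reinstalling the API:

```python
# Toy sketch of: [App] -> [D3D-like lib] -> common driver interface -> [driver].
# All names are hypothetical.

class DriverInterface:
    """The 'another call, acceptable by driver' layer: a contract
    every vendor's driver implements."""
    def draw_triangles(self, vertex_buffer): ...

class VendorADriver(DriverInterface):
    """One vendor's driver: translates the common call into its own
    hardware command stream."""
    def draw_triangles(self, vertex_buffer):
        return f"vendor-A hw commands for {len(vertex_buffer)} verts"

class D3DLikeAPI:
    """The high-level library the application links against."""
    def __init__(self, driver: DriverInterface):
        self.driver = driver

    def DrawPrimitive(self, verts):
        # Validate and batch, then forward through the common interface.
        return self.driver.draw_triangles(verts)

api = D3DLikeAPI(VendorADriver())
print(api.DrawPrimitive([(0, 0), (1, 0), (0, 1)]))
```

As to the Wine question: on Windows that middle layer (the display driver interface) is defined by Microsoft and only implemented by Windows drivers. Linux drivers do not implement it, so Wine cannot hand them Direct3D's lower-level calls; instead it translates Direct3D at the API level into OpenGL, the cross-vendor API Linux drivers do expose.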
Last edited by mityukov; 10-29-2008 at 12:06 PM.
It is more than sad if I understand it correctly.
What I understand is that today there is no option to decode HD movies on the GPU.
Recently I installed Ubuntu on an HP xw4200 powered by an Intel Pentium 4 551 @ 3.4GHz.
This machine is incapable of smooth playback of a 720p movie.
I hate the idea that I'll have to install XP Media Center.
I don't know what to think of this. Generally I think it's good that there finally is a solution for video acceleration (for more than just MPEG-2). But I just can't like the way it's done. It's put in as a closed-source feature that will probably never be opened up or shared with anyone. So what then? Is each graphics card manufacturer going to code its own video acceleration API? That will be a nice mess. I would have preferred it if AMD had worked together with Intel and NVIDIA on a proper, standard implementation of video acceleration for Linux. With VA-API there is at least a tiny something to build on, so I don't understand why AMD didn't just finish that. Of course it's easier to just "reuse" the Windows bits, but that normally never works well if you take parts from one OS and drop them 1:1 into another. I would say Linux is too different from Windows for this.
I'm afraid that road would take considerably more time than the current path. Hopefully a future Linux standard will bring enough benefits for AMD/ATI that they consider building on it. For now this is the most pragmatic approach they could choose, from my point of view. In the long run there will be a standard for sure.
Originally Posted by bash
Will using XvMC remove video tearing?
Sad to know that, like CrossFire, features are introduced only for the newest cards...
It would be even sadder if this feature is not backported!