Subject:
"vdpau"
Category:
GPU Tools
Sub-Category:
Graphics Driver Developers
Status:
Pending
Ticket Details:
I'm very disappointed that there is no support for VDPAU (or similar) for your GPUs in, for example, XBMC. I just bought an ATI HD card, but it now turns out to have been a waste of money. I can't play H.264 files with any hardware support from the GPU. Fortunately the card was cheap. I have always gone for ATI cards, but in the future I will go for NVIDIA if you don't go open source and let developers take advantage of the potential in the GPU.
Comments:
DEVREL (10/01/2009 1:16 PM)
No plans to support it now or in the foreseeable future, as there was no interest from the selected ISVs working on such projects.
NVIDIA Releases Standalone VDPAU Library
-
Originally posted by greg:
Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipe can be sensibly accelerated with shaders?
DECODE
- bitstream decode
- reverse entropy
- inverse transform
- motion comp
- deblocking
RENDER
- colour space conversion
- deinterlacing
- scaling
- post-filtering
... then you get something like:
- bitstream decode : not practical for shaders, inherently single-thread
- reverse entropy : not considered practical for shaders but not sure if anyone has really tried
- inverse transform : doable on shaders but not a great fit and probably not worth it
- motion comp : good fit for shaders
- deblocking : good fit for shaders
The good news is that the last two steps are usually the most computationally expensive as well, so accelerating those stages on GPU should make a big difference in CPU utilization.
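To give a taste of why deblocking parallelises well, here is a greatly simplified sketch (this is NOT the actual H.264 in-loop filter; the function name, blend strength, and block size are all illustrative assumptions) that blends the pixel columns on either side of each block boundary:

```python
import numpy as np

def deblock_vertical_edges(frame, bsize=16, strength=0.5):
    """Toy deblocking sketch: blend the two pixel columns on either
    side of each vertical block boundary.  Each edge is filtered
    independently of every other edge, which is why the real filter
    is considered a good fit for shaders."""
    out = frame.astype(np.float64).copy()
    h, w = frame.shape
    for x in range(bsize, w, bsize):          # each block boundary
        left, right = out[:, x - 1].copy(), out[:, x].copy()
        avg = (left + right) / 2.0
        out[:, x - 1] = left + strength * (avg - left)
        out[:, x] = right + strength * (avg - right)
    return out

# A frame with a hard step exactly at the block boundary x = 16.
frame = np.zeros((4, 32))
frame[:, 16:] = 100.0
smoothed = deblock_vertical_edges(frame)
print(smoothed[0, 15], smoothed[0, 16])  # 25.0 75.0 (edge softened)
```

Every boundary column is an independent computation, so on a GPU each one would simply be a separate shader invocation.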
If you look at page 5 of this (2005) paper you can see a rough breakdown of where the CPU cycles were going at the time.
I believe that paper lumped bitstream decode in with reverse entropy.
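Motion compensation, flagged above as a good shader fit and one of the biggest cycle consumers, is essentially data-parallel block copying: every macroblock is reconstructed independently from a reference frame at a motion-vector offset. A minimal numpy sketch (function and parameter names are illustrative, not from any real decoder):

```python
import numpy as np

def motion_compensate(ref, mv_y, mv_x, by, bx, bsize=16):
    """Copy one macroblock from the reference frame at a motion-vector
    offset -- the core of inter-frame motion compensation.  Blocks are
    independent of one another, which is why this stage maps well onto
    massively parallel shader hardware."""
    y0 = by * bsize + mv_y
    x0 = bx * bsize + mv_x
    return ref[y0:y0 + bsize, x0:x0 + bsize].copy()

# Toy 64x64 "reference frame" with a gradient so offsets are visible.
ref = np.arange(64 * 64, dtype=np.int32).reshape(64, 64)

# Reconstruct block (1, 1) of the current frame with motion vector (+2, +3).
block = motion_compensate(ref, mv_y=2, mv_x=3, by=1, bx=1)
print(block.shape)   # (16, 16)
print(block[0, 0])   # ref[18, 19] == 18*64 + 19 == 1171
```

Real codecs add sub-pixel interpolation and weighted prediction on top of this, but the per-block independence is what matters for GPU offload.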
You generally want to pick a point in the pipe and accelerate everything after that, in order to avoid having to push data back and forth between CPU and GPU. Since all of the subsequent steps (scaling, colour space conversion, post-filtering, de-interlacing) are usually done on GPU anyways, this all works nicely.
Last edited by bridgman; 19 September 2009, 05:17 PM.
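Colour space conversion, one of those render-side steps long done on the GPU (even plain Xv does it), is just a per-pixel affine transform. A minimal CPU-side sketch using BT.601 full-range coefficients (coefficients assumed for illustration; a real player must match the stream's matrix and range):

```python
import numpy as np

def yuv_to_rgb(y, cb, cr):
    """Per-pixel BT.601 full-range YCbCr -> RGB conversion.  Every
    pixel is independent, so this is a textbook shader workload."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# Sanity check: mid grey (neutral chroma) should stay grey.
y  = np.full((2, 2), 128.0)
cb = np.full((2, 2), 128.0)
cr = np.full((2, 2), 128.0)
rgb = yuv_to_rgb(y, cb, cr)
print(rgb[0, 0])  # [128 128 128]
```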
-
Why use shaders when you've got a whole block of the GPU dedicated to H.264 decoding? AMD needs to stop treating Linux users as second-class citizens and open up their XvBA API.
-
Originally posted by greg:
Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipe can be sensibly accelerated with shaders?
-
Originally posted by myxal:
Last time I checked, the documentation released by AMD lacked any info for the video decoder.
Edit: Never mind, didn't read until the end. Apparently bridgman did say this in the other thread.
-
Originally posted by lbcoder:
I don't believe that that is an entirely accurate statement. There are different levels of video acceleration; the difference is in how much of the decode process is accelerated. Right now we DO have acceleration, though only very basic Xv. Playing a full-HD video right now *does* peg any CPU that isn't at least a fairly recent 2-core or better. Offloading a, let's call it a, "significant chunk" over to the GPU (even without using the video decoder junk in the GPU) will take a significant chunk of the processing off the CPU, hopefully making HD playback stable on even older 2-core processors (maybe even single-cores).
Now the question you need to ask yourself is this: how much acceleration do you really need? My "tv computer" is an older X2-3800 that I recently picked up for free + an RHD3650 ($40). HD video playback goes like this:
720p single-threaded: fairly OK with the occasional chop. Very watchable.
720p multi-threaded: perfect.
1080p single-threaded: unwatchable, drops about 50%.
1080p multi-threaded: fairly OK with the occasional chop. About the same as 720p single-threaded.
So how much acceleration do *I* need on this "$40" computer to make 1080p perfect? The answer is *not much*. And that's on old junk.
Here's what bridgman has to say about video decode acceleration:
http://www.phoronix.com/forums/showp...69&postcount=3
I think the problem is with people who spent $150 or more and want to get the most out of their hardware.
-
Originally posted by bridgman:
I think you can take it for granted that video acceleration is coming to open source drivers. While we're not sure yet about UVD, there is already work being done on a shader-based acceleration stack.
Originally posted by bridgman:
Cooper has been working on the r300g Gallium3D driver and will be including ymanton's XvMC-over-Gallium3D code as one of the test cases, and zrusin is planning to integrate that XvMC code into the xorg state tracker as well. Once XvMC is working, all the key bits of plumbing will be there, and adding support for additional video standards (or APIs) will not require much in the way of hardware-specific knowledge.
Originally posted by bridgman:
Even moving MC (the largest consumer of CPU cycles) from CPU to GPU is likely to make the difference between one core not being sufficient and one core being enough for most users.
Originally posted by bridgman:
Wasn't this thread supposed to be about NVidia's new wrapper library?