Originally posted by rjwaldren
We normally describe the video playback stack in two parts: decode and render (or presentation). Decode goes from a bitstream representation to some kind of YCbCr image at the video's native resolution; render/presentation goes from that image to a filtered RGB image at the resolution you want on the screen. The UVD only handles decode functions; everything else is done in shaders anyway.
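To make the render/presentation half concrete, here is a minimal sketch of the per-pixel colour-space math that step performs. This is illustrative only: real drivers run this on the GPU in a shader, usually fused with scaling and filtering, and the exact coefficients depend on the colour matrix in use (BT.601 studio-swing is assumed here).

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one studio-range (Y 16-235, Cb/Cr 16-240) BT.601 YCbCr
    pixel to 8-bit RGB -- the kind of conversion the render stage does
    in shaders after UVD hands off the decoded YCbCr image."""
    c = y - 16       # remove the luma offset
    d = cb - 128     # center the chroma components
    e = cr - 128
    r = 1.164 * c + 1.596 * e
    g = 1.164 * c - 0.392 * d - 0.813 * e
    b = 1.164 * c + 2.017 * d
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)
```

For example, studio black (16, 128, 128) maps to RGB (0, 0, 0) and studio white (235, 128, 128) maps to (255, 255, 255); a shader does the same arithmetic in parallel for every pixel while also resampling to the display resolution.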
There are a variety of acceleration APIs available for the render operations, including OpenGL, Xv, the bottom end of VA-API and VDPAU, various proprietary APIs, and software rendering to an X11 surface. Other than software rendering, they all use roughly the same amount of CPU, although OpenGL tends to use a bit more because some general-purpose code sits between the API call and the shader invocation. The difference is pretty small, though.
The important point here is that you are not "leaving hardware capability unused" - UVD goes from an H.264/VC-1 bitstream to YCbCr at native resolution, and does IDCT/MC processing for MPEG-2. That's all it does on any OS. I believe the same is true for our competitors' hardware as well.
I made a long post a few months ago showing the entire stack; I'll see if I can find it and link to it here.