
NVIDIA Releases Standalone VDPAU Library


  • doghearthalo
    replied
    Subject:
    "vdpau"

    Category:
    GPU Tools

    Sub-Category:
    Graphics Driver Developers

    Status:
    Pending

    Ticket Details:

    I'm very disappointed that there is no support for VDPAU (or similar) for your GPUs in, for example, XBMC. I just bought an ATI HD card, but it now turns out to have been a waste of money. I can't play H.264 files with any hardware support from the GPU. Fortunately the card was cheap. I have always gone for ATI cards, but in the future I will go for NVIDIA if you don't go for open source code and let developers take advantage of the potential in the GPU.
    Comments:




    DEVREL (10/01/2009 1:16 PM)


    No plans to support it now or in the foreseeable future, as there was no interest from selected ISVs working on such projects.

    Leave a comment:


  • greg
    replied
    Originally posted by bridgman View Post
    - bitstream decode : not practical for shaders, inherently single-thread
    - reverse entropy : not considered practical for shaders but not sure if anyone has really tried
    Hm, just as I thought. Well, this sucks a lot for H.264; CABAC is quite a beast.
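    For the curious, this is roughly why that stage resists shaders. Below is a schematic binary arithmetic decoder in C (a sketch only: the struct names, the p_lps scaling and the state layout are made up for illustration and are not the real H.264 CABAC tables or context model). Every decoded bin updates the shared range/offset state that the next bin depends on, so the work cannot be spread across shader threads.

    Code:
    /* Schematic binary arithmetic decoder (not real CABAC): each bin
     * decision mutates range/offset, so bin N+1 cannot start before
     * bin N has finished -- the stage is inherently serial. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint32_t range;        /* current coding interval width        */
        uint32_t offset;       /* value read so far from the bitstream */
        const uint8_t *buf;
        size_t bitpos;         /* next bit position in buf             */
    } arith_dec;

    typedef struct {
        uint32_t p_lps;        /* scaled LPS probability (0..65535)    */
        int mps;               /* current most-probable symbol, 0 or 1 */
    } bin_ctx;

    static int read_bit(arith_dec *d)
    {
        int bit = (d->buf[d->bitpos >> 3] >> (7 - (d->bitpos & 7))) & 1;
        d->bitpos++;
        return bit;
    }

    /* Decode one bin: both branches and the renormalisation feed back
     * into d->range / d->offset, which the next call depends on. */
    static int decode_bin(arith_dec *d, bin_ctx *c)
    {
        uint32_t r_lps = (d->range * c->p_lps) >> 16;  /* LPS sub-interval */
        int bin;

        if (r_lps == 0)
            r_lps = 1;                                 /* keep interval valid */

        if (d->offset >= d->range - r_lps) {           /* LPS path */
            bin = !c->mps;
            d->offset -= d->range - r_lps;
            d->range = r_lps;
        } else {                                       /* MPS path */
            bin = c->mps;
            d->range -= r_lps;
        }
        while (d->range < (1u << 15)) {                /* renormalise */
            d->range <<= 1;
            d->offset = (d->offset << 1) | read_bit(d);
        }
        return bin;
    }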

    Leave a comment:


  • bridgman
    replied
    Originally posted by greg View Post
    Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipe can be sensibly accelerated with shaders?
    If we break the playback pipe into...

    DECODE
    - bitstream decode
    - reverse entropy
    - inverse transform
    - motion comp
    - deblocking
    RENDER
    - colour space conversion
    - deinterlacing
    - scaling
    - post-filtering

    ... then you get something like :

    - bitstream decode : not practical for shaders, inherently single-thread

    - reverse entropy : not considered practical for shaders but not sure if anyone has really tried

    - inverse transform : doable on shaders but not a great fit and probably not worth it

    - motion comp : good fit for shaders

    - deblocking : good fit for shaders

    The good news is that the last two steps are usually the most computationally expensive as well, so accelerating those stages on GPU should make a big difference in CPU utilization.

    If you look at page 5 of this (2005) paper you can see a rough breakdown of where the CPU cycles were going at the time.



    I believe that paper lumped bitstream decode in with reverse entropy.

    You generally want to pick a point in the pipe and accelerate everything after that, in order to avoid having to push data back and forth between CPU and GPU. Since all of the subsequent steps (scaling, colour space conversion, post-filtering, de-interlacing) are usually done on GPU anyways this all works nicely.
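    To make the "good fit for shaders" point concrete, here is a toy motion-compensation pass in plain C (a sketch only: integer-pel, one vector per macroblock, and the function and parameter names are made up for illustration; real H.264 MC adds sub-pel interpolation and per-partition vectors). Each macroblock reads the reference frame and writes its own 16x16 region with no dependence on its neighbours, so the two outer loops map directly onto one shader invocation per block -- the opposite of the entropy stage above, where every symbol depends on the previous one.

    Code:
    #include <stdint.h>

    #define MB 16  /* macroblock size */

    static uint8_t clamp_u8(int v)
    {
        return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    /* Toy MC: predict each macroblock from the reference frame using its
     * motion vector, then add the residual from the inverse transform.
     * Every (mbx, mby) iteration is independent -> trivially parallel. */
    void motion_compensate(uint8_t *dst, const uint8_t *ref,
                           const int16_t *residual,
                           const int *mv_x, const int *mv_y, /* one per MB */
                           int width, int height)
    {
        int mbs_per_row = width / MB;

        for (int mby = 0; mby < height / MB; mby++) {      /* parallel on GPU */
            for (int mbx = 0; mbx < mbs_per_row; mbx++) {  /* parallel on GPU */
                int mb = mby * mbs_per_row + mbx;
                for (int y = 0; y < MB; y++) {
                    for (int x = 0; x < MB; x++) {
                        int px = mbx * MB + x, py = mby * MB + y;
                        int rx = px + mv_x[mb], ry = py + mv_y[mb];
                        /* clamp the motion-shifted coordinate to the frame */
                        if (rx < 0) rx = 0; else if (rx >= width)  rx = width  - 1;
                        if (ry < 0) ry = 0; else if (ry >= height) ry = height - 1;
                        dst[py * width + px] =
                            clamp_u8(ref[ry * width + rx] + residual[py * width + px]);
                    }
                }
            }
        }
    }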
    Last edited by bridgman; 19 September 2009, 05:17 PM.

    Leave a comment:


  • monraaf
    replied
    Why use shaders when you've got a whole block of the GPU dedicated to H.264 decoding? AMD needs to stop treating Linux users as second-class citizens and open up their XvBA API.

    Leave a comment:


  • nanonyme
    replied
    Originally posted by greg View Post
    Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipe can be sensibly accelerated with shaders?
    I suspect it's safe to assume enough that it shows. Even MC (post-processing, which afaik is also part of the pipeline) already drops CPU usage quite a bit; doing the decoding with shaders should help even more. Numbers will be available when someone writes it.

    Leave a comment:


  • greg
    replied
    Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipe can be sensibly accelerated with shaders?
    Last edited by greg; 19 September 2009, 04:08 PM.

    Leave a comment:


  • nanonyme
    replied
    Originally posted by myxal View Post
    Last time I checked, the documentation released by AMD lacked any info for the video decoder.
    On the other paw, developers seem to think we don't need anything more (at least for some level of video decoding acceleration). They'd be doing it with shaders. Someone just has to write it.
    Edit: Never mind, I didn't read to the end. Apparently bridgman did say this in the other thread.

    Leave a comment:


  • bridgman
    replied
    I replied here to minimize the thread-jacking

    Leave a comment:


  • m4rgin4l
    replied
    Originally posted by lbcoder View Post
    I don't believe that that is an entirely accurate statement. There are different levels of video acceleration... the difference is in how much of the decode process is accelerated. Right now we DO have acceleration -- though only very basic Xv. Playing a full-HD video right now *does* peg any CPU that isn't at least a fairly recent 2-core or better. Offloading a -- let's call it a -- "significant chunk" over to the GPU (even without using the video decoder junk in the GPU) will take a significant chunk of the processing off the CPU and hopefully make HD playback stable on even older 2-core processors (maybe even single-core ones).

    Now the question you need to ask yourself is this: how much acceleration do you really need? My "tv computer" is an older X2-3800 that I recently picked up for free + an RHD3650 ($40). HD video playback goes like this;
    720P single threaded: fairly OK with the occasional chop. Very watchable.
    720P multi-threaded: perfect.
    1080P single threaded: unwatchable, drops about 50%.
    1080P multi-threaded: fairly OK with the occasional chop. About the same as 720P single threaded.

    So how much acceleration do *I* need on this "$40" computer to make 1080P perfect? The answer is *not much*. And that's on old junk.

    Here's what bridgman has to say about video decode acceleration:
    http://www.phoronix.com/forums/showp...69&postcount=3
    You make a good point here. We shouldn't have to spend more than 50 bucks if all we want is to watch HD content.

    I think the problem is with people who spent 150 or more and want to get the most out of their hardware.

    Leave a comment:


  • myxal
    replied
    Originally posted by bridgman View Post
    I think you can take it for granted that video acceleration is coming to open source drivers. While we're not sure yet about UVD, there is already work being done on a shader-based acceleration stack.
    Are you saying we might see UVD, i.e. bitstream acceleration, in open-source drivers?
    Originally posted by bridgman View Post
    Cooper has been working on the r300g Gallium3D driver and will be including ymanton's XvMC-over-Gallium3D code as one of the test cases, and zrusin is planning to integrate that XvMC code into the xorg state tracker as well. Once XvMC is working, all the key bits of plumbing will be there and adding support for additional video standards (or APIs) will not require much in the way of hardware-specific knowledge.
    Sounds great. I recall there being some limitations on XvMC, so let me go straight to what I care about and need the stack to provide (note: according to reports on the web, VDPAU with nvidia does this): post-processing of the decoded video frames, which is needed to support current mplayer's implementation of subtitles, OSD, etc. Does XvMC even allow this?
    Originally posted by bridgman View Post
    Even moving MC (the largest consumer of CPU cycles) from CPU to GPU is likely to make the difference between one core not being sufficient to one core being enough for most users.
    The fad now is mobility: how does the power draw compare when using UVD versus when using shaders?
    Originally posted by bridgman View Post
    Wasn't this thread supposed to be about NVidia's new wrapper library ?
    Well, the library is a wrapper for various implementations, and we already know NVIDIA's implementation (mostly) works. We're just THAT eager to see other implementations working with hardware unaffected by Bumpgate.
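    Since that wrapper is what the news item is about, here is a minimal sketch of what sitting on top of the standalone library looks like in C (error handling trimmed; the 1080p H.264 High parameters are just placeholders). libvdpau exports essentially one entry point, vdp_device_create_x11, and every other function is fetched through get_proc_address from whichever backend is installed, which is what lets the same client code run against NVIDIA's implementation today and other implementations later.

    Code:
    /* Build with: gcc vdpau_probe.c -lvdpau -lX11 */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <vdpau/vdpau.h>
    #include <vdpau/vdpau_x11.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        VdpDevice device;
        VdpGetProcAddress *get_proc_address;

        /* The wrapper's only real export: ask it for a device and it
         * loads whatever backend driver is configured on this system. */
        if (vdp_device_create_x11(dpy, DefaultScreen(dpy),
                                  &device, &get_proc_address) != VDP_STATUS_OK) {
            fprintf(stderr, "no VDPAU backend available\n");
            return 1;
        }

        /* Everything else comes from the backend via get_proc_address. */
        VdpDecoderCreate *decoder_create;
        get_proc_address(device, VDP_FUNC_ID_DECODER_CREATE,
                         (void **)&decoder_create);

        VdpDecoder decoder;
        VdpStatus st = decoder_create(device, VDP_DECODER_PROFILE_H264_HIGH,
                                      1920, 1080, 16 /* max refs */, &decoder);
        printf("H.264 High 1080p decoder: %s\n",
               st == VDP_STATUS_OK ? "available" : "not supported");
        return 0;
    }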

    Leave a comment:
