NVIDIA Releases Standalone VDPAU Library


  • #16
    I replied here to minimize the thread-jacking


    • #17
      Originally posted by myxal View Post
Last time I checked, the documentation released by AMD lacked any info on the video decoder.
On the other paw, developers seem to think we don't need anything more (at least for some level of video-decoding acceleration); they'd do it with shaders. Someone just has to write it.
Edit: Never mind, I didn't read to the end. Apparently bridgman did say this in the other thread.


      • #18
        Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipe can be sensibly accelerated with shaders?
        Last edited by greg; 09-19-2009, 04:08 PM.


        • #19
          Originally posted by greg View Post
          Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipe can be sensibly accelerated with shaders?
I suspect it's safe to assume enough that it shows. Even motion compensation (which is afaik also part of the pipeline) already drops CPU usage quite a bit; doing the decoding with shaders should help even more. Numbers will be available when someone writes it.


          • #20
Why use shaders when you have a whole block of the GPU dedicated to H.264 decoding? AMD needs to stop treating Linux users as second-class citizens and open up their XvBA API.


            • #21
              Originally posted by greg View Post
              Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipe can be sensibly accelerated with shaders?
              If we break the playback pipe into...

              DECODE
              - bitstream decode
              - reverse entropy
              - inverse transform
              - motion comp
              - deblocking
              RENDER
              - colour space conversion
              - deinterlacing
              - scaling
              - post-filtering

... then you get something like:

              - bitstream decode : not practical for shaders, inherently single-thread

              - reverse entropy : not considered practical for shaders but not sure if anyone has really tried

              - inverse transform : doable on shaders but not a great fit and probably not worth it

              - motion comp : good fit for shaders

              - deblocking : good fit for shaders

              The good news is that the last two steps are usually the most computationally expensive as well, so accelerating those stages on GPU should make a big difference in CPU utilization.
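To see why motion comp fits shaders so well: each block's prediction is independent of every other block in the frame, so the work is embarrassingly parallel. A minimal NumPy sketch of the idea (the block size, array shapes, and `motion_comp` helper are illustrative, not from any real decoder; edge clamping for out-of-frame vectors is omitted for brevity):

```python
import numpy as np

def motion_comp(ref, residual, mvs, B=8):
    """Reconstruct a frame: for each BxB block, fetch the reference
    block displaced by that block's motion vector and add the residual.
    Every block is independent, so a GPU can run one thread group per
    block; this serial loop just mirrors the data layout."""
    h, w = residual.shape
    out = np.empty_like(residual)
    for by in range(0, h, B):
        for bx in range(0, w, B):
            dy, dx = mvs[by // B, bx // B]  # motion vector for this block
            out[by:by+B, bx:bx+B] = (
                ref[by+dy:by+dy+B, bx+dx:bx+dx+B]
                + residual[by:by+B, bx:bx+B]
            )
    return out
```

On a GPU the two outer loops disappear entirely: each block (or pixel) is handled by its own shader invocation.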

              If you look at page 5 of this (2005) paper you can see a rough breakdown of where the CPU cycles were going at the time.

              http://ati.amd.com/products/pdf/h264_whitepaper.pdf

              I believe that paper lumped bitstream decode in with reverse entropy.

              You generally want to pick a point in the pipe and accelerate everything after that, in order to avoid having to push data back and forth between CPU and GPU. Since all of the subsequent steps (scaling, colour space conversion, post-filtering, de-interlacing) are usually done on GPU anyways this all works nicely.
              Last edited by bridgman; 09-19-2009, 05:17 PM.
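The render-side steps listed above (colour space conversion, scaling, post-filtering, de-interlacing) are all per-pixel, which is why they already live on the GPU. Colour-space conversion, for instance, is just a fixed 3x3 matrix applied at every pixel; a shader does the same thing per fragment. A NumPy sketch, assuming BT.601 full-range coefficients:

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """BT.601 full-range YUV -> RGB. Purely element-wise, so it
    parallelizes perfectly: one fragment-shader invocation per pixel."""
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0  # chroma is centred on 128
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```

Because every output pixel depends only on its own inputs, there is no serial bottleneck anywhere on the render side of the pipe.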


              • #22
                - bitstream decode : not practical for shaders, inherently single-thread
                - reverse entropy : not considered practical for shaders but not sure if anyone has really tried
Hm, just as I thought. Well, that sucks a lot for H.264; CABAC is quite a beast.
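The reason CABAC resists parallel decoding is structural: every decoded bin narrows the arithmetic-coding interval, and the next bin cannot be interpreted until that interval (and the context state) has been updated. A toy binary arithmetic coder showing just the dependency chain (static probability and floats, short inputs only; the real CABAC engine uses integer renormalization and adaptive contexts):

```python
def encode(bits, p1=0.6):
    """Toy binary arithmetic encoder with a static probability p1
    for symbol 1. Returns one number inside the final interval."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * (1 - p1)
        if b:
            low = split   # symbol 1 takes the upper part of the interval
        else:
            high = split  # symbol 0 takes the lower part
    return (low + high) / 2

def decode(code, n, p1=0.6):
    """Each step needs the interval left by the previous step; that
    chain is what makes entropy decoding inherently serial."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        split = low + (high - low) * (1 - p1)
        bit = 1 if code >= split else 0
        out.append(bit)
        if bit:
            low = split
        else:
            high = split
    return out
```

Notice that `split` at step N is a function of every bit decoded at steps 1..N-1, so there is no way to hand different symbols to different shader threads.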


                • #23
                  Subject:
                  "vdpau"

                  Category:
                  GPU Tools

                  Sub-Category:
                  Graphics Driver Developers

                  Status:
                  Pending

                  Ticket Details:

I'm very disappointed that there is no support for VDPAU (or something similar) for your GPUs in, for example, XBMC. I just bought an ATI HD card, but it now turns out to have been a waste of money. I can't play H.264 files with any hardware support from the GPU. Fortunately the card was cheap. I have always gone for ATI cards, but in the future I will go for NVIDIA if you don't go for open source and let developers take advantage of the potential in the GPU.
Comments:

DEVREL (10/01/2009 1:16 PM)


No plans to support it now or in the foreseeable future, as there was no interest from selected ISVs working on such projects.
