NVIDIA To Create Protocol For VDPAU

  • #21
    Originally posted by jscurtu View Post
    Well, this will bring high-quality, accelerated video playback...
    VDPAU can be described as the X Window System equivalent of Microsoft's DxVA (DirectX Video Acceleration) API for Windows.

    I would like to see this as part of Gallium3D; that would be the best way, since all cards that use a Gallium3D driver would automatically benefit and get VDPAU support!
    Thanks jscurtu, but I was (badly) referring to the "standalone VDPAU library" part.


    • #22
      Originally posted by _txf_ View Post
      In which case, the limitation then is with the media players and not with VDPAU itself? The VDPAU renderer is nowhere near as flexible as the standard renderers in MPlayer (I want to be able to add noise).
      Yes, but you can't blame the renderer/video output. Adding noise etc. is usually done with video filters, and traditionally these run on the CPU. With accelerated video decoding this means a costly GPU -> host -> GPU round trip for the decoded frames, and currently MPlayer doesn't support that. If you use software decoding, you can do as much video filtering as you want, even with the VDPAU renderer, of course. In fact the VDPAU renderer can *de*noise and blur/sharpen in its postprocessing; unfortunately it can't add any noise.

      I believe in the early days ASS subtitles did not work because everything had to be piped into VDPAU in a particular format. I didn't try it out then, so I'm just repeating what NVIDIA and other people have said.
      ASS/SSA subtitles have been supported pretty much from the beginning. OSD/subtitles use the blending functionality of VDPAU, which makes it quite easy to display something on top of the video.
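      For reference, here is a rough sketch of what that postprocessing looks like at the VDPAU API level. This is just an illustration, not MPlayer's actual code; the video size and the 0.5/0.3 strength values are arbitrary examples and error handling is mostly omitted. The renderer creates a VdpVideoMixer with the noise-reduction and sharpness features, switches them on, and sets their levels as attributes:

      Code:
/* Minimal sketch: enable VDPAU's built-in noise reduction and sharpness
 * postprocessing on a video mixer. Not taken from any real player. */
#include <stdint.h>
#include <stdio.h>
#include <X11/Xlib.h>
#include <vdpau/vdpau.h>
#include <vdpau/vdpau_x11.h>

int main(void)
{
    Display *x11 = XOpenDisplay(NULL);
    if (!x11)
        return 1;

    /* Create the device; every other entry point is fetched through the
     * returned VdpGetProcAddress pointer. */
    VdpDevice device;
    VdpGetProcAddress *get_proc_address;
    if (vdp_device_create_x11(x11, DefaultScreen(x11), &device,
                              &get_proc_address) != VDP_STATUS_OK)
        return 1;

    VdpVideoMixerCreate *mixer_create;
    VdpVideoMixerSetFeatureEnables *set_feature_enables;
    VdpVideoMixerSetAttributeValues *set_attribute_values;
    get_proc_address(device, VDP_FUNC_ID_VIDEO_MIXER_CREATE,
                     (void **)&mixer_create);
    get_proc_address(device, VDP_FUNC_ID_VIDEO_MIXER_SET_FEATURE_ENABLES,
                     (void **)&set_feature_enables);
    get_proc_address(device, VDP_FUNC_ID_VIDEO_MIXER_SET_ATTRIBUTE_VALUES,
                     (void **)&set_attribute_values);

    /* Request the postprocessing features when creating the mixer. */
    VdpVideoMixerFeature features[] = {
        VDP_VIDEO_MIXER_FEATURE_NOISE_REDUCTION,
        VDP_VIDEO_MIXER_FEATURE_SHARPNESS,
    };
    VdpVideoMixerParameter params[] = {
        VDP_VIDEO_MIXER_PARAMETER_VIDEO_SURFACE_WIDTH,
        VDP_VIDEO_MIXER_PARAMETER_VIDEO_SURFACE_HEIGHT,
        VDP_VIDEO_MIXER_PARAMETER_CHROMA_TYPE,
    };
    uint32_t width = 1920, height = 1080;        /* example video size */
    VdpChromaType chroma = VDP_CHROMA_TYPE_420;
    const void *param_values[] = { &width, &height, &chroma };

    VdpVideoMixer mixer;
    mixer_create(device, 2, features, 3, params, param_values, &mixer);

    /* Switch the features on... */
    VdpBool enables[] = { VDP_TRUE, VDP_TRUE };
    set_feature_enables(mixer, 2, features, enables);

    /* ...and pick the strength: noise reduction is 0..1, sharpness is
     * -1..1 (negative values blur). There is no "add noise" knob. */
    float denoise = 0.5f, sharpen = 0.3f;
    VdpVideoMixerAttribute attrs[] = {
        VDP_VIDEO_MIXER_ATTRIBUTE_NOISE_REDUCTION_LEVEL,
        VDP_VIDEO_MIXER_ATTRIBUTE_SHARPNESS_LEVEL,
    };
    const void *attr_values[] = { &denoise, &sharpen };
    set_attribute_values(mixer, 2, attrs, attr_values);

    printf("video mixer %u configured\n", (unsigned)mixer);
    return 0;
}

      The OSD/subtitle blending goes through a different path: the frame ends up in a VdpOutputSurface and subtitle bitmaps are composited on top with VdpOutputSurfaceRenderBitmapSurface, which accepts a VdpOutputSurfaceRenderBlendState describing the blend equation.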


      • #23
        Originally posted by greg View Post
        Yes, but you can't blame the renderer/video output. Adding noise etc. is usually done with video filters, and traditionally these run on the CPU. With accelerated video decoding this means a costly GPU -> host -> GPU round trip for the decoded frames, and currently MPlayer doesn't support that. If you use software decoding, you can do as much video filtering as you want, even with the VDPAU renderer, of course. In fact the VDPAU renderer can *de*noise and blur/sharpen in its postprocessing; unfortunately it can't add any noise.
        True... it would indeed be a bad thing. However, with compositing there seems to be a stage where things go through the CPU anyway, though that can probably be attributed to X rather than to VDPAU specifically. When watching 1080p videos with compositing and VDPAU enabled I get something like 20~30% CPU usage (mostly from Xorg) at the lower power states (800 MHz~1.2 GHz); without compositing I get less than 10%.

        I wonder how video is handled by the compositors on Windows and OS X, as they don't seem to waste CPU cycles when videos are running.


        • #24
          The Win and MacOS graphics stacks were designed around compositing from day one, so it's not hard to keep all the video buffers in VRAM.

          In the X/DRI world compositors are optional extras, implemented using standard "application" APIs which don't know about buffers in VRAM.

          Most of the work right now is building the lower level code for the "new stack", but I imagine compositor integration will be relatively high on the priority list after KMS/DRI2/Gallium become broadly available. In the meantime, Wayland will probably offer a good example of what can be done if the compositor is fully integrated into the stack.
          Last edited by bridgman; 20 September 2009, 07:43 PM.

          • #25
            Originally posted by Jimmy View Post
            Now if we could only get this into Flash since the world of web TV is bent on cramming it into flash (and that fucking Move Media Player plug-in... DAMN YOU ABC!!). I know, I'm a dreamer.
            You are not a dreamer. This is working under Linux on AMD and NVIDIA platforms and possibly the Intel Moorestown platform. I am preparing the packages. Of course, this uses VA API. ;-)

            Note: this only accelerates Flash HD (H.264) videos, at VLD level.
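            To put "at VLD level" in context: VLD is the VA-API entrypoint where the application hands compressed slice data to the driver, which performs the full decode. Here is a minimal, untested sketch (not the actual Flash/VA bridge I am packaging) of how an application checks for and creates such a configuration for H.264:

            Code:
/* Sketch: set up a VA-API decode config for H.264 at the VLD
 * (slice-level) entrypoint. Error handling kept to a minimum. */
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <va/va.h>
#include <va/va_x11.h>

int main(void)
{
    Display *x11 = XOpenDisplay(NULL);
    if (!x11)
        return 1;

    VADisplay dpy = vaGetDisplay(x11);
    int major, minor;
    if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS)
        return 1;
    printf("VA-API %d.%d, driver: %s\n", major, minor,
           vaQueryVendorString(dpy));

    /* Does the driver expose H.264 (High profile) decoding at VLD level? */
    int num = 0, has_vld = 0, i;
    VAEntrypoint *entrypoints =
        malloc(vaMaxNumEntrypoints(dpy) * sizeof(*entrypoints));
    vaQueryConfigEntrypoints(dpy, VAProfileH264High, entrypoints, &num);
    for (i = 0; i < num; i++)
        if (entrypoints[i] == VAEntrypointVLD)
            has_vld = 1;
    if (!has_vld) {
        fprintf(stderr, "no VLD entrypoint for H.264 High\n");
        return 1;
    }

    /* Create the decode configuration; surfaces, a context and the
     * per-frame picture/slice parameter buffers would come next. */
    VAConfigAttrib attrib;
    attrib.type = VAConfigAttribRTFormat;
    vaGetConfigAttributes(dpy, VAProfileH264High, VAEntrypointVLD,
                          &attrib, 1);

    VAConfigID config;
    vaCreateConfig(dpy, VAProfileH264High, VAEntrypointVLD, &attrib, 1,
                   &config);
    printf("H.264 VLD config created\n");
    return 0;
}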


            • #26
              Originally posted by gbeauche View Post
              This is working under Linux on AMD [...] platforms
              Oh, come on. No, it isn't working until AMD opens up XvBA.

              Of course, this uses VA API. ;-)
              Just curious: I think it'd be nice to have *one* API for video decode acceleration instead of multiple ones on Unix, and in my opinion VDPAU is the better choice. First, simply because it's a nice, well-documented API; second, because there's a lot of application support already; and third, because it offers some functionality that is missing in VA-API.
              Why are you advocating VA-API? What makes it more suitable for this task, in your opinion?

              I'm getting the impression it's all just an NIH problem...


              • #27
                Originally posted by greg View Post
                First, simply because it's a nice, well-documented API; second, because there's a lot of application support already; and third, because it offers some functionality that is missing in VA-API.
                What functionality do you think is missing in VA API?

                Why are you advocating VA-API? What makes it more suitable for this task, in your opinion?
                You seem to confuse API and implementation. There are more than 6 VA drivers available now. VDPAU does not have that many implementations. Sure, there are probably more players supporting VDPAU, but if those players don't cover much of the user base because their GPU or VPU is not supported, that won't be very useful.


                • #28
                  Originally posted by gbeauche View Post
                  What functionality do you think is missing in VA API?
                  Post-processing, especially deinterlacing better than bob; blending in VA-API is not very powerful.

                  You seem to confuse API and implementation. There are more than 6 VA drivers available now. VDPAU does not have that many implementations.
                  No, I'm not. I'm aware that only NVidia and S3 implement VDPAU at the moment, but where do you get these 6 VA-API implementations from? As far as I know, only Intel and S3 have native implementations.

                  Sure, there are probably more players supporting VDPAU, but if those players don't cover much of the user base because their GPU or VPU is not supported, that won't be very useful.
                  That's also why I think wrapping to VDPAU instead of VA-API might be a better idea.
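                  To make the "better than bob" point concrete: VDPAU exposes temporal and temporal-spatial (motion-adaptive) deinterlacers as video mixer features that an application can query and enable. A small sketch, assuming the device and its get_proc_address pointer were obtained as in the mixer example earlier in the thread:

                  Code:
/* Sketch: ask a VDPAU implementation which advanced deinterlacers it
 * supports. `device`/`get_proc_address` come from vdp_device_create_x11()
 * as in the earlier mixer example. */
#include <vdpau/vdpau.h>

static void check_deinterlacers(VdpDevice device,
                                VdpGetProcAddress *get_proc_address)
{
    VdpVideoMixerQueryFeatureSupport *query;
    get_proc_address(device, VDP_FUNC_ID_VIDEO_MIXER_QUERY_FEATURE_SUPPORT,
                     (void **)&query);

    VdpBool temporal = VDP_FALSE, temporal_spatial = VDP_FALSE;
    query(device, VDP_VIDEO_MIXER_FEATURE_DEINTERLACE_TEMPORAL, &temporal);
    query(device, VDP_VIDEO_MIXER_FEATURE_DEINTERLACE_TEMPORAL_SPATIAL,
          &temporal_spatial);

    /* If supported, these features are requested in VdpVideoMixerCreate and
     * switched on with VdpVideoMixerSetFeatureEnables, just like the
     * noise-reduction/sharpness features in the earlier sketch. */
}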


                  • #29
                    Originally posted by greg View Post
                    Post-processing, especially deinterlacing better than bob; blending in VA-API is not very powerful.
                    Deinterlacing is indeed missing from the API, but this can be added. Why is blending not very powerful? BTW, it is now even possible to use OpenGL (a rough sketch of that path follows below), so I hope this one is powerful enough in blending for your taste.

                    No, I'm not. I'm aware that only NVidia and S3 implement VDPAU at the moment, but where do you get these 6 VA-API implementations from? As far as I know, only Intel and S3 have native implementations.
                    Actually, I even forgot to count a few other implementations, but those have not been announced yet. Native implementation or not does not really matter since the goal is to support the underlying chip. In no particular order: Intel chips (US15W and G45 for 3 drivers, more to come), S3 chips, NVIDIA chips, AMD chips. I think this covers most of what users have.
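                    For what it's worth, the OpenGL path I mean is the VA/GLX interop: a decoded VA surface is copied into an OpenGL texture, and the application can then blend, filter or composite it however it likes. A rough, untested sketch of the core calls (window/GLX context setup and the decoding itself are omitted, and `tex`/`surface` are assumed to exist already):

                    Code:
/* Sketch: hand a decoded VA-API surface to OpenGL via the VA/GLX interop.
 * Assumes a current GLX context, a bound 2D texture `tex` of the right
 * size, and an already decoded VASurfaceID `surface`. */
#include <GL/gl.h>
#include <va/va.h>
#include <va/va_glx.h>

static VAStatus copy_to_texture(VADisplay dpy, VASurfaceID surface, GLuint tex)
{
    void *gl_surface;
    VAStatus status;

    /* Associate the GL texture with a VA/GLX surface object... */
    status = vaCreateSurfaceGLX(dpy, GL_TEXTURE_2D, tex, &gl_surface);
    if (status != VA_STATUS_SUCCESS)
        return status;

    /* ...copy (and colour-convert) the decoded frame into it... */
    status = vaCopySurfaceGLX(dpy, gl_surface, surface, VA_FRAME_PICTURE);

    /* ...after which `tex` can be drawn with whatever GL blending or
     * shaders the application wants. */
    vaDestroySurfaceGLX(dpy, gl_surface);
    return status;
}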


                    • #30
                      Originally posted by gbeauche View Post
                      Deinterlacing is indeed missing from the API, but this can be added. Why is blending not very powerful?
                      It's only possible to use a fixed blending equation (or does it perhaps only support chroma-keying? I'm not sure... it would be great if the documentation sucked less... seriously). OpenGL is of course powerful, but it also complicates things a lot.

                      Well, are you going to add deinterlacing to the API? Sure, this could be done with OpenGL as well, but then every application would probably reimplement this again.

                      Native implementation or not does not really matter since the goal is to support the underlying chip.
                      If that is what you care about, I have to ask again why you settled on VA-API, since an API that is not used anywhere isn't helpful, and application support for VA-API barely exists. VDPAU is pretty much the only video decode API that is relevant in practice at the moment.
