A NVIDIA VDPAU Back-End For Intel's VA-API


    Phoronix: A NVIDIA VDPAU Back-End For Intel's VA-API

    Just over a month ago we shared that patches had emerged to support Intel's VA-API in MPlayer and FFmpeg. VA-API supports popular video formats such as MPEG-4 and VC-1 and is able to accelerate iDCT, motion compensation, VLD (variable-length decoding), bit-stream processing, and other functions, but this video API has not picked up much speed yet. The only display driver to have implemented hardware support for VA-API is Intel's closed-source driver (the one that's a bloody mess) for the Poulsbo chipset, which is found in a few select netbooks/nettops. However, it is now possible to use Intel's VA-API with NVIDIA hardware (the GeForce 8 series and later), and soon it will be possible to use this video API on ATI/AMD hardware too.

    http://www.phoronix.com/vr.php?view=13460

  • #2
    Well, I am just hoping NVIDIA will make some changes to their driver so cards with only 128MB VRAM + TurboCache can play HD video.
    My Quadro FX 570M (128MB) can play non-HD videos without a problem, but it always gives error 23 (insufficient VRAM) when I try to play 720p or higher video. nvidia-settings does show that this card has 512MB of VRAM (128MB onboard + 384MB TurboCache, I guess), but it seems the driver doesn't make use of the TurboCache.
    Last edited by dickeywang; 02-03-2009, 06:02 AM.



    • #3
      128MB is more than plenty for all kinds of video. I guess there's some other problem and this is just a misleading error message.



      • #4
        Maybe Compiz needs lots of VRAM at the same time, too.



        • #5
          128MB isn't really that much when dealing with all of the reference frames you need to maintain for some H264 streams. For example, an unrestricted (i.e. not Level 4.1) 1920x1088 stream with 15 reference frames would use over 60MB just for the reference surfaces (and that's assuming 4:2:2 YUV surfaces -- if they are stored in RGB, it's over 90MB).

          What I'm wondering is whether VDPAU and/or VA-API support accelerating discrete steps in the video decoding process (rather than full bit-stream processing), so that they could be adapted to accelerate Theora, or VC-1 on GPUs without VC-1 support (even my 3GHz dual core can't play a 1080p Theora stream, while it can play most 1080p H.264 streams).
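The reference-frame figures above can be checked with a few lines of Python. This is a back-of-the-envelope sketch under the poster's own assumptions (tightly packed 4:2:2 YUV at 2 bytes/pixel, packed RGB at 3 bytes/pixel); real drivers pad and align surfaces, so actual usage would be somewhat higher:

```python
# Rough VRAM estimate for H.264 reference surfaces (ignores padding/alignment).

def ref_surface_bytes(width, height, bytes_per_pixel, num_refs):
    """Total bytes needed to hold num_refs reference frames."""
    return width * height * bytes_per_pixel * num_refs

W, H, REFS = 1920, 1088, 15  # unrestricted stream with 15 reference frames

yuv422 = ref_surface_bytes(W, H, 2, REFS)  # 4:2:2 YUV: 2 bytes/pixel
rgb    = ref_surface_bytes(W, H, 3, REFS)  # packed RGB: 3 bytes/pixel

print(f"4:2:2 YUV: {yuv422 / 1e6:.1f} MB")  # -> 62.7 MB ("over 60MB")
print(f"RGB:       {rgb / 1e6:.1f} MB")     # -> 94.0 MB ("over 90MB")
```

That is before counting the decode target, bit-stream buffers, and anything the compositor holds, which is why a 128MB card can run short on unrestricted streams.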



          • #6
            Do I understand it correctly that NVIDIA is now actively developing and providing support for all three HD video acceleration interfaces (VDPAU, VA-API & XvBA?), even their competitors', whereas the fglrx team is mainly farting around trying to fix one more bug than they introduce in each release?

            bridgman's support and involvement around here and the potential of the free radeon drivers are really the only reasons why a sane person should at all consider getting an ATI card. Or perhaps one needs to be a little insane to go ATI at this point, by the looks of it. I dunno anymore.



            • #7
              Don't think so. This is a third-party developer layering VA-API over other video APIs so their higher-level code only needs to support a single API. Nothing to do with NVIDIA.



              • #8
                Originally posted by korpenkraxar View Post
                Do I understand it correctly that NVIDIA is now actively developing and providing support for all three HD video acceleration interfaces (VDPAU, VA-API & XvBA?), even their competitors', whereas the fglrx team is mainly farting around trying to fix one more bug than they introduce in each release?
                You got that totally wrong. ATI (fglrx) only provides XvBA, NVIDIA (nvidia) only provides VDPAU, and the FOSS drivers (at some point) will provide only VA-API.

                This is really, just as bridgman said, a developer (who probably isn't involved with any of these vendors) who wrote/will write a back-end that translates VA-API calls into VDPAU/XvBA calls.
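The translation-layer idea being described can be sketched roughly like this. Everything here is hypothetical and illustrative only (the real back-ends are C libraries, and none of these class or method names come from libva or VDPAU); the point is just that the player codes against one front-end API while a thin shim maps each call to whatever the driver actually exposes:

```python
# Illustrative sketch of a VA-API-style front-end delegating to a
# VDPAU-style back-end. All names are made up for this example.

class VdpauBackend:
    """Stands in for a driver that only speaks VDPAU."""
    def vdp_decoder_create(self, profile):
        return f"vdpau-decoder({profile})"
    def vdp_decoder_render(self, decoder, bitstream):
        return f"decoded {len(bitstream)} bytes via {decoder}"

class VaApiFrontend:
    """The single API the player targets; translates into back-end calls."""
    def __init__(self, backend):
        self.backend = backend
    def create_context(self, profile):
        # A VA-API-style request becomes the back-end's own call.
        return self.backend.vdp_decoder_create(profile)
    def decode(self, context, bitstream):
        return self.backend.vdp_decoder_render(context, bitstream)

# The player only ever sees VaApiFrontend; swapping in an XvBA-style
# back-end would not change the player code at all.
api = VaApiFrontend(VdpauBackend())
ctx = api.create_context("h264")
print(api.decode(ctx, b"\x00\x00\x01"))
```

The cost, as later posts note, is that any behavioral mismatch between the shim and a native implementation leaks through to the player.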



                • #9
                  Originally posted by Zhick View Post
                  You got that totally wrong. ATI (fglrx) only provides XvBA, NVIDIA (nvidia) only provides VDPAU, and the FOSS drivers (at some point) will provide only VA-API.
                  Aha, OK, I see. My bad. So what is the difference from the user's perspective, provided that the back-end implementation works?



                  • #10
                    More player support.



                    • #11
                      Originally posted by korpenkraxar View Post
                      Aha, OK, I see. My bad. So what is the difference from the user's perspective, provided that the back-end implementation works?
                      AFAIK it would mean that, with the Gallium3D/Mesa-based drivers and the open-source Intel and ATI drivers supporting VA-API directly, plus a wrapper/translation layer for fglrx and nvidia, you would just need to implement VA-API support in your player to get GPU-accelerated decoding on all (major) cards.

                      Maybe someone who has a fancy tag like AMD Linux Guy or X.org Dev could confirm whether this might be a/the possible outcome.



                      • #12
                        Well, unofficial specs for XvBA do not really help to get direct support. The wrapper is not a bad idea, though it comes a bit late, perhaps, since VDPAU implementations are already done and getting better, so this wrapper is primarily useful for fglrx. Using VDPAU it is already possible to have the OSD and subtitles displayed (with MPlayer + xine-lib). I guess a lot of the work is in the fine-tuning, so if the Intel implementation behaves just a tiny bit differently from the wrapper, things can get worse than direct support. Did somebody try it with ATI?



                        • #13
                          So does this mean I'll be able to play back 1080p H.264 using the Intel 4500M GPU on Linux in the near future? Or should I just stick with NVIDIA GPUs?
