Adobe's Flash Video Acceleration On Linux Uses VDPAU


  • #16
    Originally posted by mlau:
    This is why Windows is so much better: Microsoft provides ONE interface spec (DxVA), and driver and player software writers only have to worry about ONE interface (never mind that DxVA isn't a perfect system).

    The current VDPAU/VAAPI/XvBA situation in the Linux world is ridiculous: just pick one and fix it up so that all parties are satisfied, instead of having every vendor invent their own.

    Personally, I favor VDPAU: a lot of thought has been put into the overall
    design, and the developers are responsive to inquiries and suggestions.
    Quote for truth. Fragmentation of important APIs like this really harms Linux when it comes to 3rd-party software/driver support.
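
    For anyone who hasn't looked at it: the VDPAU design mlau praises is one X11 entry point plus a get_proc_address lookup for everything else. A minimal sketch of probing H.264 decode support, error handling omitted (assumes the libvdpau and X11 dev headers; build with -lvdpau -lX11):

    #include <vdpau/vdpau.h>
    #include <vdpau/vdpau_x11.h>
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        VdpDevice device;
        VdpGetProcAddress *get_proc_address;

        /* The single public entry point; everything else is fetched below. */
        vdp_device_create_x11(dpy, DefaultScreen(dpy), &device, &get_proc_address);

        VdpDecoderQueryCapabilities *query;
        get_proc_address(device, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES,
                         (void **)&query);

        VdpBool ok;
        uint32_t max_level, max_mbs, max_w, max_h;
        query(device, VDP_DECODER_PROFILE_H264_HIGH,
              &ok, &max_level, &max_mbs, &max_w, &max_h);
        printf("H.264 High: %s, up to %ux%u\n",
               ok ? "supported" : "unsupported", max_w, max_h);
        return 0;
    }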



    • #17
      Originally posted by bwat47:
      Quote for truth. Fragmentation of important APIs like this really harms Linux when it comes to 3rd-party software/driver support.
      GOD I HATE THE 1 MINUTE EDIT LIMIT. Anyway, continuing from my last post:

      Flash really needs video accel support on cards other than Nvidia's as well; compared to Windows, the performance is abysmal, to say the least. Take my laptop, for example: 2 GHz Core 2 Duo, 4 GB RAM, 512 MB HD 2600. Flash absolutely hammers my CPU in Linux: a 480p YouTube video eats ~50% CPU, where in Windows it uses something like 10%. Playing Flash also eats so much CPU that my entire browser becomes less responsive.



      • #18
        To deanjo:

        Adding to my last post, I'd also prefer OpenGL shader HW acceleration over libvdpau, for these reasons:

        1- Using shaders would make it easier to directly HW-decode codecs that (currently) have no libvdpau support, such as VP8 or VP3 (lots of the anime you watch uses VP3, for instance). And if you already know OpenGL shader programming, it's easier to make it work.

        2- This is my main reason for preferring OpenGL over libvdpau/libva: both of them are libraries, so they carry some overhead compared to running OpenGL shaders directly on the hardware. (For me it's a bit like the PulseAudio situation, where in most cases the ALSA library is all we need to drive the sound HW directly.)

        That's my personal opinion, of course.
        Cheers!



        • #19
          Originally posted by evolution:
          To deanjo:

          Adding to my last post, I'd also prefer OpenGL shader HW acceleration over libvdpau, for these reasons:

          1- Using shaders would make it easier to directly HW-decode codecs that (currently) have no libvdpau support, such as VP8 or VP3 (lots of the anime you watch uses VP3, for instance). And if you already know OpenGL shader programming, it's easier to make it work.
          You can still use shaders with VDPAU; nothing is impeding that. The VC-1 support for older Nvidia cards that lacked full VC-1 decoding ability already shows that playback is still accelerated even without a 100% hardware solution.

          2- This is my main reason for preferring OpenGL over libvdpau/libva: both of them are libraries, so they carry some overhead compared to running OpenGL shaders directly on the hardware. (For me it's a bit like the PulseAudio situation, where in most cases the ALSA library is all we need to drive the sound HW directly.)
          VDPAU and PulseAudio have nothing in common: one is an API, the other is a sound server. The "overhead" in VDPAU is next to nonexistent; you would see a near-identical load running a pure shader-based decode versus a VDPAU shader-based decode.
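
          To put that "overhead" in perspective: after init, a VDPAU call is just an indirect C function call. The function pointers are fetched once, something like this (my sketch, no error handling):

          /* Done once at startup with the get_proc_address returned by
           * vdp_device_create_x11(); from then on every call goes straight
           * into the driver through a plain function pointer. */
          static VdpVideoMixerRender *mixer_render;

          static void fetch_entry_points(VdpDevice dev,
                                         VdpGetProcAddress *get_proc_address)
          {
              get_proc_address(dev, VDP_FUNC_ID_VIDEO_MIXER_RENDER,
                               (void **)&mixer_render);
          }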



          • #20
            Just a couple of graphs here comparing shader-based playback vs. VDPAU. The test file was 13.5 Mbit/s H.264 @ 720p on a 1090T-based system. Keep in mind the 1090T has six cores, so one core at 100% shows up as 100/6 ≈ 16.6% CPU usage on the graph.





            • #21
              For clarity, where deanjo's last post referred to "shader based" I believe he was talking about shader based *render* acceleration (colour space conversion, scaling, etc...) while evolution was talking about shader based *decode* acceleration (codec-y stuff)...

              ... I think
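
              By way of illustration (my own sketch, not code from any of the drivers discussed), the render side is the sort of thing one tiny fragment shader covers: full-range BT.601 YUV-to-RGB, with scaling coming almost for free from the texture filtering hardware:

              /* Illustrative only: planar Y/U/V textures in, RGB out,
               * full-range BT.601 coefficients. */
              static const char *yuv_to_rgb_frag =
                  "uniform sampler2D tex_y, tex_u, tex_v;\n"
                  "void main() {\n"
                  "    float y = texture2D(tex_y, gl_TexCoord[0].st).r;\n"
                  "    float u = texture2D(tex_u, gl_TexCoord[0].st).r - 0.5;\n"
                  "    float v = texture2D(tex_v, gl_TexCoord[0].st).r - 0.5;\n"
                  "    gl_FragColor = vec4(y + 1.402 * v,\n"
                  "                        y - 0.344 * u - 0.714 * v,\n"
                  "                        y + 1.772 * u, 1.0);\n"
                  "}\n";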



              • #22
                Originally posted by bridgman:
                For clarity, where deanjo's last post referred to "shader based" I believe he was talking about shader based *render* acceleration (colour space conversion, scaling, etc...) while evolution was talking about shader based *decode* acceleration (codec-y stuff)...

                ... I think
                Yes bridgman, that's what I was talking about: shader-based *decode* acceleration.
                Personally, I think it's cleaner to use that than something with proprietary origins where you have to learn (some/most things) from scratch...

                Deanjo, thanks for your graphs; I've read them, but they haven't convinced me...
                I haven't forgotten that current OpenGL support in the Mesa drivers isn't as developed as it is in the Nvidia/ATI proprietary drivers.

                Cheers!



                • #23
                  Originally posted by evolution:
                  I haven't forgotten that current OpenGL support in the Mesa drivers isn't as developed as it is in the Nvidia/ATI proprietary drivers.

                  Cheers!
                  And more than likely it never will be. You'll be lucky if it ever reaches even 70% of the performance of an optimized stack.

