VDPAU with the Radeon driver?

  • #11
    It's better to think of UVD as a separate chip incorporated into the same GPU: it can decode a fixed set of formats (H.264, MPEG-2, VC-1), but if new formats appear the chip won't help. The real strength of this chip is that it consumes only 0.5-2 W while CPU and GPU load stay near 0%.

    A Gallium3D implementation is more like a normal program running on your GPU's shaders. If it is implemented and maintained, new formats should be "easy" to incorporate.

    Yeah, there are rumors pointing to a Gallium3D state tracker for this, but it seems that nobody is working on it at the moment, so we will have to wait a little.
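
    Just to illustrate the "fixed set of formats" point, here is a rough sketch (mine, not from this thread) of how an application would ask a VDPAU driver which decode profiles it supports; any codec the hardware doesn't know about simply comes back as unsupported:

        /* Sketch only: list which fixed-function decode profiles the
         * installed VDPAU driver exposes.  UVD-class hardware has this
         * list baked in, so a new codec just reports "no". */
        #include <stdio.h>
        #include <stdint.h>
        #include <vdpau/vdpau.h>
        #include <vdpau/vdpau_x11.h>
        #include <X11/Xlib.h>

        static const struct {
            VdpDecoderProfile profile;
            const char *name;
        } profiles[] = {
            { VDP_DECODER_PROFILE_MPEG2_MAIN,   "MPEG-2 Main"   },
            { VDP_DECODER_PROFILE_H264_HIGH,    "H.264 High"    },
            { VDP_DECODER_PROFILE_VC1_ADVANCED, "VC-1 Advanced" },
        };

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            VdpDevice dev;
            VdpGetProcAddress *get_proc_address;

            /* libvdpau loads whatever backend driver is configured here. */
            if (!dpy || vdp_device_create_x11(dpy, DefaultScreen(dpy),
                                              &dev, &get_proc_address) != VDP_STATUS_OK)
                return 1;

            VdpDecoderQueryCapabilities *query;
            get_proc_address(dev, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES,
                             (void **)&query);

            for (unsigned i = 0; i < sizeof(profiles) / sizeof(profiles[0]); i++) {
                VdpBool supported;
                uint32_t max_level, max_macroblocks, max_width, max_height;
                query(dev, profiles[i].profile, &supported, &max_level,
                      &max_macroblocks, &max_width, &max_height);
                printf("%-14s %s\n", profiles[i].name, supported ? "yes" : "no");
            }
            return 0;
        }

    With no VDPAU backend installed for your driver, device creation itself fails, which is exactly the gap being discussed here.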


  • #12
    Somebody already did some accelerated video over Gallium3D.


  • #13
    Is that work still going on?

    I mainly look for info on the Mesa mailing list, and it seems that video acceleration is not being worked on (at the moment).

    A Gallium3D implementation will obviously consume more than 2 W and will load your CPU and GPU a little, mainly the GPU. One advantage is that, once implemented, any card with shaders (all current 3D cards) and a Gallium driver could use it.

    Implementing VDPAU is not the problem; VDPAU is only a "presentation" of functions. The real problem is implementing the program that uses shaders to decode the video.
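
    To make the "presentation of functions" point concrete, here is a minimal sketch (again mine, plain libvdpau, nothing Gallium3D-specific) of how an application binds to a VDPAU backend: it asks the driver for a function table and pulls the decoder entry points out of it. Everything hard lives behind those pointers, and that is the part a shader-based implementation would have to provide:

        /* Sketch only: the VDPAU "front end" is just function pointers
         * handed out by the driver; the real decoding work hides behind
         * VdpDecoderCreate/VdpDecoderRender inside that driver. */
        #include <stdio.h>
        #include <vdpau/vdpau.h>
        #include <vdpau/vdpau_x11.h>
        #include <X11/Xlib.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy)
                return 1;

            VdpDevice device;
            VdpGetProcAddress *get_proc_address;

            /* libvdpau loads the backend driver and gives us its
             * get_proc_address entry point. */
            if (vdp_device_create_x11(dpy, DefaultScreen(dpy), &device,
                                      &get_proc_address) != VDP_STATUS_OK)
                return 1;

            /* Fetch one function out of the driver's table. */
            VdpDecoderCreate *decoder_create;
            get_proc_address(device, VDP_FUNC_ID_DECODER_CREATE,
                             (void **)&decoder_create);

            /* Ask for an H.264 decoder; a shader-based backend would have
             * to implement everything that happens behind this call. */
            VdpDecoder decoder;
            VdpStatus st = decoder_create(device, VDP_DECODER_PROFILE_H264_HIGH,
                                          1920, 1080, /* max_references */ 4,
                                          &decoder);
            printf("decoder create: %s\n",
                   st == VDP_STATUS_OK ? "ok" : "not supported");
            return 0;
        }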


  • #14
    Originally posted by Jimbo View Post
    Is that work still going on?

    I mainly look for info on the Mesa mailing list, and it seems that video acceleration is not being worked on (at the moment).

    A Gallium3D implementation will obviously consume more than 2 W and will load your CPU and GPU a little, mainly the GPU. One advantage is that, once implemented, any card with shaders (all current 3D cards) and a Gallium driver could use it.

    Implementing VDPAU is not the problem; VDPAU is only a "presentation" of functions. The real problem is implementing the program that uses shaders to decode the video.

    But why can't those implementations use UVD when it is available (and NVIDIA's/Intel's equivalent where available), so that shaders are only the fallback in the "slowest" case?


  • #15
    The story:

    Both recent nVidia cards and recent AMD cards have specialised hardware for decoding video. nVidia is providing access to this in their binary driver through the VDPAU API, and it's working well. ATi is providing access to this in their binary driver through VA-API (like Intel is), and it kind of works, sometimes.

    The use of this specialised hardware in open drivers is a big question mark, because of the Hollywood studio lobby. It is very likely that this will never be opened and that no open source driver will ever do this. We don't know yet, but it's quite likely.

    The good news is that you can accelerate much of video decoding on a GPU using shaders, which are essentially programs running on your GPU. There is enough information on how to do this on Radeons, and somebody seems to have done something over the Gallium3D infrastructure, but it's far from ready yet. This will probably be implemented eventually.

    Running decoding through shaders will not bring as much power saving as using the dedicated onboard chip, but it should increase performance significantly. It is also adaptable to new formats.

    So, using the onboard specialised chip (UVD) is not likely, but doing decoding with shaders is likely to appear in the future for all cards supported by the Gallium3D framework. It's not quite as good in some ways, but it should be good enough for most.
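
    For anyone wondering what "programs running on your GPU" means for video, here is a toy C version (my illustration, not real decoder code) of one small decoding step: motion compensation for a 16x16 block. Real decoders also need sub-pixel interpolation, deblocking and so on, but this per-pixel pattern is exactly the kind of work that maps well onto shaders, which run it for thousands of blocks in parallel instead of looping on the CPU:

        /* Toy illustration, not real decoder code: copy a motion-shifted
         * block from the reference frame and add the decoded residual,
         * clamping to 8-bit range.  Bounds checking is omitted. */
        #include <stdint.h>

        void motion_compensate_block(const uint8_t *ref, int ref_stride,
                                     uint8_t *dst, int dst_stride,
                                     int mv_x, int mv_y,        /* motion vector  */
                                     const int16_t *residual)   /* 16x16 residual */
        {
            for (int y = 0; y < 16; y++) {
                for (int x = 0; x < 16; x++) {
                    int pred = ref[(y + mv_y) * ref_stride + (x + mv_x)];
                    int val  = pred + residual[y * 16 + x];
                    if (val < 0)   val = 0;
                    if (val > 255) val = 255;
                    dst[y * dst_stride + x] = (uint8_t)val;
                }
            }
        }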


  • #16
    Also, there is some hope that if AMD manages to separate UVD and DRM enough in future chips, they might be able to release UVD documentation for those chips. Note that this doesn't affect current chips. And it's also a very big if.


  • #17
    Originally posted by markg85 View Post
    But why can't those implementations use UVD when it is available (and NVIDIA's/Intel's equivalent where available), so that shaders are only the fallback in the "slowest" case?

    Basically, patent problems, other companies involved... it seems that UVD is not going to be opened.

    Anyway, I think it is a very good idea to use shaders for video decoding. They will not get down to 2 W, but I believe it should be competitive too. Shaders are very powerful!! Lots of scientists use them to run math simulations, for example. Plus, it is a *general* solution: you will not depend on other chips.


  • #18
    Originally posted by Jimbo View Post
    Basically, patent problems

    It's not so much about patents as about the copy protection inside the hardware sold to us. It isn't used for anything on the Linux side, but on the Windows side it is used - unless I've understood completely wrong - for e.g. Blu-ray movie playback, so that the content is only decrypted at the last minute by hardware functionality in the card itself and you can't get in between trivially. Now, we don't really care that much about that copy protection anyway, but the problem is that the UVD hardware and that copy protection are too tightly integrated with each other in current hardware, and AMD hasn't been able to give documentation on one without revealing critical bits of the other. And they really cannot afford to reveal anything about the digital rights management functionality without causing serious problems for themselves on the Windows side.

    With future chips this might be different, but that's irrelevant to the situation at hand. For now, shader-based decoding is the best we're going to get.


  • #19
    Here is an item nobody is talking about: reverse engineering.

    Why is this technique not used in conjunction with UVD? If it is possible for a whole GPU (nouveau), why shouldn't it be possible for UVD?


  • #20
    Originally posted by Rabauke View Post
    Here is an item nobody is talking about: reverse engineering.

    Why is this technique not used in conjunction with UVD? If it is possible for a whole GPU (nouveau), why shouldn't it be possible for UVD?

    The article I just read on THIS SITE about UVD mentioned that it would probably not be much of a challenge to reverse engineer it (that easy?!)

    Someone just has to give it a shot...
    On the other hand, it's probably not usable in the near future for my card (a 5770)... perhaps for those r300 guys (if that hardware has UVD).
