Gallium3D VDPAU On Radeon Starts Working


  • #21
    This is pretty cool, but I still doubt the usefulness of this. Power consumption aspects aside, if your GPU is powerful enough for accelerating HD playback, it's likely your CPU will be powerful enough to do it alone. Also keep in mind that with Gallium's partial acceleration, a fair bit of CPU power is still needed, especially with H.264.


  • #22
    Originally posted by brent View Post
    This is pretty cool, but I still doubt the usefulness of this. Power consumption aspects aside, if your GPU is powerful enough for accelerating HD playback, it's likely your CPU will be powerful enough to do it alone.
    You don't need a powerful GPU to accelerate HD video. You can get a cheap one for 50 bucks that does it. And then it's likely that your CPU is also on the cheap side, and therefore no good for HD video. Which is the whole point: you spend 50 bucks on a GPU that can play HD video, instead of 200 bucks on a CPU that could do it.


  • #23
    It's easy to decode video with the ASIC (if you know how to), but that is not what is being used here. The Gallium implementation uses shaders, which is less efficient and cannot offload the complete decode pipeline. I'm sceptical; I doubt this is going to work well on low-end GPUs.

    Moreover, even the cheapest CPUs you can get nowadays (save for some of the remaining single-core surplus and netbook CPUs) are capable of decoding HD video at Blu-ray bitrates just fine. You don't need to spend 200 bucks.


  • #24
    Originally posted by brent View Post
    It's easy to decode video with the ASIC (if you know how to), but that is not what is being used here. The Gallium implementation uses shaders, which is less efficient and cannot offload the complete decode pipeline. I'm sceptical; I doubt this is going to work well on low-end GPUs.
    That doesn't sound too good :-/

    Moreover, even the cheapest CPUs you can get nowadays (save for some of the remaining single-core surplus and netbook CPUs) are capable of decoding HD video at Blu-ray bitrates just fine. You don't need to spend 200 bucks.
    Currently not, though. Distros don't ship multithreaded players; that stuff is still locked inside experimental Git and SVN branches. And with only one core, not even a Core i7 Extreme can decode high-bitrate H.264.


  • #25
    Before we had UVD (r1xx-r5xx), we used the 3D engine for video decode, so it's definitely viable.


  • #26
    Originally posted by RealNC View Post
    Currently not, though. Distros don't ship multithreaded players; that stuff is still locked inside experimental Git and SVN branches. And with only one core, not even a Core i7 Extreme can decode high-bitrate H.264.
    H.264 MT is mainline in ffmpeg and will be merged very soon in libav (it is already partly merged).
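    As a side note on the multithreading point above: a build of ffmpeg recent enough to carry the merged H.264 MT code exposes the decoder thread count through the standard -threads option, so multithreaded decode can be tried from the command line. This is a sketch, not from the thread; the input file name is a placeholder:

    ```shell
    # Decode with 4 decoder threads and discard the output;
    # useful for benchmarking pure H.264 decode speed.
    # "input.mkv" is a hypothetical sample file.
    ffmpeg -threads 4 -i input.mkv -f null -
    ```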


  • #27
    Originally posted by brent View Post
    It's easy to decode video with the ASIC (if you know how to), but that is not what is being used here. The Gallium implementation uses shaders, which is less efficient and cannot offload the complete decode pipeline. I'm sceptical; I doubt this is going to work well on low-end GPUs.

    Moreover, even the cheapest CPUs you can get nowadays (save for some of the remaining single-core surplus and netbook CPUs) are capable of decoding HD video at Blu-ray bitrates just fine. You don't need to spend 200 bucks.
    I think the most interesting area for video decode acceleration is hardware that doesn't have a capable CPU - particularly HTPC systems. Low-power CPUs have the benefit of not generating as much heat, so fewer fans and less cooling are required, and the device is quieter.
    Granted, being able to use the proper decoding hardware of the video card would be preferable, but sadly that's the domain of blobs.


  • #28
    Originally posted by mirv View Post
    Granted, being able to use the proper decoding hardware of the video card would be preferable, but sadly that's the domain of blobs.
    For now. Let's hope the AMD guys find a way around it soon.

    Actually, an (un)official update from bridgman would be appreciated. The last time he mentioned this was in December, and he said something about six months.


  • #29
    Originally posted by mirv View Post
    I think the most interesting area for video decode acceleration is hardware that doesn't have a capable CPU - particularly HTPC systems. Low-power CPUs have the benefit of not generating as much heat, so fewer fans and less cooling are required, and the device is quieter.
    Nah, you won't save any power with this. GPUs are rather power-hungry, and the shader-based approach taxes both CPU and GPU at the same time. This will be far less efficient than CPU decoding.


  • #30
    Originally posted by agd5f View Post
    Before we had UVD (r1xx-r5xx), we used the 3D engine for video decode, so it's definitely viable.
    I think the 3D engine is still used; at least I have read that the Radeon 6770 uses shaders for MVC decode, and the Xenos GPU also uses shaders for HD playback (although I am not sure whether Xenos has UVD).