Work-In-Progress Porting Of GCN 1.0/1.1 UVD To AMDGPU DRM Driver


  • #11
    Originally posted by AndyChow View Post
    I don't even get the point of things like UVD. It never works and you're better off switching to software decode. IMO they should remove it and save themselves the mm². Or replace it with a Quick Sync equivalent. And not VCE; VCE isn't even faster than CPU encoding, and the quality is terrible.

    Maybe I'm missing something.
    If you use Chrome or Firefox you can enable the H264ify plugin... it makes most videos streamed on YouTube accelerated, as long as an h264 stream of the video is available. It certainly works on my Lenovo x130e netbook...

    Comment


    • #12
      I'm confused. I've been using the AMDGPU kernel driver with my R7 260X (CIK) for a while now (a year?) and hardware-accelerated video playback has appeared to be working since I switched. The kernel log mentions UVD a few times and vdpauinfo shows all the correct information. Video playback with mplayer/mpv shows vdpau is being used. What am I missing?
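      For anyone who wants to run the same check, two quick commands suffice (a sketch assuming vdpauinfo and mpv are installed; the video filename is a placeholder):

```shell
# List the decoder profiles the driver exposes through VDPAU;
# a working UVD setup shows H264/MPEG2/VC1 entries with non-zero limits.
vdpauinfo | grep -A 10 'Decoder capabilities'

# Request VDPAU hardware decoding explicitly; mpv reports
# hardware decoding in its output when UVD is actually in use.
mpv --hwdec=vdpau somevideo.mkv
```

If the decoder table is empty or mpv silently falls back to software decode, the firmware most likely failed to load; the kernel log (`dmesg | grep -i uvd`) is the place to look next.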

      Comment


      • #13
        Originally posted by Imroy View Post
        I'm confused. I've been using the AMDGPU kernel driver with my R7 260X (CIK) for a while now (a year?) and hardware-accelerated video playback has appeared to be working since I switched. The kernel log mentions UVD a few times and vdpauinfo shows all the correct information. Video playback with mplayer/mpv shows vdpau is being used. What am I missing?
        Nothing. Michael's article is only about SI; amdgpu already supports UVD & VCE with CIK.

        Comment


        • #14
          Originally posted by DanL View Post
          Yeah, but who the heck wants to do that?
          As said above, this can be automated easily. We have packages that do far more complex shit during installation.

          Comment


          • #15
            Originally posted by starshipeleven View Post
            As said above, this can be automated easily. We have packages that do far more complex shit during installation.
            It's still extra effort, and I still hope that if/when AMD switches to amdgpu as the default for GCN 1.0/1.1, linux-firmware will switch to the version with the header.
            (What I didn't realize is that adding the header would prevent it from working with radeon.)

            Comment


            • #16
              Originally posted by AndyChow View Post
              I don't even get the point of things like UVD. It never works and you're better off switching to software decode.
              Works for me...
              The point is that it's more efficient to decode with dedicated hardware.

              Comment


              • #17
                Originally posted by AndyChow View Post
                I don't even get the point of things like UVD. It never works and you're better off switching to software decode. IMO they should remove it and save themselves the mm². Or replace it with a Quick Sync equivalent. And not VCE; VCE isn't even faster than CPU encoding, and the quality is terrible.

                Maybe I'm missing something.
                Remember, the die isn't designed around Linux's ~3% market share. It works fine and stably on other operating systems with official drivers, which is where the majority of the customers are.

                Comment


                • #18
                  Originally posted by DanL View Post

                  Works for me...
                  The point is that it's more efficient to decode with dedicated hardware.
                  It might work, but what's the point? Does it actually reduce power consumption in real life? If it decreases your CPU usage from 20% to 10%, but increases your GPU frequency from 300 MHz to 1250 MHz, what's the point? It might actually increase overall power consumption.

                  If they removed that ASIC, the space could be used for more compute units, or the die could simply be made smaller.

                  Comment


                  • #19
                    Originally posted by AndyChow View Post

                    It might work, but what's the point? Does it actually reduce power consumption in real life? If it decreases your CPU usage from 20% to 10%, but increases your GPU frequency from 300 MHz to 1250 MHz, what's the point? It might actually increase overall power consumption.

                    If they removed that ASIC, the space could be used for more compute units, or the die could simply be made smaller.
                    Try dropping CPU use from 50% to 3-5%; that's a 20 to 40 W saving. Another use case is a media PC, mini PC, or cheap laptop where the CPU is a low-frequency Excavator that just can't handle decoding on its own.
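                    As a rough sanity check on those figures, here is a back-of-the-envelope sketch in Python. The 95 W TDP, the load percentages, the ~3 W cost of the UVD block, and the assumption that CPU package power scales linearly with load are all illustrative guesses, not measurements:

```python
# Back-of-the-envelope saving from offloading decode to fixed-function
# hardware. All numbers below are illustrative assumptions.

CPU_TDP_W = 95.0       # assumed desktop CPU TDP
SW_DECODE_LOAD = 0.50  # CPU load with software decode (figure from the post)
HW_DECODE_LOAD = 0.04  # CPU load with UVD doing the work (3-5%)
UVD_BLOCK_W = 3.0      # assumed draw of the UVD block itself

# Crude linear model: package power proportional to load.
cpu_saving = CPU_TDP_W * (SW_DECODE_LOAD - HW_DECODE_LOAD)
net_saving = cpu_saving - UVD_BLOCK_W

print(f"CPU-side saving: {cpu_saving:.1f} W")  # ~43.7 W
print(f"Net saving:      {net_saving:.1f} W")  # ~40.7 W
```

Even if the linear-scaling model is generous, the gap is wide enough that the conclusion survives; and on a low-power mobile CPU the question is less about watts than about whether software decode can keep up at all.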

                    Comment


                    • #20
                      Originally posted by Peter Fodrek View Post
                      One more reason to merge DAL/DC/Display Code as soon as possible
                      Why? How does that relate?

                      Comment
