AMD Packs In More AMDGPU Features For Linux 4.15


  • #11
    Originally posted by bridgman View Post

    How can a HW <-> OS driver be OS-agnostic ?
    I mean the part that's in the hardware + firmware. That should be OS-agnostic. The software part of that hybrid approach should be OS-specific, sure. I understood that something is blocking its usage on the hardware / firmware side of things (I've read something about waiting for firmware to enable something and so on). Maybe that's incorrect and it's just a missing software implementation? If that's the case, that's better, I suppose.
    Last edited by shmerl; 10-09-2017, 05:07 PM.



    • #12
      Originally posted by shmerl View Post
      I understood that something is blocking its usage on the hardware / firmware side of things (I've read something about waiting for firmware to enable something and so on).
      Ahh, OK. I had not heard anything about the HW or microcode being available on Windows but not Linux, so that interpretation never occurred to me. Thanks for explaining.

      There have sometimes been licensing issues in the past (e.g. MS pays license fees per copy of Windows and complies with license requirements by being closed source), but I haven't heard anything like that recently.



      • #13
        Originally posted by shmerl View Post

        I mean the part that's in the hardware + firmware. That should be OS-agnostic. The software part of that hybrid approach should be OS-specific, sure. I understood that something is blocking its usage on the hardware / firmware side of things (I've read something about waiting for firmware to enable something and so on). Maybe that's incorrect and it's just a missing software implementation? If that's the case, that's better, I suppose.
        There are software implementations and software implementations. Take something like OpenGL: the entire point of the thing is to render on a GPU, yet many implementations are CPU-bound and the drivers are OS-dependent. It's not as if there were some fixed-function hardware you could just tell "make it so" and get your output. It's the same story with encoding/decoding without actual hardware support: you might be able to get the GPU to do what you want, but it's a lot of work and won't be as efficient as real hardware support anyway.



        • #14
          Originally posted by Brisse View Post
          I know it says so on Wikipedia ( https://en.wikipedia.org/wiki/Unifie..._Decoder#UVD_6 ) but I'm pretty certain it's wrong. It doesn't have VP9 hardware decoding. They cheated a bit and implemented a GPGPU/CPU based hybrid decoding for it, but that's on Windows only, and I wouldn't really call it proper hardware decoding.
          Oh, well, that would be cheating, indeed. Because then it would be done in SW on the CPU/shaders. :/
          But then, at least, it should be possible to write something like this as SW for "Linux" / userland too, hook it up to VDPAU/VA-API/OpenMAX/... and let programs use it, even if parts of it require some more CPU power. Half-accelerated is still better than none.

          Originally posted by shmerl View Post
          Why is it on Windows only? It should be OS agnostic. What prevents it from being used on Linux?
          I guess it's currently implemented only in the blob driver, then, because according to Brisse it's done partly in SW. I think the early Radeons (without UVD hardware) also advertised video acceleration, but maybe it was done partly on the CPU and partly on the GPU shaders?


          Originally posted by agd5f View Post
          H.265 encode is implemented as part of the UVD engine rather than VCE. The two engines are very similar. On Raven, the two separate engines (UVD and VCE) are replaced by a single engine (VCN).
          Interesting, thanks a lot. I just suspected, because they were separate blocks and it was decode-only in the past, that there might still be some fundamental difference in HW between the two operations, and that it wasn't possible to "kind of use UVD backwards to encode".

          It's really neat to have so many devs here so you get some insights.
          Stop TCPA, stupid software patents and corrupt politicians!



          • #15
            Originally posted by Adarion View Post
            Oh, well, that would be cheating, indeed. Because then it would be done in SW on the CPU/shaders. :/
            But then, at least, it should be possible to write something like this as SW for "Linux" / userland too, hook it up to VDPAU/VA-API/OpenMAX/... and let programs use it, even if parts of it require some more CPU power. Half-accelerated is still better than none.
            Video decode/encode do not parallelize very well. There are certain stages that map well to shaders, but the whole pipeline does not. Switching between CPU and GPU usually involves a lot of synchronization which is not good for performance. I'm not too familiar with VP9, but for other codecs, it's usually not much of a win compared to the CPU. That's why most GPUs have dedicated fixed function decode/encode hw.
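            As a toy sketch of the point above (invented example, not any real codec): entropy decoding is inherently serial because each symbol depends on a context updated by every previous symbol, while per-block transforms are independent and parallelize trivially.

```python
# Toy illustration, NOT a real codec: why video pipelines mix
# serial and parallel stages (all names here are invented).

def entropy_decode(bits):
    """Serial stage: each symbol depends on a context updated by
    every previous symbol (CABAC-like), so it can't be split
    across threads or shaders."""
    context, symbols = 0, []
    for b in bits:
        symbol = b ^ (context & 1)               # depends on all prior output
        context = ((context << 1) | symbol) & 0xFF
        symbols.append(symbol)
    return symbols

def inverse_transform(block):
    """Parallel stage: each block is independent, so this is the
    kind of work that maps well onto GPU shaders."""
    return [2 * x for x in block]

symbols = entropy_decode([1, 0, 1, 1, 0])        # must run serially
blocks = [symbols[i:i + 2] for i in range(0, len(symbols), 2)]
pixels = [inverse_transform(b) for b in blocks]  # each block independent
```

            Bouncing results between the serial stage (CPU) and the parallel stage (GPU) is exactly where the synchronization overhead mentioned above comes in.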



            • #16
              Originally posted by Brisse View Post

              I know it says so on Wikipedia ( https://en.wikipedia.org/wiki/Unifie..._Decoder#UVD_6 ) but I'm pretty certain it's wrong. It doesn't have VP9 hardware decoding. They cheated a bit and implemented a GPGPU/CPU based hybrid decoding for it, but that's on Windows only, and I wouldn't really call it proper hardware decoding.
              I wonder why there's a need to have "hardware decoding/encoding" in its current form. Don't they really have a binary blob running on specialized hardware? What about making that hardware fully programmable, so projects like FFmpeg can be built for it instead of relying on what they provide in their GPUs?

              What about Khronos Group's OpenMAX?



              • #17
                Originally posted by timofonic View Post

                I wonder why there's a need to have "hardware decoding/encoding" in its current form. Don't they really have a binary blob running on specialized hardware? What about making that hardware fully programmable, so projects like FFmpeg can be built for it instead of relying on what they provide in their GPUs?
                Video decode/encode hw is fixed function: it's designed to do specific operations for a specific codec. If you change the codec, the hw no longer fits. If you make a generic solution you end up with a CPU or a GPU, neither of which by itself is a good fit for video. Even on GPUs there are some parts of the 3D pipeline that do not map well to shaders, so they are implemented as fixed-function blocks. Some new graphics API features require newer fixed-function blocks, so it's not completely programmable.

                You could build a similar mixed programmable and fixed function pipeline for video, but then you are still limited by the fixed function blocks and having a big pipeline would use more power and die area to support a relatively small number of states (as compared to graphics).

                Originally posted by timofonic View Post
                What about Khronos Group's OpenMAX?
                It's a media framework similar to VDPAU or VA-API.



                • #18
                  Originally posted by agd5f View Post
                  Video decode/encode hw is fixed function: it's designed to do specific operations for a specific codec. If you change the codec, the hw no longer fits. If you make a generic solution you end up with a CPU or a GPU, neither of which by itself is a good fit for video. Even on GPUs there are some parts of the 3D pipeline that do not map well to shaders, so they are implemented as fixed-function blocks. Some new graphics API features require newer fixed-function blocks, so it's not completely programmable.

                  You could build a similar mixed programmable and fixed function pipeline for video, but then you are still limited by the fixed function blocks and having a big pipeline would use more power and die area to support a relatively small number of states (as compared to graphics).
                  It's okay then, it makes sense. It's kind of sad to still need fixed-function hw for video encoding/decoding these days, but hardware seems to have limits when it comes to achieving the reconfigurable-computing dream at low power.

                  Originally posted by agd5f View Post
                  It's a media framework similar to VDPAU or VA-API.
                  Yes, but OpenMAX is a standard. VDPAU, VA-API and cuvid/nvdecode aren't.

                  So what about ditching that mess, using OpenMAX, and participating in its development to make it better for desktop systems too?



                  • #19
                    Originally posted by timofonic View Post
                    Yes, but OpenMAX is a standard. VDPAU, VA-API and cuvid/nvdecode aren't.

                    So what about ditching that mess, using OpenMAX, and participating in its development to make it better for desktop systems too?
                    They are all standards. We support OpenMAX via gstreamer already in mesa for radeon hw, but most applications use VDPAU and VAAPI.
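                    A hypothetical sketch of how an application ends up supporting several of these APIs at once rather than a single standard (backend names and the preference order are invented for illustration):

```python
# Hypothetical sketch: a player probing several acceleration APIs
# at runtime and falling back to software decoding. Names are
# invented, not any real application's logic.

PREFERRED = ["vaapi", "vdpau", "openmax"]  # assumed preference order

def pick_backend(available):
    """Return the first preferred backend the system offers,
    or "software" when none is available."""
    for name in PREFERRED:
        if name in available:
            return name
    return "software"

print(pick_backend({"vdpau", "openmax"}))  # → vdpau
```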



                    • #20
                      Originally posted by agd5f View Post

                      They are all standards. We support OpenMAX via gstreamer already in mesa for radeon hw, but most applications use VDPAU and VAAPI.
                      GStreamer? Please no, that's GNOME software. It requires GLib...

