AMD Releases Open-Source UVD Video Support


  • First, many thanks to twriter, bridgman and the other folks in AMD's Linux department. Fast graphics are becoming a serious consideration on Linux as native Linux games become a reality, and that is a crucial argument for telling AMD's leadership where their business weakness lies. Linux users are a great potential market for AMD GPU sales in the near future, given a free driver that can use all of the GPUs' features and power potential.



    • Originally posted by curaga View Post
      @agd5f

      I read that as "subtitles work, if the player has code for that". Correct?
      Christian would know better, but IIRC VDPAU has an API for subtitles which the driver has to implement. I think Christian already implemented this in his earlier work on VDPAU. That said, it has nothing to do with UVD; it's handled in post-processing.

      Originally posted by curaga View Post
      - is the new firmware compatible with the old DRM? I.e., can the UVD-supporting firmware be shipped without any regression when run on an older kernel + Mesa?
      I'm not sure I understand the question. The kernel driver loads the microcode and sets up the UVD block. If userspace doesn't use the new functionality, the kernel doesn't care. On older kernels the UVD microcode is never loaded and the newer RLC ucode works fine with older kernels. (A small version-check sketch follows this post.)

      Originally posted by curaga View Post
      - it appears that on Windows, using UVD forces a certain core clock*. Does this limitation also apply on Linux?
      It's not a limitation, it's a performance requirement. The clocks need to be high enough to provide appropriate memory bandwidth and graphics performance for a good video experience. In theory we should be doing the same thing, although we don't enforce it yet.
      Last edited by agd5f; 04 April 2013, 10:28 AM.
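
      To make agd5f's point about older kernels concrete, here is a minimal sketch, not from the thread, of how userspace can check the radeon DRM interface version before attempting to use UVD (new radeon functionality is exposed through DRM minor-version bumps that userspace like Mesa checks for). drmGetVersion() and its fields are real libdrm API; UVD_MIN_DRM_MINOR is a placeholder, since the exact minor version that introduced UVD isn't stated here -- check your Mesa sources for the real value.

      /* uvd_check.c -- sketch: gate UVD use on the kernel's radeon DRM version.
       * Build with: gcc uvd_check.c $(pkg-config --cflags --libs libdrm)
       */
      #include <stdio.h>
      #include <string.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <xf86drm.h>

      #define UVD_MIN_DRM_MINOR 32   /* placeholder -- see note above */

      int main(void)
      {
          int fd = open("/dev/dri/card0", O_RDWR);
          if (fd < 0)
              return 1;

          drmVersionPtr v = drmGetVersion(fd);
          if (v) {
              printf("%s DRM %d.%d.%d\n", v->name,
                     v->version_major, v->version_minor, v->version_patchlevel);
              /* The check is only meaningful for the radeon kernel driver. */
              if (strcmp(v->name, "radeon") == 0 &&
                  v->version_minor >= UVD_MIN_DRM_MINOR)
                  printf("kernel is new enough to expose UVD\n");
              drmFreeVersion(v);
          }
          close(fd);
          return 0;
      }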



      • the newer RLC ucode works fine with older kernels.
        This was the question. I saw the RLC blob had also changed, so I had to check.

        It's not a limitation, it's a performance requirement. The clocks need to be high enough to provide appropriate memory bandwidth and graphics performance for a good video experience. In theory we should be doing the same thing, although we don't enforce it yet.
        The limitation was not about the forced clock being too high, but about it being too low.

        I.e., say you have both UVD video and something heavy running. On Windows, the UVD clock is forced, and it's not the highest state, which is what you'd want for that other activity.


        But thanks, that answers it: it's not a hw limit, but up to the driver which clock to set. So please just make sure the UVD clock is a minimum, not both a minimum and a maximum.



        • Originally posted by agd5f View Post
          It's not a limitation, it's a performance requirement. The clocks need to be high enough to provide appropriate memory bandwidth and graphics performance for a good video experience. In theory we should be doing the same thing, although we don't enforce it yet.
          High enough? Hmm, from what I read it was more like a downclock. Also, from what I saw in the VGA BIOS tables some years ago (there was a Windows tool where you could modify the GPU and memory clock presets for predefined states; it was useful for clocking down during DOS or BIOS-setup operation with no power-saving driver running), video was normally clocked relatively low.
          I'm asking because I set my profile in Linux to "low", since dynamic PM doesn't seem to use it (only mid and high). But static low worked nicely for me and even gives me 1 W less consumption for the whole box (measured at the wall) than W32 with Catalyst.
          I had hoped I could keep my "low".
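
          For reference, and not part of the original post: with the pre-DPM radeon driver, these static profiles are selected through sysfs. A minimal example, assuming the GPU is card0:

          echo profile > /sys/class/drm/card0/device/power_method
          echo low > /sys/class/drm/card0/device/power_profile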



          • You might be seeing a bug. AFAIK the idea was that the UVD power state would be the lowest allowable when UVD was running, and that a higher state still might be selected if, say, a 3D-intensive app was running. Every hardware generation is a bit different though, so it's hard to generalize.



            • Originally posted by twriter View Post
              We've tested internally on the equivalent embedded (non-consumer) parts so E series should work fine.

              Tim
              I've played with UVD on my E-350 APU and I'm quite happy with the results. I tested with mplayer (mplayer -vo vdpau -vc ffh264vdpau)
              and also with the new VDPAU XBMC code. 1080p movies now play fine.

              I also tested the Adobe Flash plugin with
              OverrideGPUValidation=1
              EnableLinuxHWVideoDecode=1
              options set, and had render *and* decode hardware accelerated! 1080p YouTube now plays nicely, where before I was only able to watch 480p.
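
              For anyone reproducing this, a detail the post doesn't spell out: these Flash Player settings normally live in the system-wide configuration file /etc/adobe/mms.cfg, e.g.:

              # /etc/adobe/mms.cfg -- system-wide Flash Player settings
              OverrideGPUValidation=1
              EnableLinuxHWVideoDecode=1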

              vdpauinfo reports the following capabilities:

              Decoder capabilities:

              name                 level    macbs  width  height
              ---------------------------------------------------
              MPEG1                   16  1048576  16384   16384
              MPEG2_SIMPLE            16  1048576  16384   16384
              MPEG2_MAIN              16  1048576  16384   16384
              H264_BASELINE           16     9216   2048    1152
              H264_MAIN               16     9216   2048    1152
              H264_HIGH               16     9216   2048    1152
              VC1_SIMPLE              16     9216   2048    1152
              VC1_MAIN                16     9216   2048    1152
              VC1_ADVANCED            16     9216   2048    1152
              MPEG4_PART2_SP          16     9216   2048    1152
              MPEG4_PART2_ASP         16     9216   2048    1152
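
              To complement the vdpauinfo listing above, here is a minimal sketch, not from the thread, of how a player can query the same decoder capabilities through the public VDPAU API. Only the H264_HIGH profile is checked, and error handling is pared down:

              /* vdpau_caps.c -- sketch: query H.264 High decoder limits via VDPAU.
               * Build with: gcc vdpau_caps.c -lvdpau -lX11
               */
              #include <stdio.h>
              #include <X11/Xlib.h>
              #include <vdpau/vdpau.h>
              #include <vdpau/vdpau_x11.h>

              int main(void)
              {
                  Display *dpy = XOpenDisplay(NULL);
                  if (!dpy)
                      return 1;

                  VdpDevice dev;
                  VdpGetProcAddress *get_proc;
                  if (vdp_device_create_x11(dpy, DefaultScreen(dpy),
                                            &dev, &get_proc) != VDP_STATUS_OK)
                      return 1;

                  /* VDPAU entry points are fetched through the proc-address table. */
                  VdpDecoderQueryCapabilities *query;
                  if (get_proc(dev, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES,
                               (void **)&query) != VDP_STATUS_OK)
                      return 1;

                  VdpBool ok;
                  uint32_t level, macbs, width, height;
                  if (query(dev, VDP_DECODER_PROFILE_H264_HIGH, &ok,
                            &level, &macbs, &width, &height) == VDP_STATUS_OK && ok)
                      printf("H264_HIGH: level %u, %u macbs, max %ux%u\n",
                             level, macbs, width, height);
                  else
                      printf("H264_HIGH unsupported; fall back to software decode\n");

                  XCloseDisplay(dpy);
                  return 0;
              }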



              • Originally posted by bridgman View Post
                Header files and hardware design are sufficiently different that I don't think you can apply a "header file" ruling to hardware.

                I don't understand why you can't see how programming specifications could reveal information about protected IP, whether it be copyright, patent or trade secret (or any of the other mechanisms which aren't usually relevant to open source driver support).
                Header files reflect the software interface to software libraries and other software ... they don't disclose the design or the code of the libraries themselves. They merely allow other software to call the functions within the library. Header files reveal information such as "to call library function x, use entry point function_x_entry and supply parameters y and z; function x will return result r".

                Header files are to software libraries as programming specifications are to hardware. Hardware programming specifications reveal information such as "to enable feature x, write value y to register z". They do not reveal anything about the design of feature x, or even about the implementation of register z, in the hardware itself.

                It is true that AMD's hardware design is AMD's IP, and this IP is rightfully protected by AMD as a trade secret. That is all well and good, and entirely proper. No-one would expect anything else ... but no-one is asking for details about AMD's implementation of feature x in AMD's hardware designs; all that is asked for is the programming specifications. As you well know, AMD have released quite a number of programming specifications for the 2D and 3D graphics acceleration features of their hardware designs in the past:



                These are programming specifications; they do not reveal AMD's hardware design for 2D and 3D graphics acceleration features. No-one has been able to use this information to make clones of AMD hardware. Don't misunderstand: the open source community is no doubt extremely grateful to AMD for releasing this information. In fact, it is precisely because AMD did release that information that many people who use open source software (myself included) have invested in AMD hardware rather than hardware from other manufacturers. It is AMD's support for open source that drove the purchase decision in the first place. Releasing this information has been a positive for AMD, and it has not hurt AMD at all in terms of "giving away" AMD's IP.

                My question is not meant to criticise AMD at all (indeed, I applaud AMD's open source efforts so far), but rather to enquire why this effort is incomplete. If AMD can release programming specifications for a large set of features within their hardware, with no harm resulting to AMD and no hardware design IP thereby revealed, and indeed with positive purchase decisions resulting from the programming specifications released so far, then what (I am asking) is the problem with the last bit ... to wit, the programming specifications for power management?

                To my mind, I can't see why just this bit is missing. Perhaps I am missing something, but to me it doesn't make any sense for AMD to just leave out the last bit.

                Thank you for your reply insofar as it goes, but I'm afraid it didn't really address the question I raised. Why the delay with the last piece of the programming specifications? I just don't get it. Why would AMD do most of the job of providing programming specifications, discover that no harm comes to AMD as a result, indeed find that they have attracted a following because of these actions, but then just leave out the last bit (power management) ... possibly driving customers to Intel in the future in the process?

                I truly don't get it.
                Last edited by hal2k1; 04 April 2013, 09:46 PM.



                • Seems to me that you are either suggesting that we made a decision to release "everything but power management", or maybe that everything we released so far was easy to do but for some reason we are applying a totally different standard to power management and delaying it as a result.

                  The reality is nothing of the sort. Pretty much everything we released has involved a lot of time and effort, and power management is just one more step. There's no "delay with the last piece"... it's going to end up taking about the same amount of time as other complex blocks we exposed.

                  I realize I may still not be answering your question, but you are asking me to explain a situation that doesn't actually exist. It's like stopping the mailman while he's walking from the second-last house on the street to the last house and asking "why did you decide not to deliver mail to that last house?"

                  Obviously the time scale is different (months rather than seconds) but that's the best analogy I can come up with.

                  If you're just asking "why don't you just release stuff rather than doing a lot of work and taking a lot of time?", that's a different question, and it has nothing to do with power management.

                  Let me know if that's the real question and I'll either try to answer it again or (hopefully) find one of the posts where I answered it before.
                  Last edited by bridgman; 04 April 2013, 11:31 PM.



                    I don't really think it was a time-scale problem so much as an order-of-delivery problem. Power management is much more important than some of the other features that were delivered earlier. Sticking with your analogy, power management wasn't the last house on the street; it was the third house, which got skipped over, and now you're backtracking.

                    However, I do also believe that it was a learning experience for all involved. I don't think AMD will be so naive about power management in the future. I'm convinced that AMD has learned from all the backlash and will make sure newer hardware generations get power management in the order its importance deserves.



                      What do you think the order should have been? The main blocks we covered before UVD and power management were display and the 3D graphics engine -- I'm not sure which of those we could have delayed to let power management happen faster. Releasing updates to those blocks for newer hardware generally involved different resources, so proceeding with one did not significantly delay the other.

                      We had scheduled and were expecting power management to get approved before UVD, but in the end UVD took a bit less time than we expected and power management took a bit more time.

                      I guess we could have shifted focus from the 3D engine around the GL 1.5 - GL 2.0 stage and put more effort into getting the next round of power management started earlier, but I'm not sure that even today people would agree that was the right decision, and it certainly didn't seem that way at the time.

                      Originally posted by duby229 View Post
                      I don't think AMD will be so naive about power management in the future. I'm convinced that AMD has learned from all the backlash and will make sure newer hardware generations get power management in the order its importance deserves.
                      We're talking about exposing additional blocks in the GPU, which are shared across multiple generations of hardware. The first release of a block takes maybe 80% of the total effort, and that work only has to be re-done for newer hardware generations if the block and surrounding logic is totally redesigned.

                      The sequencing options we had were basically to delay new display or 3D graphics features, or to delay UVD another couple of years to make sure it didn't interfere with power management. None of those seemed very attractive. Do you disagree?

                      The option we *did* have (and decided at the time not to take) was improving the interim driver-based power management we have today. We did discuss that quite a bit a couple of years ago, but so far I haven't found a single developer who thinks that spending time on that would have been a good idea. There are probably some non-AMD developers who feel it would have been good for AMD developers to work on it, but the AMD developers didn't think it was a good idea either.
                      Last edited by bridgman; 05 April 2013, 12:45 AM.

