AMD Continues Updating Its R500 Documentation


  • #11
    Originally posted by crispy View Post
Unfortunately it's not readily available in Denmark yet... And I also wanted kickass 3D performance!
    You will be waiting for quite a while then.



    • #12
      Originally posted by rohcQaH View Post
      And, of course, you can neither comment on future products nor on DRM issues, so we won't know whether you succeeded until some UVD14-docs start appearing on amd.com, right?
      Pretty much. We are starting to look into the implications of exposing more information about existing UVD implementations, but it's a very time-consuming process and I obviously don't know what the outcome will be until we finish.
      Test signature



      • #13
Windows' market share keeps declining at an ever-growing pace, and Apple is going to seriously harm the PC manufacturers. See dual AMD+Intel CPU MacBooks coming? 'Cause I don't.

In the long run, DRM'd OSes and hardware will die, and Linux will take over what is left of the PC market.

So whether you like it or not, AMD, if you want to be part of the future then you will have to separate the two at some point, or move it to a shader-based software solution.

        But as long as you keep releasing docs it's AMD all the way for me.



        • #14
Well, I think I don't understand how DRM and real-time video decoding are related.

IMHO, when someone wants to pirate a BD, for example, they just rip the video directly from the disc without any (real-time) decoding taking place.



          • #15
There are two largely distinct connections:

            1. In general, we need to make sure that our hardware/software does not become the easiest way (or one of the easier ways) to capture and duplicate protected content.

2. In order to sell our products into most markets we need to comply with agreements which require a certain degree of "robustness" in our decode paths, ensuring that protected video content cannot be intercepted while being processed in our hardware/software stack; this implies that protected video content remains protected (i.e. encrypted) all through the stack. This requirement is independent of (1), i.e. even if there are much easier ways to pirate the content (e.g. ripping directly off a BD disc), that has no impact on our obligations or on the consequences of not meeting those requirements.

That sucks, but it's the price that all hardware vendors pay in order to be able to ship "legal" BD playback solutions, which in turn are a prerequisite for selling into most new PC markets.



            • #16
              I do so much miss the ability to edit

              The blurb above explains why DRM and decode have to be tightly connected, but it's obviously not the whole story.

In principle there is no reason why the two blocks can't be designed so that the programming functions for decode and DRM are completely separated, allowing programming info for decode to be released for use in open source drivers without putting the DRM implementation in other drivers at risk. In practice, however, separating the blocks tends to incur penalties, either in terms of larger die size and product cost, or in terms of more complex programming and higher CPU utilization.

              Historically the degree of programming interdependence has been "luck of the draw", since the ability to use the hardware in an open source driver was not one of the requirements fed into the front of the design process. The good news is that we are now including open source support as a consideration - the bad news is that things like decoding blocks are not redesigned very often, just tweaked, and separating the two functions requires more than just a tweak.



              • #17
                @Bridgeman,

How severe is the impact of DRM hindering the release of documentation? And now that FLOSS is taken into consideration on the drawing board, how do you see this impact changing on the horizon? Do you think it will reach a point of being unmanageable any time soon?

I am not asking for the technical details, although I would like to know them of course, but a simple "bad", "not bad", "good" or "no problem" will do.



                • #18
                  In the specific area of UVD, the impact of DRM is still a problem and I expect it to remain that way for quite a while. There's a chance we may be able to open up the current implementations but I'm still telling everyone to assume that will *not* happen for now.

                  For the rest of the GPU, I guess it averages out to "no problem". It's not getting any easier and (so far) I'm not seeing any hard roadblocks, in the sense that the technical complexity is going up every year but so far we're dealing with it.

                  I wish the new products didn't come along as quickly though



                  • #19
                    Originally posted by chaos386 View Post
                    I may be an evergreen owner, but I'm glad to see that the older cards are still getting some love. It's nice to know I'll still have support when my card is past its prime.
                    2012, the year we make contact (with an actual driver combo that covers Evergreen)???



                    • #20
                      Only if it takes you a couple of years to download it

                      The kernel driver code is in 2.6.35, and Richard is getting close to having enough Mesa bits working to justify an initial code push. Normally we push out 2D acceleration first which means Alex ends up having to hand-assemble all the shaders with big honkin' macros; this time we're going to try doing things the other way round - getting 3D running first then using the shader compiler from that to speed up the 2D work.

