Radeon Driver Enables Full 2D Acceleration For HD 7000


  • #61
    Originally posted by entropy View Post
    Well, indeed, I'm not happy with the progress concerning this book/document.
    But I didn't intend to point at those people who came up with the great idea and now have other priorities.
    It's IMHO just unfortunate that it seems stalled.
    I do think documentation is going to be part of the solution to expanding the development community over time, although it won't show quick results.

    IMO the problem is that the docs keep getting big enough that they can't be maintained with the available people/time, and we have to keep cutting them back to the point where they can be kept current and "clean". Docco on "how each piece of code works" should be in the code, not a separate document -- the purpose of the top level doc needs to be more of an architectural introduction.

    At first glance the latest book does seem to have the right level of detail, which is really promising (although I guess some would say that's just cause it ain't finished yet).



    • #62
      Originally posted by pingufunkybeat View Post
      I wasn't trying to trivialise, I was just going by the information available to me, and it is understandable that I don't have a deep understanding of AMD's internal processes, especially ones dealing with technical and legal review.
      It's just that I've explained this so many times already (although not to you), but the trivial/wrong message continues to propagate a lot faster than the reality. Welcome to the Internet, I guess.



      • #63
        Originally posted by bridgman View Post
        I do think documentation is going to be part of the solution to expanding the development community over time, although it won't show quick results.

        IMO the problem is that the docs keep getting big enough that they can't be maintained with the available people/time, and we have to keep cutting them back to the point where they can be kept current and "clean". Docco on "how each piece of code works" should be in the code, not a separate document -- the purpose of the top level doc needs to be more of an architectural introduction.

        At first glance the latest book does seem to have the right level of detail, which is really promising (although I guess some would say that's just cause it ain't finished yet).
        Well, I agree in large part. The level of detail is perfect, and this book is a very good read so far.
        But there are so many interesting chapter and section stubs...

        BTW, I remember you already answered on that subject.

        Originally posted by bridgman View Post
        There are conflicting views on the value of documentation though, even among the active developers. If you apply a classic triage model to the pool of potential developers and divide them into...

        - those who will learn enough by asking questions & reading code to work effectively without documentation
        - those who wouldn't be able to learn without documentation but would & will become effective developers if given a decent docco starting point
        - those who would read the documentation but still not be able to deal with the complexity of the hardware & code in the time they have available

        ... the big debate is over the size of the middle category, the ones who would be helped by documentation. If the work contributed by that middle group is more than the work it takes existing developers to write and maintain documentation, then the community is ahead. If not, then the community loses.

        The argument against a big push on documentation is that it would mostly help potential part-time developers, and the aggregate work available from the successful ones would be less than the time required to write & maintain the docco because of their part-time nature.

        IMO the solution is to find the right level of detail. I think everyone agrees that documenting where to find the code and how to build/install it is worth doing, so the question is whether documenting one or two more levels of detail would help and where we reach the point of diminishing returns. I suspect we reach that point quite early, i.e. that the optimal point would be maybe 5-10 pages *total* of well-written design docco divided across the major component projects (e.g. a page or two for stack-level plus a page or two for each major component) and kept ruthlessly up to date.

        That would be enough to let potential developers get a lot further into the code quickly, and hopefully enough that they can start tinkering and asking good questions (where "good" in this case means "other developers feel that answering the questions is a good use of their time").

        One of the obvious challenges would be how to avoid the documentation growing past the point where it can be maintained, since documents (and code) tend to grow during periods of enthusiasm then rot when the enthusiasm wanes.

        It would be nice if code and docco automatically got smaller when fewer people were available to work on them, but nobody knows how to do that yet AFAIK.
        Last edited by entropy; 01-03-2013, 12:11 PM.



        • #64
          BTW, how do you measure community interest in features? Is there a public list/poll/etc somewhere?



          • #65
            Originally posted by bridgman View Post
            Agree on UVD, and partially agree on PM (community could have done more but I understand why it didn't seem like a good use of limited time), but the discussion was about HD7xxx general support not PM/UVD and I think we're well past the point where AMD-supplied info is the primary gating factor for HD7xxx.

            We did explicitly exclude UVD at the start of the program, but also said we would work on finding ways to release some support and we are continuing to make progress on that. I really wish people would stop trivializing the situation though... the fact we started writing code internally doesn't mean that is the code we can release, but the process can't really even *start* until we have working code internally.

            PM is a bit of a different story, in the sense that everyone is having to wait for "perfect" (which we knew would be time-consuming and uncertain) simply because "good enough" (improved driver-based PM) didn't come along a year or two ago like we expected. We are pushing ahead with what we hope will be a pretty capable PM solution, but that was never meant to be "the next step" since we knew it would probably take a long time. I did expect PM to play out differently, but I guess that's water under the bridge now.

            With regard to PM and, apparently, the two ways of implementing it, I seem to recall Dave Airlie mentioning to you the issues with atombios. In particular, that atombios was the interface that OSS was supposed to use, but that the interface wasn't actually able to achieve the desired results, seemingly because it wasn't tested. Again, IIRC, he said they tried banging registers but didn't get as far as they wanted.
            Please pardon me if I misremembered this.

            Best/Liam



            • #66
              not to beat a dead horse, but...

              Originally posted by liam View Post
              With regard to PM and, apparently, the two ways of implementing it, I seem to recall Dave Airlie mentioning to you the issues with atombios. In particular, that atombios was the interface that OSS was supposed to use, but that the interface wasn't actually able to achieve the desired results, seemingly because it wasn't tested. Again, IIRC, he said they tried banging registers but didn't get as far as they wanted.
              Please pardon me if I misremembered this.

              Best/Liam
              This is exactly what I mean when I say that the free software driver is hamstrung by the apparent need to keep things secret. I really wish things were different. I have been a loyal AMD customer for 10 years, and I am now in a position where I have influence over purchasing: in the last year and a half, I have deployed roughly 100 AMD computers, some desktops, some laptops, and some servers. At this point, given the performance characteristics of the AMD processors, it is difficult to recommend AMD unless it is a Fusion processor in a laptop or a multi-core processor in a virtualization host. Unfortunately, given the power problems with the free software driver, it is NOT a good choice for a laptop running a free software OS like Linux. AMD really needs to examine their strategy here. I realize I am small potatoes to AMD, but there are undoubtedly more people like me out there.

              -ed.



              • #67
                Originally posted by edsdead View Post
                AMD really needs to examine their strategy here. I realize I am small potatoes to AMD, but there are undoubtedly more people like me out there.
                I think you are actually pretty typical of the customers we are trying to support with the open source efforts.

                The strategy question was pretty simple, unfortunately:

                1. Invest a huge pile of $$ and immediately redesign all of our hardware blocks so that programming information could be released safely. Given the added business that would get us relative to the cost of re-engineering all our IP, we can call that one "quick death".

                2. Release all the information immediately and hope that if/when problems arise we have enough cash to fight or work around them, along with enough other products to keep us going if our GPU sales get impacted. Privately owned companies can do that but publicly-traded companies can't unless they have demonstrably high confidence in their ability to deal with whatever happens.

                3. Prioritize the IP and work through each block separately, releasing as much information as appears "safe" (admittedly a difficult determination). For problematic information, look for ways to use the blocks without exposing that information. As time & budget permits, include open source compatibility as a requirement when new hardware blocks are being designed, making it easier to expose programming information in the future.

                Note that #3 still involves an element of #2, but you are basically managing technical risk to align with the level of business risk you can afford to take rather than saying "what the heck, I can release everything now and afford the risks".

                It might not be obvious, but pretty much everyone in the industry is going with #3 out of necessity, even companies which are big enough that you might think #2 is an option for them. The downside of #3 is that it takes time, although we're fairly far through the process now.
                Last edited by bridgman; 01-05-2013, 10:26 AM.



                • #68
                  Originally posted by bridgman View Post
                  As time & budget permits, include open source compatibility as a requirement when new hardware blocks are being designed, making it easier to expose programming information in the future.
                  How is that coming along?
                  The first time I read this was more than 4 years ago.



                  • #69
                    Originally posted by entropy View Post
                    How is that coming along?
                    The first time I read this was more than 4 years ago.
                    Chip design happens years before chips are released publicly, and even then major redesigns often encompass several generations of evolutionary change, so it takes quite a while for new hardware designs that take open source requirements into account to actually show up in released products. That said, I think things are going pretty well overall.



                    • #70
                      Hi Alex,

                      thanks for replying.

                      Originally posted by agd5f View Post
                      Chip design happens years before chips are released publicly, and even then major redesigns often encompass several generations of evolutionary change, so it takes quite a while for new hardware designs that take open source requirements into account to actually show up in released products.
                      Sure. Any ETA for the first products shipping with improvements in that regard?
                      I know you can't reveal details.

                      Originally posted by agd5f View Post
                      That said, I think things are going pretty well overall.
                      Sounds good. In the end this route seems to be the only way to expose (almost) all ASIC internals to the FOSS world, I guess.



                      • #71
                        Originally posted by bridgman View Post
                        As time & budget permits, include open source compatibility as a requirement when new hardware blocks are being designed, making it easier to expose programming information in the future.
                        Originally posted by entropy View Post
                        How is that coming along?
                        The first time I read this was more than 4 years ago.
                        Objectively speaking, it sounds like damage reduction to me.



                        • #72
                          Originally posted by entropy View Post
                          Any ETA for the first products shipping with improvements in that regard?
                          I know you can't reveal details.
                          We're still trying to expose support for older hardware as well, so it's probably best if we don't say anything.



                          • #73
                            reading between the lines...

                            Originally posted by bridgman View Post
                            2. Release all the information immediately and hope that if/when problems arise we have enough cash to fight or work around them, along with enough other products to keep us going if our GPU sales get impacted. Privately owned companies can do that but publicly-traded companies can't unless they have demonstrably high confidence in their ability to deal with whatever happens.

                            3. Prioritize the IP and work through each block separately, releasing as much information as appears "safe" (admittedly a difficult determination). For problematic information, look for ways to use the blocks without exposing that information. As time & budget permits, include open source compatibility as a requirement when new hardware blocks are being designed, making it easier to expose programming information in the future.
                            So reading between the lines, would it be fair to say that in the current IP/patent climate, every block has to be reviewed to see if there is an IP liability in releasing information about it? Is the problem that there are so many trivial patents that it is difficult to create something new without infringing, either unknowingly or otherwise?

                            If this is the case, it seems to suggest that all of us should be focusing on patent reform. The patent system was intended to encourage both innovation and the publication of that innovation. This situation sounds seriously dysfunctional. What can we do about it?

                            -ed.



                            • #74
                              Originally posted by bridgman View Post
                              We're still trying to expose support for older hardware as well, so it's probably best if we don't say anything.
                              You're spoiling the party.
                              It's not what I wanted to hear.

                              That said, the information that you haven't completely given up on older ASICs is very welcome.



                              • #75
                                For reference, the USPTO's budget is based on the number of patents/trademarks granted. Their attitude is, when in doubt, issue the patent and then let the courts sort it out. Not exactly the innovation resource our founders had in mind...

