"Ask ATI" dev thread

  • Originally posted by Panix View Post
    Are you serious? Are you not reading the threads in the AMD/ATI Linux section? It's not just 'lack of features' complaints; there are issues like black windows and other problems. No responses to those from AMD/ATI.

    Are those bugs or something else? I don't want to be one of those users, so I choose to question these issues before I spend the money!

    I have gone to the Fedora, Ubuntu and OpenSUSE forums, and some of the complaints are the same. It seems the issues still exist.
    We are obviously not reading the same forums. There was a fair amount of discussion after the new 2D code was enabled, understanding where bugs were happening, asking for good bug reports, and explaining how to force the older XAA code in cases where the new code was not working well. If you are expecting a personal response to every post about every bug on every forum, then I don't think any of the hardware vendors are going to make you happy.
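
    (For reference, that kind of fallback typically amounts to an Option line in the Device section of xorg.conf. The snippet below is only an illustration using the open source radeon driver's well-documented AccelMethod switch; the fglrx option discussed in those threads had its own name, so check those threads for the exact spelling.)

        Section "Device"
            Identifier "ATI Graphics"
            Driver     "radeon"
            # Fall back to the older XAA 2D acceleration path instead of the newer code
            Option     "AccelMethod" "XAA"
        EndSection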

    You may also be expecting a level of "responsiveness" from the binary driver which is only common in the open source world. We put out a new release each month, and each release includes a number of fixes / enhancements. The QA cycle for each release overlaps with the development cycle of the next release, so fixes for user issues encountered with release N are usually able to appear in release N+2 at the earliest, and typically later than that. You seem to be expecting "building from open source git master" turnaround times, saying "gee, it's been a couple of months and ohmigod the problem is still there, something must be terribly wrong !!".

    Originally posted by Panix View Post
    It's sad the resources seem to be so low. A shame good hardware is wasted.
    Yeah, it's a shame that we are constantly improving the driver and that the new driver code sometimes has problems in certain scenarios. It would be so much better if we didn't do that, right ? Do you remember *why* the new accel code was implemented in the first place (X server patch was removed which made performance suck with otherwise fine XAA acceleration when running a compositor) ?

    Originally posted by Panix View Post
    I suppose the aim is for optimal open source drivers, as far as the licensing/copyright allows? But that is certainly some time to come, as they're also said not to be usable yet. For the R800 cards, think of someone who spent $300+ on an HD 5850: they have issues in the binary driver and can't use the FOSS driver yet because 3D is not available, or something to that effect. It's acceptable that the FOSS drivers are not ready yet, as that takes time, but for the binary drivers to be shoddy since last Sept. '09 is incredible, when the Windoze drivers have worked fine (or at least acceptably) for eons!
    Panix, here's my problem. You don't seem to actually read anyone's responses to you, and every new post you make asks the same questions that have already been answered multiple times. After a while people are going to ignore you and stop answering your questions unless you start making an effort to listen.

    • Originally posted by Panix View Post
      This is also the 'ask an AMD dev' section so I'm not even sure why you're objecting to my last post. They can answer or not, right?
      This is actually a "dead thread" that was started for a planned article (Michael working with Matthew Tippett), collecting questions for Matthew and others to answer. The proposed article more or less died when Matthew left AMD, and the thread should probably be un-stickied to avoid confusion.

      • Observing the number of replies, this does not seem like a dead thread.

        • Yeah, perhaps "undead" would be more appropriate

          • Originally posted by bridgman View Post
            You may also be expecting a level of "responsiveness" from the binary driver which is only common in the open source world. We put out a new release each month, and each release includes a number of fixes / enhancements. The QA cycle for each release overlaps with the development cycle of the next release, so fixes for user issues encountered with release N are usually able to appear in release N+2 at the earliest, and typically later than that. You seem to be expecting "building from open source git master" turnaround times, saying "gee, it's been a couple of months and ohmigod the problem is still there, something must be terribly wrong !!".

            Yeah, it's a shame that we are constantly improving the driver and that the new driver code sometimes has problems in certain scenarios. It would be so much better if we didn't do that, right ? Do you remember *why* the new accel code was implemented in the first place (X server patch was removed which made performance suck with otherwise fine XAA acceleration when running a compositor) ?

            Panix, here's my problem. You don't seem to actually read anyone's responses to you, and every new post you make asks the same questions that have already been answered multiple times. After a while people are going to ignore you and stop answering your questions unless you start making an effort to listen.
            I obviously read the responses. You stated that the two teams work independently, and I assume part of the reason is that there is information the other team shouldn't have. Another reason might be that the material or information needed is different and not interchangeable?

            If the two teams are independent, how come the development progress seems so similar? The binary driver's 3D support has a reputation for constantly being broken. It's only the open source zealots who don't care so much about that. I figure the binary driver should be at least up to the Windows state, or close to it. Does ATI/AMD develop their binary driver differently than Nvidia does? Is there a reason for such a discrepancy? I guess this question will be ignored if I'm being ignored for such questions nowadays. I'm just trying to understand why there's this huge gap. I don't dispute there have been improvements; I'm wondering why the progress takes such a lengthy period of time. I'm not talking about the open source side here. I understand the obstacles and the need for community involvement/feedback with that.

            Yes, I'm reading about some progress in the Gallium3D driver, so is this development slowing down the overall progress because it's a different method? Or?

            Anyway, I just wanted a better understanding but if my questions are being too repetitive, I'll step aside (regarding questions about the video drivers).

            • Originally posted by Panix View Post
              I obviously read the responses. You stated that the two teams work independently, and I assume part of the reason is that there is information the other team shouldn't have. Another reason might be that the material or information needed is different and not interchangeable?
              The main reason is that the two driver stacks are plain different, so even if the hardware itself is the same, the code they're having to write is sufficiently different that working together would be chaotic.

              Aside from that, I would bet that there are patent issues as well, or trade secret issues. AMD-specific open source graphics driver code that comes from AMD developers goes through a legal review process before it is released to the general public (presumably, code that is agnostic to any particular hardware driver does not need review, or less stringent review). The open source drivers are supposed to be a "clean room" implementation, using documentation that has been released to the public, thereby ensuring that none of the trade secrets are accidentally leaked (because the AMD open source devs don't know about them!)

              And then there's market share. It doesn't make business sense to devote any portion of their proprietary driver teams toward the open source drivers, because there's much, much less demand for them. The users that bring them revenue don't insist on freedom; the workstation graphics guys (the likes of Pixar, AutoCAD users, etc) are completely indifferent to whether their operating system or drivers are open source. They just want it to work, and to work well, which is why they are using Linux in the first place. In the consumer market, most of their customers run Windows or Mac, so the financial incentive to develop open source drivers is simply absent.

              Originally posted by Panix View Post
              If the two teams are independent, how come the development progress seems so similar? The binary driver's 3D support has a reputation for constantly being broken. It's only the open source zealots who don't care so much about that. I figure the binary driver should be at least up to the Windows state, or close to it.
              It may have a bad reputation, but I'm using it right now and it's not broken. It has bugs and a few minor performance issues, but nothing that really interrupts my work. I use two OpenGL 2.1 applications (Savage 2 and Second Life) on a regular basis, and the performance is what I would expect. Both 2d and 3d rendering are mostly correct and performant, except that after a long uptime (days) the 2d rendering in Firefox slows down a bit -- it may be a Firefox issue though.

              And where do you get off saying that the "open source zealots" don't care about whether their driver is broken or not? People who insist on open source drivers most certainly do care about stability, performance, and features. It's just that we've stopped expecting the features to be there, because they simply aren't. We can still insist on stability in any case, and we can still expect good performance for the features that are implemented; by and large, these two goals at least can be (and are being) attained in the open source stack. Getting Mesa to correctly render an arbitrary OpenGL 2.1 app on r800 is still a ways out, but people who insist on freedom are either stuck with fglrx until the open source drivers are ready; or, they're just doing without proper 3d accel until then. That doesn't mean we don't "care" about these things; it's just that we have to do something else than sit around expecting the features when they aren't there.

              Originally posted by Panix View Post
              Does ATI/AMD develop their binary driver differently than Nvidia does? Is there a reason for such a discrepancy? I guess this question will be ignored if I'm being ignored for such questions nowadays. I'm just trying to understand why there's this huge gap. I don't dispute there have been improvements; I'm wondering why the progress takes such a lengthy period of time. I'm not talking about the open source side here. I understand the obstacles and the need for community involvement/feedback with that.
              From what I can tell, ATI's proprietary driver became more similar to Nvidia's at some point in the mid-2000s when they unified their driver core across all their supported platforms. Since then, the ATI proprietary driver core is more or less a write once, compile anywhere affair; they ship 95% of the same code (albeit compiled differently for each platform) for the Windows and Linux drivers at least. This is, from a distance, similar to the process that Nvidia goes through. Now, in terms of the actual implementation of their stack, I am sure that the two companies have many uncanny similarities (some accidental, others intentional), and many differences which have arisen due to the two companies being so secretive and not sharing information (so they just wind up doing things differently).

              Originally posted by Panix View Post
              Yes, I'm reading about some progress in the Gallium3D driver, so is this development slowing down the overall progress because it's a different method? Or?
              The Gallium3d driver is part of the open source stack -- it's confusing that you are now discussing this when you devoted the rest of your article to the proprietary driver. I can tell you with 100% certainty that Gallium3d's existence (or non-existence) has had exactly zero impact upon the fglrx proprietary driver. They are as separate as separate gets. But on the open source side, yes, there has been a division of labor between the "classic" mesa drivers and the Gallium drivers. Having people working on both classic and gallium drivers for the same hardware has reduced the amount of able manpower that could be devoted to a single, correct, ideal implementation. Most people acknowledge that the ideal implementation (or as close as we're gonna get) is with Gallium, yet there continues to be intensive work on the classic drivers.

              Why, you ask? Well, because it's been a period of transition. Gallium is still pretty new compared to the classic mesa architecture. Gallium went through years of having its API designed and re-designed; I watched the repos go from gallium 0.1, to 0.2, to 0.3, to 0.4 since 2007 or so. Each of these was a major rework of the architecture to try and make everyone happy. I doubt they're "done" with Gallium 0.4, but it's at least serviceable enough for people to start making real progress on drivers.

              Gallium 0.4, then, is a pretty new thing, and it's clear that the major stakeholders (Intel, ATI, the Nouveau guys) haven't really invested much time into writing Gallium hardware drivers until 0.4. This is called churn -- when existing code is being dug up, torn out, and rewritten. There has been an awful lot of churn in open source graphics land, both the classic and the gallium stack, since 2005 or so.

              Churn is great for progress overall because it makes the architecture better positioned to have good drivers written on top of it. But churn is the antithesis of user-visible progress. Major churn, the likes of which we've seen with DRI2, KMS and Gallium, de-stabilizes the whole stack, and forces significant rewrites of hardware-specific code. So in some limited sense, this type of churn results in us going backwards in the short term, in order to enable us to go even further forwards in the long term.

              Why do we need churn? Because writing graphics drivers is hard. Coming up with a framework upon which three or more different hardware vendors can implement a series of vastly different graphics chips is hard. Not only is it a really hard problem, but there aren't very many people working on it. The proprietary driver camps at Nvidia and ATI have it much easier, because (1) they only have to design for their own company's hardware, and (2) they have many, many times the manpower dedicated full-time to working on the driver.

              • Originally posted by allquixotic View Post
                The main reason is that the two driver stacks are plain different, so even if the hardware itself is the same, the code they're having to write is sufficiently different that working together would be chaotic.

                Aside from that, I would bet that there are patent issues as well, or trade secret issues. AMD-specific open source graphics driver code that comes from AMD developers goes through a legal review process before it is released to the general public (presumably, code that is agnostic to any particular hardware driver does not need review, or less stringent review). The open source drivers are supposed to be a "clean room" implementation, using documentation that has been released to the public, thereby ensuring that none of the trade secrets are accidentally leaked (because the AMD open source devs don't know about them!)
                I appreciate this explanation! I assumed as much, but it's well explained!

                Originally posted by allquixotic View Post
                And then there's market share. It doesn't make business sense to devote any portion of their proprietary driver teams toward the open source drivers, because there's much, much less demand for them. The users that bring them revenue don't insist on freedom; the workstation graphics guys (the likes of Pixar, AutoCAD users, etc) are completely indifferent to whether their operating system or drivers are open source. They just want it to work, and to work well, which is why they are using Linux in the first place. In the consumer market, most of their customers run Windows or Mac, so the financial incentive to develop open source drivers is simply absent.
                Are they satisfied with the binary driver, then? I guess the architecture is quite different from, say, Evergreen? So, resources are being invested in supporting these FireGL cards in particular?

                Originally posted by allquixotic View Post
                It may have a bad reputation, but I'm using it right now and it's not broken.

                And where do you get off saying that the "open source zealots" don't care about whether their driver is broken or not? People who insist on open source drivers most certainly do care about stability, performance, and features. It's just that we've stopped expecting the features to be there, because they simply aren't.... or, they're just doing without proper 3d accel until then. That doesn't mean we don't "care" about these things; it's just that we have to do something else than sit around expecting the features when they aren't there.
                I meant that they don't object much about whether the R800 Catalyst drivers are up to speed, since the FOSS drivers are their priority.

                Originally posted by allquixotic View Post
                Since then, the ATI proprietary driver core is more or less a write once, compile anywhere affair; they ship 95% of the same code (albeit compiled differently for each platform) for the Windows and Linux drivers at least. This is, from a distance, similar to the process that Nvidia goes through. Now, in terms of the actual implementation of their stack, I am sure that the two companies have many uncanny similarities (some accidental, others intentional), and many differences which have arisen due to the two companies being so secretive and not sharing information (so they just wind up doing things differently).
                The impression is that Nvidia does a better overall job of implementing the binary driver in Linux than ATI does. If the two companies each have 95% of the same code at their disposal, there seems to be a disproportionate result when you compare the two via benchmarks, user experience, etc. I was just curious what the reason was.

                Originally posted by allquixotic View Post
                The Gallium3d driver is part of the open source stack -- it's confusing that you are now discussing this when you devoted the rest of your article to the proprietary driver. I can tell you with 100% certainty that Gallium3d's existence (or non-existence) has had exactly zero impact upon the fglrx proprietary driver. They are as separate as separate gets.
                I didn't express my inquiry well. I couldn't edit my post, so I had to wait for a reply. I was making a contrast between:
                A) ATI binary and open source
                B) ATI binary and Nvidia binary

                I made a few comparisons between ATI binary/FOSS and the Nvidia driver, but mostly A) and B).

                The contrast I was trying to make regarding Gallium3d was between the progress of 3D in the OSS stack and 3D in the ATI binary driver. Strides have been made in both, but I figured the more prominent strides would come via the binary driver, yet it is still perceived as extremely lacking compared to the Nvidia driver. I just thought the contrast was intriguing.

                Originally posted by allquixotic View Post
                Because writing graphics drivers is hard. Coming up with a framework upon which three or more different hardware vendors can implement a series of vastly different graphics chips is hard. Not only is it a really hard problem, but there aren't very many people working on it. The proprietary driver camps at Nvidia and ATI have it much easier, because (1) they only have to design for their own company's hardware, and (2) they have many, many times the manpower dedicated full-time to working on the driver.
                I have few concerns about the OSS drivers. I understand the problems and give due props to all the developers and the ideal. I don't think any of my complaints have ever been directed at open source drivers. I might question how many resources are invested compared to the assertions and praise that go on, which just means I would like more done. But my concerns are with the binary driver, because they are using Windows code, or at least, if you compare Nvidia-Windows to Nvidia-Linux, there seems to be less of a gap in user capabilities. They also have workstation cards. I guess the resources are a problem since they're a smaller company? But they keep up in Windows driver development, so I thought the gap shouldn't be as large.

                Anyway, thanks for the reply! Truly informative and interesting. It's much appreciated!

                • Originally posted by Panix View Post
                  I obviously read the responses. You stated that the two teams work independently, and I assume part of the reason is that there is information the other team shouldn't have. Another reason might be that the material or information needed is different and not interchangeable?
                  I don't really understand the question. The fglrx driver is developed by a team of AMD employees, while the open source driver is developed by a team of developers who mostly do *not* work for AMD. The two drivers also have completely different target markets and different code sharing objectives. The open source driver shares code with other open source drivers (eg Intel, Nouveau, llvmpipe etc..) while the fglrx driver shares code with other AMD proprietary drivers.

                  • Originally posted by bridgman View Post
                    I don't really understand the question. The fglrx driver is developed by a team of AMD employees, while the open source driver is developed by a team of developers who mostly do *not* work for AMD. The two drivers also have completely different target markets and different code sharing objectives. The open source driver shares code with other open source drivers (eg Intel, Nouveau, llvmpipe etc..) while the fglrx driver shares code with other AMD proprietary drivers.
                    And we all share the pain of buggy or incomplete drivers.

                    • Reviving this thread...

                      I've read about AMD's new Windows drivers enabling a shader-based AA solution (MLAA).

                      I've also read the research on it; it basically works everywhere regardless of rendering tech. It also looks about as good as 8x MSAA to my eyes, in some parts better, in some parts worse. It's also many times faster than MSAA.
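
                      For anyone wondering what "shader-based AA" means in practice, here is a rough sketch (Python/NumPy on the CPU, not AMD's actual shader) of the luma edge-detection step that morphological AA filters like MLAA start from. Because it only reads the final rendered image, it is independent of the rendering API, which is why it works everywhere:

                          import numpy as np

                          def luma(rgb):
                              # Rec. 601 luma from an RGB image with values in [0, 1]
                              return rgb @ np.array([0.299, 0.587, 0.114])

                          def mlaa_edges(rgb, threshold=0.1):
                              # First pass of an MLAA-style filter: flag pixels whose luma
                              # differs noticeably from their left and top neighbours.
                              # Later passes classify edge shapes and blend along them.
                              L = luma(rgb)
                              d_left = np.abs(L - np.roll(L, 1, axis=1))
                              d_top = np.abs(L - np.roll(L, 1, axis=0))
                              return (d_left > threshold) | (d_top > threshold)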

                      I believe it would work on cards r600 and up, even though the Win driver only enables it for hd6xxx.

                      Would AMD consider open-sourcing the shader?
