"Ask ATI" dev thread


  • Yeah, perhaps "undead" would be more appropriate

    Comment


    • Originally posted by bridgman View Post
      You may also be expecting a level of "responsiveness" from the binary driver which is only common in the open source world. We put out a new release each month, and each release includes a number of fixes / enhancements. The QA cycle for each release overlaps with the development cycle of the next release, so fixes for user issues encountered with release N are usually able to appear in release N+2 at the earliest, and typically later than that. You seem to be expecting "building from open source git master" turnaround times, saying "gee, it's been a couple of months and ohmigod the problem is still there, something must be terribly wrong!!".

      Yeah, it's a shame that we are constantly improving the driver and that the new driver code sometimes has problems in certain scenarios. It would be so much better if we didn't do that, right? Do you remember *why* the new accel code was implemented in the first place (an X server patch was removed, which made performance suck with the otherwise fine XAA acceleration when running a compositor)?

      Panix, here's my problem. You don't seem to actually read anyone's responses to you, and every new post you make asks the same questions that have already been answered multiple times. After a while people are going to ignore you and stop answering your questions unless you start making an effort to listen.
      I obviously read the responses. You stated that the two teams work independently, and I assume part of the reason is that there is information the other team shouldn't have. Another reason might be that the material or information needed is different and not interchangeable?

      If the two teams are independent, how come it seems like the developmental progress is so similar? The binary driver's 3D support has a reputation for constantly being broken. It's only the open source zealots that don't care so much about that. I figure the binary driver should be at least close to the state of the Windows driver. Does ATI/AMD develop their binary driver differently than Nvidia? Is there a reason for such a discrepancy? I guess this question will be ignored if I'm being ignored for such questions nowadays. I'm just trying to understand why there's this huge gap. I don't dispute that there are improvements; I'm wondering why the progress takes such a lengthy period of time. I'm not talking about the open source side here. I understand the obstacles and the need for community involvement/feedback with that.

      Yes, I'm reading about some progress in the Gallium3D driver, so is this development slowing down the overall progress because it's a different approach? Or?

      Anyway, I just wanted a better understanding, but if my questions are being too repetitive, I'll step aside (regarding questions about the video drivers).

      Comment


      • Originally posted by Panix View Post
        I obviously read the responses. You stated that the two teams work independently, and I assume part of the reason is that there is information the other team shouldn't have. Another reason might be that the material or information needed is different and not interchangeable?
        The main reason is that the two driver stacks are plain different, so even if the hardware itself is the same, the code they're having to write is sufficiently different that working together would be chaotic.

        Aside from that, I would bet that there are patent issues as well, or trade secret issues. AMD-specific open source graphics driver code that comes from AMD developers goes through a legal review process before it is released to the general public (presumably, code that is agnostic to any particular hardware driver needs no review, or a less stringent one). The open source drivers are supposed to be a "clean room" implementation, using documentation that has been released to the public, thereby ensuring that none of the trade secrets are accidentally leaked (because the AMD open source devs don't know about them!).

        And then there's market share. It doesn't make business sense to devote any portion of their proprietary driver teams toward the open source drivers, because there's much, much less demand for them. The users that bring them revenue don't insist on freedom; the workstation graphics guys (the likes of Pixar, AutoCAD users, etc.) are completely indifferent to whether their operating system or drivers are open source. They just want it to work, and to work well, which is why they are using Linux in the first place. In the consumer market, most of their customers run Windows or Mac, so the financial incentive to develop open source drivers is simply absent.

        Originally posted by Panix View Post
        If the two teams are independent, how come it seems like the developmental progress is so similar? The binary driver's 3D support has a reputation for constantly being broken. It's only the open source zealots that don't care so much about that. I figure the binary driver should be at least close to the state of the Windows driver.
        It may have a bad reputation, but I'm using it right now and it's not broken. It has bugs and a few minor performance issues, but nothing that really interrupts my work. I use two OpenGL 2.1 applications (Savage 2 and Second Life) on a regular basis, and the performance is what I would expect. Both 2D and 3D rendering are mostly correct and performant, except that after a long uptime (days) the 2D rendering in Firefox slows down a bit -- it may be a Firefox issue though.

        And where do you get off saying that the "open source zealots" don't care about whether their driver is broken or not? People who insist on open source drivers most certainly do care about stability, performance, and features. It's just that we've stopped expecting the features to be there, because they simply aren't. We can still insist on stability in any case, and we can still expect good performance for the features that are implemented; by and large, these two goals at least can be (and are being) attained in the open source stack. Getting Mesa to correctly render an arbitrary OpenGL 2.1 app on r800 is still a ways out, but people who insist on freedom are either stuck with fglrx until the open source drivers are ready, or they're just doing without proper 3D accel until then. That doesn't mean we don't "care" about these things; it's just that we have to do something other than sit around expecting the features when they aren't there.

        Originally posted by Panix View Post
        Does ATI/AMD develop their binary driver differently than Nvidia? Is there a reason for such a discrepancy? I guess this question will be ignored if I'm being ignored for such questions nowadays. I'm just trying to understand why there's this huge gap. I don't dispute that there are improvements; I'm wondering why the progress takes such a lengthy period of time. I'm not talking about the open source side here. I understand the obstacles and the need for community involvement/feedback with that.
        From what I can tell, ATI's proprietary driver became more similar to Nvidia's at some point in the mid-2000s when they unified their driver core across all their supported platforms. Since then, the ATI proprietary driver core is more or less a write once, compile anywhere affair; they ship 95% of the same code (albeit compiled differently for each platform) for the Windows and Linux drivers at least. This is, from a distance, similar to the process that Nvidia goes through. Now, in terms of the actual implementation of their stack, I am sure that the two companies have many uncanny similarities (some accidental, others intentional), and many differences which have arisen due to the two companies being so secretive and not sharing information (so they just wind up doing things differently).

        Originally posted by Panix View Post
        Yes, I'm reading about some progress in the Gallium3D driver, so is this development slowing down the overall progress because it's a different approach? Or?
        The Gallium3d driver is part of the open source stack -- it's confusing that you are now discussing this when you devoted the rest of your post to the proprietary driver. I can tell you with 100% certainty that Gallium3d's existence (or non-existence) has had exactly zero impact upon the fglrx proprietary driver. They are as separate as separate gets. But on the open source side, yes, there has been a division of labor between the "classic" Mesa drivers and the Gallium drivers. Having people working on both classic and Gallium drivers for the same hardware has reduced the amount of able manpower that could be devoted to a single, correct, ideal implementation. Most people acknowledge that the ideal implementation (or as close as we're gonna get) is with Gallium, yet there continues to be intensive work on the classic drivers.

        Why, you ask? Well, because it's been a period of transition. Gallium is still pretty new compared to the classic mesa architecture. Gallium went through years of having its API designed and re-designed; I watched the repos go from gallium 0.1, to 0.2, to 0.3, to 0.4 since 2007 or so. Each of these was a major rework of the architecture to try and make everyone happy. I doubt they're "done" with Gallium 0.4, but it's at least serviceable enough for people to start making real progress on drivers.

        Gallium 0.4, then, is a pretty new thing, and it's clear that the major stakeholders (Intel, ATI, the Nouveau guys) hadn't really invested much time into writing Gallium hardware drivers until 0.4. This kind of rework is called churn -- when existing code is being dug up, torn out, and rewritten. There has been an awful lot of churn in open source graphics land, in both the classic and the Gallium stacks, since 2005 or so.

        Churn is great for progress overall because it makes the architecture better positioned to have good drivers written on top of it. But churn is the antithesis of user-visible progress. Major churn, the likes of which we've seen with DRI2, KMS and Gallium, de-stabilizes the whole stack, and forces significant rewrites of hardware-specific code. So in some limited sense, this type of churn results in us going backwards in the short term, in order to enable us to go even further forwards in the long term.

        Why do we need churn? Because writing graphics drivers is hard. Coming up with a framework upon which three or more different hardware vendors can implement a series of vastly different graphics chips is hard. Not only is it a really hard problem, but there aren't very many people working on it. The proprietary driver camps at Nvidia and ATI have it much easier, because (1) they only have to design for their own company's hardware, and (2) they have many, many times the manpower dedicated full-time to working on the driver.
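
        To make the "framework for many vendors" idea concrete, here is a deliberately simplified C sketch, loosely in the spirit of Gallium's pipe interface. The names here are hypothetical (the real structs, such as pipe_context, are far richer); the point is just that a state tracker is written once against a table of function pointers, and each vendor only fills in that table for its own chips.

        /* Toy sketch of a vendor-neutral driver interface, loosely in the
         * spirit of Gallium's pipe_context -- names are hypothetical. */
        #include <stdio.h>

        struct pipe_ctx {
            const char *name;
            void (*set_shader)(struct pipe_ctx *ctx, const void *bytecode);
            void (*draw)(struct pipe_ctx *ctx, unsigned first, unsigned count);
            void (*flush)(struct pipe_ctx *ctx);
        };

        /* A state tracker (GL, video, ...) is written once against the table... */
        static void state_tracker_draw_frame(struct pipe_ctx *ctx)
        {
            ctx->set_shader(ctx, NULL);   /* bind a (dummy) shader */
            ctx->draw(ctx, 0, 3);         /* draw one triangle     */
            ctx->flush(ctx);              /* submit to the hardware */
        }

        /* ...while each vendor supplies only its own implementation. */
        static void r600_set_shader(struct pipe_ctx *c, const void *b)
        { (void)b; printf("%s: upload shader\n", c->name); }
        static void r600_draw(struct pipe_ctx *c, unsigned f, unsigned n)
        { (void)f; printf("%s: draw %u vertices\n", c->name, n); }
        static void r600_flush(struct pipe_ctx *c)
        { printf("%s: flush command stream\n", c->name); }

        int main(void)
        {
            struct pipe_ctx r600 = { "r600g", r600_set_shader, r600_draw, r600_flush };
            state_tracker_draw_frame(&r600);
            return 0;
        }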

        Comment


        • Originally posted by allquixotic View Post
          The main reason is that the two driver stacks are plain different, so even if the hardware itself is the same, the code they're having to write is sufficiently different that working together would be chaotic.

          Aside from that, I would bet that there are patent issues as well, or trade secret issues. AMD-specific open source graphics driver code that comes from AMD developers goes through a legal review process before it is released to the general public (presumably, code that is agnostic to any particular hardware driver needs no review, or a less stringent one). The open source drivers are supposed to be a "clean room" implementation, using documentation that has been released to the public, thereby ensuring that none of the trade secrets are accidentally leaked (because the AMD open source devs don't know about them!).
          I appreciate this explanation! I assumed as much, but it's well explained!

          Originally posted by allquixotic View Post
          And then there's market share. It doesn't make business sense to devote any portion of their proprietary driver teams toward the open source drivers, because there's much, much less demand for them. The users that bring them revenue don't insist on freedom; the workstation graphics guys (the likes of Pixar, AutoCAD users, etc.) are completely indifferent to whether their operating system or drivers are open source. They just want it to work, and to work well, which is why they are using Linux in the first place. In the consumer market, most of their customers run Windows or Mac, so the financial incentive to develop open source drivers is simply absent.
          Are they satisfied with the binary driver then? I guess the architecture is quite different from, say, Evergreen? So, resources are being invested in supporting these FireGL cards in particular?

          Originally posted by allquixotic View Post
          It may have a bad reputation, but I'm using it right now and it's not broken.

          And where do you get off saying that the "open source zealots" don't care about whether their driver is broken or not? People who insist on open source drivers most certainly do care about stability, performance, and features. It's just that we've stopped expecting the features to be there, because they simply aren't.... or they're just doing without proper 3D accel until then. That doesn't mean we don't "care" about these things; it's just that we have to do something other than sit around expecting the features when they aren't there.
          I meant that they don't object much about whether the R800 Catalyst drivers are up to speed, since the FOSS drivers are their priority.

          Originally posted by allquixotic View Post
          Since then, the ATI proprietary driver core is more or less a write once, compile anywhere affair; they ship 95% of the same code (albeit compiled differently for each platform) for the Windows and Linux drivers at least. This is, from a distance, similar to the process that Nvidia goes through. Now, in terms of the actual implementation of their stack, I am sure that the two companies have many uncanny similarities (some accidental, others intentional), and many differences which have arisen due to the two companies being so secretive and not sharing information (so they just wind up doing things differently).
          The impression is that Nvidia does a better overall job at implementing the binary driver in Linux than ATI does. If 95% of the code is shared between platforms, there seems to be a disproportionate result if you compare the two via benchmarks, user experience, etc. I was just curious what the reason was.

          Originally posted by allquixotic View Post
          The Gallium3d driver is part of the open source stack -- it's confusing that you are now discussing this when you devoted the rest of your article to the proprietary driver. I can tell you with 100% certainty that Gallium3d's existence (or non-existence) has had exactly zero impact upon the fglrx proprietary driver. They are as separate as separate gets.
          I didn't express my inquiry well. I couldn't edit my post, so I had to wait for a reply. I was making a contrast between:
          A) ATI binary and open source
          B) ATI binary and Nvidia binary

          I made a few comparisons between ATI binary/FOSS and the Nvidia driver, but mostly A) and B).

          The contrast regarding Gallium3d I was trying to make was between the progress of 3D in the OSS stack and 3D in the ATI binary driver. There are strides being made in both, but I figured the more prominent strides would come via the binary driver; yet it is still perceived as extremely lacking when compared to the Nvidia driver. I just thought the contrast was intriguing.

          Originally posted by allquixotic View Post
          Because writing graphics drivers is hard. Coming up with a framework upon which three or more different hardware vendors can implement a series of vastly different graphics chips is hard. Not only is it a really hard problem, but there aren't very many people working on it. The proprietary driver camps at Nvidia and ATI have it much easier, because (1) they only have to design for their own company's hardware, and (2) they have many, many times the manpower dedicated full-time to working on the driver.
          I have few concerns about the OSS drivers. I understand the problems and give due props to all the developers and the ideal. I don't think any of my complaints have ever been directed at the open source drivers. I might question how many resources are invested compared to the assertions and praise that go on, which just means I would like more done. But my concerns are with the binary driver, because they are using Windows code, or at least, if you compare Nvidia-Windows and Nvidia-Linux, there seems to be less of a gap in user capabilities. They also have workstation cards. I guess resources are a problem since they're a smaller company? But they keep up in Windows-driver development, so I thought the gap shouldn't be as large.

          Anyway, thanks for the reply! Truly informative and interesting. It's much appreciated!

          Comment


          • Originally posted by Panix View Post
            I obviously read the responses. You stated that the two teams work independently, and I assume part of the reason is that there is information the other team shouldn't have. Another reason might be that the material or information needed is different and not interchangeable?
            I don't really understand the question. The fglrx driver is developed by a team of AMD employees, while the open source driver is developed by a team of developers who mostly do *not* work for AMD. The two drivers also have completely different target markets and different code sharing objectives. The open source driver shares code with other open source drivers (e.g. Intel, Nouveau, llvmpipe, etc.) while the fglrx driver shares code with other AMD proprietary drivers.

            Comment


            • Originally posted by bridgman View Post
              I don't really understand the question. The fglrx driver is developed by a team of AMD employees, while the open source driver is developed by a team of developers who mostly do *not* work for AMD. The two drivers also have completely different target markets and different code sharing objectives. The open source driver shares code with other open source drivers (e.g. Intel, Nouveau, llvmpipe, etc.) while the fglrx driver shares code with other AMD proprietary drivers.
              And we all share the pain of buggy or incomplete drivers.

              Comment


              • Reviving this thread...

                I've read about AMD's new Windows drivers enabling a shader-based AA solution (MLAA).

                I've also read the research on it; it basically works everywhere regardless of rendering tech. It also looks about as good as 8x MSAA to my eyes -- in some parts better, in some parts worse. It's also many times faster than MSAA.

                I believe it would work on cards r600 and up, even though the Win driver only enables it for hd6xxx.

                Would AMD consider open-sourcing the shader?
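
                For anyone curious how the technique works, here is a minimal CPU-side C sketch of MLAA's first stage, luma edge detection. This is only an illustration of the published approach, not AMD's shader; the real implementation runs per-pixel on the GPU and follows up with pattern-classification and blend-weight passes.

                /* Stage 1 of MLAA: mark pixels whose luma differs from the
                   left/top neighbour by more than a threshold. Later passes
                   walk these edges to find U/Z/L shapes and blend across them. */
                #include <math.h>

                /* Rec. 601 luma from an 8-bit RGB pixel. */
                static float luma(const unsigned char *px)
                {
                    return 0.299f * px[0] + 0.587f * px[1] + 0.114f * px[2];
                }

                /* edges[i] gets bit 0 for a left edge, bit 1 for a top edge. */
                void mlaa_detect_edges(const unsigned char *rgb, int w, int h,
                                       float threshold, unsigned char *edges)
                {
                    for (int y = 0; y < h; y++) {
                        for (int x = 0; x < w; x++) {
                            int i = y * w + x;
                            float l = luma(&rgb[i * 3]);
                            unsigned char e = 0;
                            if (x > 0 && fabsf(l - luma(&rgb[(i - 1) * 3])) > threshold)
                                e |= 1; /* vertical edge on the left  */
                            if (y > 0 && fabsf(l - luma(&rgb[(i - w) * 3])) > threshold)
                                e |= 2; /* horizontal edge on the top */
                            edges[i] = e;
                        }
                    }
                }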

                Comment


                • It would be rather awesome to have fast, good quality AA in all apps on Linux, even in apps that don't support AA themselves. Something that could be enabled in .drirc, maybe -- along the lines of the sketch below.
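
                  For illustration, a driconf-style entry might look like this; the "mlaa" option name is purely hypothetical (no such option exists), while vblank_mode is a real option shown for comparison:

                  <driconf>
                      <device screen="0" driver="r600">
                          <application name="Default">
                              <option name="vblank_mode" value="3" />
                              <option name="mlaa" value="true" /> <!-- hypothetical option -->
                          </application>
                      </device>
                  </driconf>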

                  Comment


                  • Would love to see H.264 L4.1 support on Linux.

                    Comment


                    • Unless there are several AA techniques under the acronym MLAA, you could try these sources:

                      http://igm.univ-mlv.fr/~biri/mlaa-gpu/

                      Cleanup and integration into Mesa is left as an exercise for the reader.

                      Comment


                      • Yes, I have checked that out. The shaders do not compile on Mesa, and while my limited shader skills were enough to make them compile, the result was garbage.

                        The basic tech is the same, but the implementations differ. I believe AMD's implementation is one of the better ones.

                        Comment


                        • I'm going to post here a list of features I'd like to see in the next Catalyst drivers:

                          a) Proper H.264 / VC-1 XvBA VA-API acceleration, because since Catalyst 10.7 I'm not able to decode H.264 videos with VA-API acceleration, and some newer cards still have (some) problems using VA-API (by contrast, most modern nVidia cards already have proper video acceleration in Linux, using VDPAU). A quick way to check what the driver actually advertises is sketched after this list.

                          b) Kernel mode-setting, because I think this could give a slight performance advantage to the Catalyst Linux drivers (the Windows drivers have already done their mode-setting in the kernel for a long while...).

                          c) Experimental support for the Wayland display server, because I think it has the potential to replace Xorg in the medium/long term...
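
                          As mentioned in a), here is a minimal libva probe that lists the decode profiles the driver advertises, so you can see whether e.g. VAProfileH264High is actually exposed. This is a sketch using the standard libva API and assumes an X11 display; the vainfo utility reports the same information.

                          /* Build: gcc vaprobe.c -o vaprobe -lva -lva-x11 -lX11 */
                          #include <stdio.h>
                          #include <stdlib.h>
                          #include <va/va.h>
                          #include <va/va_x11.h>
                          #include <X11/Xlib.h>

                          int main(void)
                          {
                              Display *x11 = XOpenDisplay(NULL);
                              if (!x11) { fprintf(stderr, "no X display\n"); return 1; }

                              /* Connect to the VA-API driver and print its version. */
                              VADisplay va = vaGetDisplay(x11);
                              int major, minor;
                              if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS) {
                                  fprintf(stderr, "vaInitialize failed\n");
                                  return 1;
                              }
                              printf("VA-API %d.%d\n", major, minor);

                              /* Ask the driver which profiles it supports. */
                              int max = vaMaxNumProfiles(va);
                              VAProfile *profiles = malloc(max * sizeof(*profiles));
                              int num = 0;
                              vaQueryConfigProfiles(va, profiles, &num);
                              for (int i = 0; i < num; i++)
                                  printf("profile %d%s\n", profiles[i],
                                         profiles[i] == VAProfileH264High ? "  (H.264 High)" : "");

                              free(profiles);
                              vaTerminate(va);
                              XCloseDisplay(x11);
                              return 0;
                          }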

                          Cheers

                          Comment


                          • I'd prefer to see all the features and performance of Catalyst in the free radeon driver.

                            Comment


                            • Bridgman,

                              I'd like to thank you and the rest of the AMD team for working hard to get your Linux/Unix drivers to function at such a good level. I remember struggling for weeks several years ago, trying in vain to get my AMD integrated graphics working properly in Ubuntu.

                              I tried it again on a laptop I inherited, and it worked great the first time! I am amazed. Well done. I wish you and AMD continued success, and will be supporting AMD as much as I can in the future for your commitment to open source.

                              Comment
