ATI R600g Gains Mip-Map, Face Culling Support

  • #46
    Originally posted by bridgman View Post
    All good questions, although I've answered them all a few times before. Maybe I need to write a book.
    Thanks for your answers. A detailed FAQ would probably be helpful, maybe with a sticky post here pointing to it, so you could then just point people to it and chastise them for not having read it beforehand.

    Originally posted by bridgman View Post
    Somewhere on the order of "a few percent", certainly less than 5%. The primary reason for the performance difference between proprietary and open driver stacks is that the proprietary drivers get maybe 50x the development resources because the work is shared across the entire PC market rather than being specific to one OS.
    Yes, and that's why it would be nice to have all that work benefit the open driver at least partially, if it were somehow possible to arrange.

    Originally posted by bridgman View Post
    On the DRM side, the problem is that we have to protect not only the bits of code which are actually doing DRM-ey things (which are as small as you expect) but also all of the code *below* them (ie between that code and the hardware) in order to protect against attackers interposing code between the DRM-specific bits and the hardware.

    The 3D part of the driver uses many of the same lower-level bits (surface management, memory management, command submission, etc.).
    OK, but what if you released both an open driver without the DRM, and a monolithic blob with DRM?

    This would only be a problem if contracts specifically said that the source code for any part of the whole driver including DRM cannot be publicly released (which seems pretty draconian, but I guess possible).

    Originally posted by bridgman View Post
    Yep, on the 3D side the reasons for not opening the code are more related to competition than to DRM. Two main issues :

    1. We regard our shader compiler technology as part of the "secret sauce" which allows us to use a VLIW hardware approach which, in turn, gives us advantages in performance vs transistor count / die size / cost.
    I thought the idea of nVidia or Intel moving to a VLIW architecture and taking advantage of your ideas/code was considered unrealistic, but evidently you don't think so, and you might be right at that.

    Originally posted by bridgman View Post
    2. If you limit the discussion to our major competitor in the discrete GPU market, we both have roughly the same features and performance but I'm sure our code has clever ideas that our competitor doesn't have and that their code has clever ideas we don't have. If one competitor opens their code while the other does not, that tips the balance subtly (again, probably only a few percent but those few percent are worth a lot of $$).
    Yes, you would face the risk of a hypothetical slight loss of your competitive edge, which could perhaps be compensated by other advantages.

    Another theoretical option would be to attempt to make an agreement with nVidia to both open your drivers, although I suppose that is very unrealistic.

    Originally posted by bridgman View Post
    Yep, there are definitely benefits, but again the number of outside developers who can *and* are likely to work on the drivers is sufficiently small that it's *really* hard to make the costs and benefits work out. We have driver developers with source access working side-by-side with outside ISV developers already so I *think* the largest potential area of gain from opening the driver is already covered.

    Opening code is about a thousand times as cumbersome as a source license agreement, and maybe 10 thousand times as cumbersome as an NDA.
    It's more cumbersome for you, but much less cumbersome for third parties, which means you get more contributions and more developer mindshare. Of course whether this is significant is debatable.

    There is also the Linux market advantage, where a top performing open driver could possibly lead to near total dominance of the Linux enthusiast market, and workstation users might be influenced too once RHEL and other enterprise distributions start advertising better or exclusive support for ATI and Intel cards as opposed to nVidia ones (and the Intel ones are of course useless for the workstation market).

    Right now people seem to still buy nVidia cards for Linux use because the open drivers are not competitive, and they either deem fglrx inferior to nVidia's driver or consider them equivalent and go with nVidia for other reasons.
    An open fglrx might change this and make ATI cards a must, while with the current strategy this won't happen for at least a few years, if at all (i.e. not until Mesa supports the latest OpenGL release and performance is almost that of fglrx).

    Obviously it's a much smaller market than Windows, but being recommended by Linux users might somewhat affect the Windows market too, and the fact that Linux is likely more prevalent among developers and GPU compute users might increase its importance.

    Anyway, I guess you already analyzed this, and decided the possible gains would be lower than the risk of reducing your competitive advantage and the cost of the effort.

    Originally posted by bridgman View Post
    I don't really see much chance of opening up the Windows parts of the code, but we would like to open up some more of the Linux-specific bits over time.
    I think it would be nice to have the closed source 3D userspace optionally work with the open DRM driver, and an open X driver, enhancing them if necessary beforehand (possibly with Linux-specific code from fglrx if applicable).

    This should be doable (except possibly for the video DRM stuff, which I think you could just not support in this mode of operation), and would eliminate a lot of the major issues with a closed 3D driver like trouble with newer kernels and X servers, system crashes, security issues, kernel taint, and make it very easy to switch between fglrx and the Mesa/Gallium stack, and even use them side by side.
    It would be a unique advantage over nVidia and Nouveau and would also possibly allow you to eventually drop the kernel and X driver, and focus only on the proprietary OpenGL/OpenCL userspace.
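
    To illustrate the mechanism, runtime selection between the two userspace stacks could be as simple as a dlopen() switch. This is only a sketch; the driver file names and the "create_screen" entry point are hypothetical, and only the idea of both stacks sharing one open kernel driver matters:

        /* Sketch only: pick one of two userspace 3D drivers at runtime
         * with dlopen(). The file names and the "create_screen" entry
         * point are hypothetical. */
        #include <dlfcn.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            /* choose the proprietary or the open userspace driver */
            const char *lib = (argc > 1 && argv[1][0] == 'p')
                            ? "fglrx_dri.so"   /* hypothetical name */
                            : "r600_dri.so";   /* Mesa's R600 driver */

            void *handle = dlopen(lib, RTLD_NOW | RTLD_LOCAL);
            if (!handle) {
                fprintf(stderr, "dlopen: %s\n", dlerror());
                return 1;
            }

            /* both drivers would have to expose the same entry point */
            void *(*create_screen)(int drm_fd);
            create_screen = (void *(*)(int)) dlsym(handle, "create_screen");
            printf("%s: entry point %sfound\n", lib,
                   create_screen ? "" : "not ");

            dlclose(handle);
            return 0;
        }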



    • #47
      Originally posted by bridgman View Post
      All good questions, although I've answered them all a few times before. Maybe I need to write a book.
      Wikis are good for this sort of thing.



      @Agdr:

      ATI survives on the superiority and competitiveness of its proprietary driver. Nvidia almost killed them off back in the day when it came out with a new driver architecture that netted as little as a 10-15% boost in benchmark performance, and nobody at ATI is going to forget that.

      It's going to take a huge amount of effort to prove to ATI that spending resources on being open is the way to go. HUGE. This is something that I do not see happening any time this decade. Badgering poor Bridgman is not going to make any difference.

      Hell, I expect a complete bottom-to-top re-architecture of the PC and the elimination of the GPU as a discrete processor long before ATI or Nvidia will be convinced that their source code 'secret sauce' is not nearly as precious as they think it is.



      • #48
        Originally posted by V!NCENT View Post
        @Agdr,
        Gallium is not like Direct3D at all. Not even remotely close.

        Gallium is not a driver, nor a 3D library. Gallium is a layer for modern GPUs. That layer is a sort of API. On top of that API functionality can be written, like OpenGL, OpenCL, DirectX, Glide, X.org, vector graphics acceleration, database crawlers, number crunchers and whatever you can think of, which is why you _do NOT_ want fglrx to take its place.
        Below that layer is a driver that exposes that Gallium API, and _that_ is where the FLOSS drivers kick in.
        So it sounds more like DirectX than Direct3D (except that DirectX also handles a lot of things that Gallium3D doesn't): DirectCompute on DirectX vs. OpenCL on Gallium3D, Direct3D on DirectX vs. OpenGL on Gallium3D, etc.



        • #49
          Originally posted by nanonyme View Post
          So it sounds more like DirectX than Direct3D (except that DirectX also handles a lot of things that Gallium3D doesn't): DirectCompute on DirectX vs. OpenCL on Gallium3D, Direct3D on DirectX vs. OpenGL on Gallium3D, etc.
          Wrong thinking. Direct3D in DirectX is like pipe_context in Gallium. DirectShow in DirectX is like pipe_video_context in Gallium. A 3D game engine atop DirectX is like OpenGL atop Gallium.

          Indeed, Gallium looks the same as Direct3D 10/11. There is even a comparison between the two APIs:
          http://cgit.freedesktop.org/mesa/mes...s/d3d11ddi.txt
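
          To make the layering concrete, here is a toy model of the idea (a minimal sketch; the names are hypothetical and heavily simplified, nothing like the real Gallium headers): one hardware-specific "pipe driver" fills in a function table, and any state tracker calls through it without ever touching the hardware.

              #include <stdio.h>

              /* Toy model only: one hardware-specific "pipe driver"
               * fills in a function table; any number of state trackers
               * (OpenGL, OpenCL, X acceleration, ...) call through it
               * and never touch the hardware directly. */
              struct toy_pipe_context {
                  void (*clear)(const float rgba[4]);
                  void (*draw)(int num_vertices);
              };

              /* the "pipe driver": the only layer that knows the hardware */
              static void hw_clear(const float rgba[4])
              {
                  printf("clear to %.1f,%.1f,%.1f,%.1f\n",
                         rgba[0], rgba[1], rgba[2], rgba[3]);
              }

              static void hw_draw(int num_vertices)
              {
                  printf("draw %d vertices\n", num_vertices);
              }

              /* a "state tracker": sees only the function table */
              static void state_tracker_frame(struct toy_pipe_context *pipe)
              {
                  const float black[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
                  pipe->clear(black);
                  pipe->draw(3);
              }

              int main(void)
              {
                  struct toy_pipe_context r600g = { hw_clear, hw_draw };
                  state_tracker_frame(&r600g);
                  return 0;
              }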



          • #50
            Originally posted by netkas View Post
            it already can, but shadows don't work and screen rotation on the cube is all broken; window wobbling works, though
            OK, I meant (and should have written): how close is it to properly running Compiz?



            • #51
              Originally posted by Agdr View Post
              OK, but what if you released both an open driver without the DRM, and a monolithic blob with DRM?
              Removing DRM from the open driver wouldn't make any difference. An attacker would still be able to use the open code as a guide to attacking the blob.

              Originally posted by Agdr View Post
              This would only be a problem if contracts specifically said that the source code for any part of the whole driver including DRM cannot be publicly released (which seems pretty draconian, but I guess possible).
              The contracts don't specifically require holdback of source code, but they require specific levels of robustness and immunity from attack which so far have not been achievable if source code is released.

              Put differently, the contract doesn't say that you have to stay in the armoured car, but it does say that you have to survive when everyone is shooting at you.

              Originally posted by Agdr View Post
              I thought the idea of nVidia or Intel moving to a VLIW architecture and taking advantage of your ideas/code was considered unrealistic, but evidently you don't think so, and you might be right at that.
              Doesn't have to be an existing competitor. If we say "hey folks, here's everything you need to design & build your own GPU HW/SW without spending hundreds of millions on R&D" we might as well just shut the company down today. A new competitor would not be able to keep up with evolution in the high end of the market but they would easily be able to compete in the high volume portion of the market which actually pays for most of the ongoing R&D.

              Originally posted by Agdr View Post
              Yes, you would face the risk of a hypothetical slight loss of your competitive edge, which could perhaps be compensated by other advantages.
              Yep, that's the core tradeoff. The problem is that the advantages are ephemeral and hard to quantify, while the costs and risks are very real and easy to quantify. The upside is that people might like us better and we might get some nice contributions. The downside is that we would be betting the company, or at least the graphics business, to a much greater extent than we are today.

              Originally posted by Agdr View Post
              Another theoretical option would be to attempt to make an agreement with nVidia to both open your drivers, although I suppose that is very unrealistic.
              Talking with your competitors about anything these days brings an extraordinarily high risk of being hammered by government lawsuits. I don't think either of us would want to take that chance.

              Originally posted by Agdr View Post
              It's more cumbersome for you, but much less cumbersome for third parties, which means you get more contributions and more developer mindshare. Of course whether this is significant is debatable.
              Yep, and again there's a cost trade-off -- is it cheaper to open the code and get a few contributions from outside than to work with our partners and implement the same contributions ourselves? So far the numbers work out about 100:1 in favour of not opening the code.

              Originally posted by Agdr View Post
              There is also the Linux market advantage, where a top performing open driver could possibly lead to near total dominance of the Linux enthusiast market, and workstation users might be influenced too once RHEL and other enterprise distributions start advertising better or exclusive support for ATI and Intel cards as opposed to nVidia ones (and the Intel ones are of course useless for the workstation market).
              Again, if we picked up 100% of the Linux enthusiast market it might make us enough money to cover the cost of the current open source work. Any chance of even "breaking even" would have to come from the workstation market.

              Originally posted by Agdr View Post
              Right now people seem to still buy nVidia cards for Linux use because the open drivers are not competitive, and they either deem fglrx inferior to nVidia's driver or consider them equivalent and go with nVidia for other reasons. An open fglrx might change this and make ATI cards a must, while with the current strategy this won't happen for at least a few years, if at all (i.e. not until Mesa supports the latest OpenGL release and performance is almost that of fglrx).
              For reasons I don't fully understand, perception of driver quality has much less impact on hardware market share for Linux consumer users than you might expect. There has been a slight change over the last few years, coincident with supporting open source driver development and making massive improvements in the proprietary driver, but all indications are that the shift in Linux market share is actually *smaller* than the shift in Windows market share. Go figure.

              Originally posted by Agdr View Post
              Obviously it's a much smaller market than Windows, but being recommended by Linux users might somewhat affect the Windows market too, and the fact that Linux is likely more prevalent among developers and GPU compute users might increase its importance.
              Yep, if it wasn't for indirect factors like this it would be really hard to justify spending any $$ on Linux support. Linux is a huge factor in the server market (which is one reason that AMD has always been supportive of open source work) but so far Linux has not shown any signs of being a substantial part of the client PC market outside of the embedded and small-footprint (tablets etc...) segments.

              Originally posted by Agdr View Post
              Anyway, I guess you already analyzed this, and decided the possible gains would be lower than the risk of reducing your competitive advantage and the cost of the effort.
              Correct, and unfortunately the numbers aren't even close. That said, the world is constantly changing and we are always looking ahead to where things might be a few years from now.

              Originally posted by Agdr View Post
              I think it would be nice to have the closed source 3D userspace optionally work with the open DRM driver, and an open X driver, enhancing them if necessary beforehand (possibly with Linux-specific code from fglrx if applicable). This should be doable (except possibly for the video DRM stuff, which I think you could just not support in this mode of operation), and would eliminate a lot of the major issues with a closed 3D driver like trouble with newer kernels and X servers, system crashes, security issues, kernel taint, and make it very easy to switch between fglrx and the Mesa/Gallium stack, and even use them side by side. It would be a unique advantage over nVidia and Nouveau and would also possibly allow you to eventually drop the kernel and X driver, and focus only on the proprietary OpenGL/OpenCL userspace.
              Yep, that was one of the first options we looked at, and still seems like one of the most attractive. The downsides are (a) we would need to refactor a big chunk of proprietary code so that the low-level fglrx code we released into the open stack would not put our proprietary DRM at risk in other OSes, and (b) the open drivers are community-controlled not AMD-controlled and so far the community is not real enthusiastic about constraining what they do with an open kernel driver in order to avoid breaking a proprietary user-space driver. We can fork the open kernel & userspace X driver code and ship our own (slightly different) version with the proprietary stack but then we lose a bunch of potential gains from having a common open driver.

              There are also a number of places where proprietary drivers override chunks of the common framework in order to add proprietary features, where the complexity/functionality tradeoff makes it hard for the community to justify adding that functionality to the common code. Features like Multiview and Crossfire both involve a lot of code outside the 3D driver as well as the obvious 3D changes. This will get easier over time, however -- Eyefinity does reduce the importance of Multiview, for example, and ever-increasing 3D performance may mean we can compete without Crossfire at some point in the future -- but IMO we also need to get to the point where the open source driver community is willing to live with at least a subset of the constraints that come from hosting a proprietary 3D stack on the same code, and I don't feel like we are there yet. The solution may be as simple as branching and re-merging the kernel & X driver code at the right times; it feels do-able anyways.

              The bigger task would be re-factoring, opening and merging in the proprietary low-level code from fglrx into the open drivers (basically memory management and command submission) and having that accepted by the open source driver community for use with the open drivers as well.

              Anyways, bottom line is that we are always looking at this stuff, that passage of time makes some of it easier, that we are heading in the direction you want, and that it's a lot harder than it appears at first glance.



              • #52
                Originally posted by bridgman View Post
                Removing DRM from the open driver wouldn't make any difference. An attacker would still be able to use the open code as a guide to attacking the blob.
                Yes, but given the high skill of crackers, and the fact that you can trace (and alter) the GPU command stream anyway (see renouveau and revenge), it's not obvious that it makes a significant difference (especially if the open code is just a minimal command submission/resource manager layer).

                Originally posted by bridgman View Post
                Doesn't have to be an existing competitor. If we say "hey folks, here's everything you need to design & build your own GPU HW/SW without spending hundreds of millions on R&D" we might as well just shut the company down today. A new competitor would not be able to keep up with evolution in the high end of the market but they would easily be able to compete in the high volume portion of the market which actually pays for most of the ongoing R&D.
                Yes, that could be a concern.
                It's interesting to note, though, that in the x86 market you don't even need a software stack, and the architecture is very well documented (including performance aspects and sometimes implementation details), yet no one manages to compete with AMD and Intel outside of perhaps the very low-end market (e.g. VIA).

                Originally posted by bridgman View Post
                For reasons I don't fully understand, perception of driver quality has much less impact on hardware market share for Linux consumer users than you might expect. There has been a slight change over the last few years, coincident with supporting open source driver development and making massive improvements in the proprietary driver, but all indications are that the shift in Linux market share is actually *smaller* than the shift in Windows market share. Go figure.
                Interesting.
                Perhaps the perception of the drivers has actually improved much less than the drivers themselves (due to the open drivers still being primitive and to fglrx's past inferiority)?

                Trivial as it sounds, renaming "fglrx" to "Catalyst", plus favorable independent comparisons with nVidia, could help shed those old perceptions.

                Originally posted by bridgman View Post
                Yep, that was one of the first options we looked at, and still seems like one of the most attractive. The downsides are (a) we would need to refactor a big chunk of proprietary code so that the low-level fglrx code we released into the open stack would not put our proprietary DRM at risk in other OSes,
                Would you really need to release substantial code?
                Isn't the current open Radeon kernel driver already good enough at least for supporting fglrx in single GPU configurations where KMS already works?

                Or maybe you have a different and better kernel architecture (e.g. userspace vs kernel command submission) and thus would need to open that?

                Originally posted by bridgman View Post
                and (b) the open drivers are community-controlled not AMD-controlled and so far the community is not real enthusiastic about constraining what they do with an open kernel driver in order to avoid breaking a proprietary user-space driver. We can fork the open kernel & userspace X driver code and ship our own (slightly different) version with the proprietary stack but then we lose a bunch of potential gains from having a common open driver.
                Why not just adapt to the changes in the open drivers?
                The Linux kernel has an approximately 3-month release cycle, which should give time to prepare at least a Linux-only update for any ABI changes.
                Also, incompatible changes to the ABI tend to be frowned upon (see the Nouveau ABI break debate for instance).
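
                For example, the proprietary userspace could probe the kernel driver's interface version at startup and pick code paths accordingly. A minimal sketch using libdrm, assuming the usual /dev/dri/card0 node:

                    /* Minimal sketch: probe the open DRM kernel driver's
                     * interface version with libdrm, so a userspace driver
                     * could pick code paths per ABI revision.
                     * Build: gcc probe.c -o probe $(pkg-config --cflags --libs libdrm) */
                    #include <fcntl.h>
                    #include <stdio.h>
                    #include <unistd.h>
                    #include <xf86drm.h>

                    int main(void)
                    {
                        int fd = open("/dev/dri/card0", O_RDWR);
                        if (fd < 0) {
                            perror("open /dev/dri/card0");
                            return 1;
                        }

                        drmVersionPtr ver = drmGetVersion(fd);
                        if (ver) {
                            printf("kernel driver %s, interface %d.%d.%d\n",
                                   ver->name, ver->version_major,
                                   ver->version_minor, ver->version_patchlevel);
                            /* a userspace driver would branch here on the
                               major/minor version it finds */
                            drmFreeVersion(ver);
                        }
                        close(fd);
                        return 0;
                    }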

                Originally posted by bridgman View Post
                ever-increasing 3D performance may mean we can compete without Crossfire at some point in the future
                Really? I would instead expect multi-GPU to become more prominent in the future, as software support improves and compute becomes more prevalent (with "GPU SMP" systems becoming standard for compute servers).

                Originally posted by bridgman View Post
                Anyways, bottom line is that we are always looking at this stuff, that passage of time makes some of it easier, that we are heading in the direction you want, and that it's a lot harder than it appears at first glance
                Great, thanks.



                • #53
                  Originally posted by Agdr View Post
                  Yes, but given the high skill of crackers, and the fact that you can trace (and alter) the GPU command stream anyway (see renouveau and revenge), it's not obvious that it makes a significant difference (especially if the open code is just a minimal command submission/resource manager layer).
                  The problem is that we have to "stay well back from the abyss"... it's not like we can keep opening things up until something bad happens and then back off a bit.

                  Originally posted by Agdr View Post
                  Yes, that could be a concern.
                  It's interesting to note, though, that in the x86 market you don't even need a software stack, and the architecture is very well documented (including performance aspects and sometimes implementation details), yet no one manages to compete with AMD and Intel outside of perhaps the very low-end market (e.g. VIA).
                  Getting into x86 isn't that simple. Not my area to talk about, though. There is a non-trivial software stack, BTW; it's just between the OS and the hardware rather than between the app and the hardware. I wasn't really aware of the CPU software stack until we joined up with AMD.

                  Originally posted by Agdr View Post
                  Interesting.
                  Perhaps the perception of the drivers has actually improved much less than the drivers themselves (due to the open drivers still being primitive and to fglrx's past inferiority)? Trivial as it sounds, renaming "fglrx" to "Catalyst", plus favorable independent comparisons with nVidia, could help shed those old perceptions.
                  It is actually called Catalyst for Linux, or something like that. I just don't like being nitpicky by correcting everyone who calls it fglrx (and fglrx is way easier to type).

                  Originally posted by Agdr View Post
                  Would you really need to release substantial code?
                  Isn't the current open Radeon kernel driver already good enough at least for supporting fglrx in single GPU configurations where KMS already works? Or maybe you have a different and better kernel architecture (e.g. userspace vs kernel command submission) and thus would need to open that?
                  I don't have actual test results, but I suspect that the current Mesa driver over our libdrm/kernel code would be *much* faster than the current proprietary 3D driver over the open libdrm/kernel code.

                  That doesn't mean the devs don't know how to write a fast kernel driver, just that the focus right now is still on functionality & robustness rather than performance optimization. Any of the devs working on the kernel driver can rattle off a list of all the things they would like to improve given time.

                  Originally posted by Agdr View Post
                  Why not just adapt to the changes in the open drivers? The Linux kernel has an approximately 3-month release cycle, which should give time to prepare at least a Linux-only update for any ABI changes. Also, incompatible changes to the ABI tend to be frowned upon (see the Nouveau ABI break debate for instance).
                  We would have to do that, of course, but it would be all too easy to end up in a situation where kernel driver changes that make Mesa faster also make the proprietary driver slower, for example. What is "the right decision" in that scenario?

                  Originally posted by Agdr View Post
                  Really? I would instead expect multi-GPU to become more prominent in the future, as software support improves and compute becomes more prevalent (with "GPU SMP" systems becoming standard for compute servers).
                  Multi-GPU support for compute is less of a problem since the nature of the workload makes it easier for the split across multiple engines to be exposed at an application/API level. Graphics is a tougher challenge because you have to pretty much invisibly emulate a single GPU.
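
                  To illustrate the difference: in OpenCL every GPU shows up as a separate device and the application divides the work itself. A minimal sketch (error handling trimmed):

                      /* Sketch: OpenCL exposes every GPU as a separate device,
                       * so the application, not the driver, decides how to
                       * split compute work.
                       * Build: gcc gpus.c -o gpus -lOpenCL */
                      #include <stdio.h>
                      #include <CL/cl.h>

                      int main(void)
                      {
                          cl_platform_id platform;
                          cl_device_id gpu[8];
                          cl_uint ngpu = 0;

                          if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
                              clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU,
                                             8, gpu, &ngpu) != CL_SUCCESS)
                              return 1;

                          for (cl_uint i = 0; i < ngpu; i++) {
                              char name[256];
                              clGetDeviceInfo(gpu[i], CL_DEVICE_NAME,
                                              sizeof(name), name, NULL);
                              printf("GPU %u: %s\n", i, name);
                              /* each device would get its own queue and a
                                 slice of the workload */
                          }
                          return 0;
                      }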



                  • #54
                    Originally posted by bridgman View Post
                    The problem is that we have to "stay well back from the abyss"... it's not like we can keep opening things up until something bad happens and then back off a bit.
                    Exactly. That's why we are not only not opening up XvBA but also demand of the chosen few who do have access to the XvBA SDK that any app/library they develop shall remain closed source. Sort of a reverse GPL. All in all, this doesn't score AMD points for open source friendliness.



                    • #55
                      Originally posted by bridgman View Post
                      It is actually called Catalyst for Linux, or something like that. I just don't like being nitpicky by correcting everyone who calls it fglrx (and fglrx is way easier to type).
                      The kernel module is still called "fglrx.ko", the X driver is called "fglrx_drv.so" and they both print messages with the "fglrx" string.
                      It should be possible to rename them to "catalyst" and keep compatibility with symlinks and module aliases.

                      Originally posted by bridgman View Post
                      We would have to do that, of course, but it would be all too easy to end up in a situation where kernel driver changes that make Mesa faster also make the proprietary driver slower, for example. What is "the right decision" in that scenario?
                      Given that you are willing to release GPU documentation, I guess it should be possible to come to a technical agreement over what is best, or support two options if really necessary (both of which could eventually be useful to the open driver too).



                      • #56
                        Originally posted by monraaf View Post
                        Exactly. That's why we are not only not opening up XvBA but also demand of the chosen few who do have access to the XvBA SDK that any app/library they develop shall remain closed source. Sort of a reverse GPL. All in all, this doesn't score AMD points for open source friendliness.
                        Isn't the VA-API support through xvba-video good enough for the closed driver? (I haven't tried it personally, so I have no opinion)



                        • #57
                          XvBA is a bit of a different story - it was developed for a market where everything is closed anyways, so the idea of making the API public would be the last thing any of our customers would want. That doesn't do much for the consumer PC client market, of course, so that's the next thing we need to address. For now, gbeauche's VA-API to XvBA adapter is a real nice way to try out the code while we finish development.

                          We also found that there was a "middle ground" embedded market where everything looked just like a traditional embedded product but it used GPL apps, which made the idea of a closed API kind of problematic. We're hoping that the solution for consumer client PC will also work for that "not quite embedded" market.
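
                          If you want to check what you're actually running, bringing up VA-API takes only a few lines; a minimal sketch assuming an X11 session (error handling trimmed):

                              /* Minimal sketch: bring up VA-API under X11; with
                               * the xvba-video backend installed this should
                               * report an XvBA-backed driver.
                               * Build: gcc vamini.c -o vamini -lva -lva-x11 -lX11 */
                              #include <stdio.h>
                              #include <X11/Xlib.h>
                              #include <va/va.h>
                              #include <va/va_x11.h>

                              int main(void)
                              {
                                  Display *x11 = XOpenDisplay(NULL);
                                  if (!x11)
                                      return 1;

                                  VADisplay va = vaGetDisplay(x11);
                                  int major = 0, minor = 0;
                                  if (vaInitialize(va, &major, &minor) == VA_STATUS_SUCCESS) {
                                      printf("VA-API %d.%d: %s\n", major, minor,
                                             vaQueryVendorString(va));
                                      vaTerminate(va);
                                  }
                                  XCloseDisplay(x11);
                                  return 0;
                              }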



                          • #58
                            So XvBA is getting addressed?

                            At least this is good to know.



                            • #59
                              Originally posted by pingufunkybeat View Post
                              So XvBA is getting addressed?

                              At least this is good to know.
                              By the way, the oldest available version of xvba-video has a binary of only 40KB, which the IDA Pro Hex-Rays decompiler turns into only 5450 lines of C code containing 147 functions.

                              It even includes asserts and error messages, and both x86-64 and i386 versions are available, making it easier to determine what is a pointer and what is an integer.

                              Overall, it looks quite easy to reverse engineer it and produce open documentation of the XvBA API (although possibly incomplete, since xvba-video may not use all of it).

                              I'm not sure, however, whether that provides any advantage over just using it in binary form (given that one needs to rely on closed source code for the XvBA implementation anyway).



                              • #60
                                Originally posted by pingufunkybeat View Post
                                So XvBA is getting addressed? At least this is good to know.
                                All I can say with 100% certainty is that it hasn't been forgotten and that we're trying to push it ahead.

