fglrx sucks...

  • fglrx sucks...

    ...at supporting the Xorg 2D/modesetting stack, things like EXA and RandR, but it is superior to anything OSS-related when it comes to 3D, since it can rely on patented technology like S3TC.
    And now there is kernel-based modesetting, which I imagine is hard to do with a closed-source driver, since it lives in the GPL kernel.

    So I am wondering whether it is possible to split out the 3D component of fglrx and make it compatible with Mesa, so it basically is only a libGL replacement (see the sketch at the end of this post).
    This way all drivers would share the 2D stack and also get the same improvements there, while you could optionally use the faster 3D component.

    Since there are many savvy people running around here, I'll throw it out here.
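    To make the "libGL replacement" idea concrete: applications only link against the libGL.so.1 ABI, so whichever 3D stack provides that library (Mesa or fglrx) is the one they end up using. The following glxinfo-style probe is a minimal sketch (just an illustration, not from any driver) that creates a GLX context and asks the currently loaded libGL to identify itself; build as e.g. gcc glprobe.c -o glprobe -lGL -lX11.
    Code:
        /* Minimal sketch: report which libGL / 3D driver the loader picked up. */
        #include <stdio.h>
        #include <X11/Xlib.h>
        #include <X11/Xutil.h>
        #include <GL/gl.h>
        #include <GL/glx.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy) {
                fprintf(stderr, "cannot open display\n");
                return 1;
            }

            int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
            XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
            if (!vi) {
                fprintf(stderr, "no suitable GLX visual\n");
                return 1;
            }

            /* A tiny, never-mapped window with a matching visual, just so the
             * context can be made current. */
            XSetWindowAttributes swa;
            swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                           vi->visual, AllocNone);
            swa.border_pixel = 0;
            Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0,
                                       16, 16, 0, vi->depth, InputOutput,
                                       vi->visual, CWColormap | CWBorderPixel,
                                       &swa);

            GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
            glXMakeCurrent(dpy, win, ctx);

            /* These strings come from whichever 3D driver libGL dispatches to. */
            printf("GL_VENDOR   : %s\n", (const char *)glGetString(GL_VENDOR));
            printf("GL_RENDERER : %s\n", (const char *)glGetString(GL_RENDERER));
            printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));

            glXMakeCurrent(dpy, None, NULL);
            glXDestroyContext(dpy, ctx);
            XDestroyWindow(dpy, win);
            XCloseDisplay(dpy);
            return 0;
        }

    Swapping the 3D component would then just be a question of which libGL.so.1 (plus the matching DRI/kernel bits) is installed, while the 2D/modesetting side stays common.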

  • #2
    Variations on this idea have been proposed many times by some very smart people on this forum...

    The answer depends on who you talk to.

    If you're speaking with an open-source advocate, he'll tell you that it isn't possible because the 3D driver needs direct access to the command processor, and providing that access to a closed driver would taint the kernel. ATI gets around this right now by using a small GPL wrapper that interfaces with the kernel, and then they use this to communicate with their closed code. With your idea, and some of the variations on it, that configuration won't be possible.

    On the other hand, if you're speaking to a proponent of closed-source development, they'll try to make the argument that it cannot be protected: in this kind of environment it becomes impossible to ensure that the data being processed stays protected (i.e. restricted; think DRM). The other side of the closed-source argument is that it becomes impossible to protect intellectual property.

    In the end neither side is willing to make a compromise, so what we end up with is a whole lot of redundant code and a hell of a lot of wasted time, money, and talent...

    And guess who's paying for most of this waste? That's right, ATi...
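    As a concrete aside on the "taint" point above: the kernel tracks this in a taint mask, and bit 0 ('P') is set whenever a module without a GPL-compatible license (fglrx, for instance) is loaded. Here is a minimal sketch that just reads the flag back from /proc:
    Code:
        /* Check whether a proprietary module has tainted the running kernel. */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/sys/kernel/tainted", "r");
            unsigned long mask = 0;

            if (!f) {
                perror("/proc/sys/kernel/tainted");
                return 1;
            }
            if (fscanf(f, "%lu", &mask) != 1) {
                fprintf(stderr, "unexpected format\n");
                fclose(f);
                return 1;
            }
            fclose(f);

            /* Bit 0 = TAINT_PROPRIETARY_MODULE ('P' in oops output). */
            printf("taint mask %lu: %s\n", mask,
                   (mask & 1) ? "proprietary module loaded"
                              : "no proprietary taint");
            return 0;
        }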



    • #3
      Originally posted by duby229 View Post
      If you're speaking with an open-source advocate, he'll tell you that it isn't possible because the 3D driver needs direct access to the command processor, and providing that access to a closed driver would taint the kernel. ATI gets around this right now by using a small GPL wrapper that interfaces with the kernel, and then they use this to communicate with their closed code. With your idea, and some of the variations on it, that configuration won't be possible.
      Actually the shim code is not GPL, it's just delivered in source form.

      The main obstacle to running a closed 3D stack over an open kernel driver is that part of the "secret sauce" for making the 3D stack run fast is the memory manager code. The fglrx driver has always been primarily a workstation driver (although we are ramping up support for consumer functionality and distros now), and competitive advantage is a big deal there.

      Memory management in the open source kernel drivers is much less mature, although there has been some good progress over the last year. Until open source memory management advances to the point where we can use the open APIs for memory management and still get performance leadership for workstation, it doesn't make business sense to have our 3D driver run over an open kernel driver (unless we are willing to dump all of our proprietary mm code into the open driver, which we don't plan to do).

      Originally posted by duby229 View Post
      On the other hand, if you're speaking to a proponent of closed-source development, they'll try to make the argument that it cannot be protected: in this kind of environment it becomes impossible to ensure that the data being processed stays protected (i.e. restricted; think DRM). The other side of the closed-source argument is that it becomes impossible to protect intellectual property.
      Yep. This is the 800-pound gorilla in the room. If it turns out that we need to provide a DRM solution for Linux in order to compete in the evolving market, then practically speaking that means we need to stay with a closed kernel driver. I don't know how that is going to turn out (I don't think anyone does), although it's a great topic for starting a fight in the bar after any open source conference.

      This is probably the single most polarizing issue in the Linux world, in the sense that users and developers fall totally into one camp or the other with very little overlap. The one exception is the folks who want the benefits that come with increased market share but don't want to have to deal with things like DRM which (today) seem to be necessary to get that market share.

      I don't know exactly how Apple is handling this but there may be some clues there. My guess is that they are running with a closed kernel but I don't know for sure.

      Originally posted by duby229 View Post
      In the end neither side is willing to make a compromise, so what we end up with is a whole lot of redundant code and a hell of a lot of wasted time, money, and talent... And guess who's paying for most of this waste? That's right, ATi...
      It's actually the same people on both sides. If you can reliably predict what will happen with DRM and Linux in the future, then we can make more concrete plans today. In the meantime the discussion is academic, because if we ported the closed 3D driver over the current kernel drivers, performance would go down significantly and you would all be lining up with tomatoes anyway.

      The open source kernel drivers will get better with time, and the closed source driver will probably become more open with time, and I expect this will all shake out nicely. In the meantime we need the closed driver for the workstation business (unless someone wants to donate a big chunk of money to make up for the loss of business there), so our only option to avoid duplication is to cut the open source effort, and I don't think anyone wants to see that.
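      To make the "secret sauce" point a bit more concrete, here is a purely hypothetical sketch -- none of these names exist in fglrx, TTM or GEM -- of the kind of interface a video memory manager sits behind. The API surface is tiny; the value is in the placement and eviction policy hidden behind the callbacks, which is exactly what would have to be exposed if a closed 3D stack were to run over an open kernel memory manager.
      Code:
          /* Purely hypothetical interface, for illustration only. */
          #include <stdio.h>
          #include <stdint.h>
          #include <stddef.h>

          enum vmem_domain {
              VMEM_VRAM,   /* on-card memory, fastest for the GPU           */
              VMEM_GTT     /* system memory mapped through the GART, slower */
          };

          struct vmem_buffer {
              uint64_t gpu_addr;        /* address seen by the command processor */
              size_t   size;
              enum vmem_domain domain;
          };

          struct vmem_manager_ops {
              /* The heuristics inside these callbacks -- which domain to pick,
               * when to evict, how to batch migrations -- are where workstation
               * performance (and the competitive advantage) actually lives. */
              int  (*alloc)(struct vmem_buffer *buf, size_t size,
                            enum vmem_domain preferred);
              int  (*migrate)(struct vmem_buffer *buf, enum vmem_domain target);
              void (*release)(struct vmem_buffer *buf);
          };

          /* Trivial placeholder policy: always grant the preferred domain. */
          static int dumb_alloc(struct vmem_buffer *buf, size_t size,
                                enum vmem_domain preferred)
          {
              static uint64_t next = 0x100000;   /* fake GPU address allocator */
              buf->gpu_addr = next;
              buf->size = size;
              buf->domain = preferred;
              next += size;
              return 0;
          }

          static int dumb_migrate(struct vmem_buffer *buf, enum vmem_domain target)
          {
              buf->domain = target;   /* real code would copy the contents too */
              return 0;
          }

          static void dumb_release(struct vmem_buffer *buf) { buf->size = 0; }

          static const struct vmem_manager_ops dumb_mm = {
              .alloc = dumb_alloc, .migrate = dumb_migrate, .release = dumb_release
          };

          int main(void)
          {
              struct vmem_buffer vb;
              dumb_mm.alloc(&vb, 1 << 20, VMEM_VRAM);
              printf("1 MiB buffer at GPU address 0x%llx in %s\n",
                     (unsigned long long)vb.gpu_addr,
                     vb.domain == VMEM_VRAM ? "VRAM" : "GTT");
              dumb_mm.release(&vb);
              return 0;
          }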
      Last edited by bridgman; 26 July 2008, 01:16 PM.



      • #4
        I really don't want to get into a DRM debate, but I don't think Apple should be taken as an example. I'm of the opinion that Apple violated everything sacred and revered when they released OS X..... And the BSD folks let them do it.... I personally will never, ever again use another BSD-based product for as long as I live, because clearly that group has no moral values.

        Originally posted by bridgman View Post
        Actually the shim code is not GPL, it's just delivered in source form.
        So it is a GPL violation?

        Originally posted by bridgman View Post
        It's actually the same people on both sides. If you can reliably predict what will happen with DRM and Linux in the future, then we can make more concrete plans today. In the meantime the discussion is academic, because if we ported the closed 3D driver over the current kernel drivers, performance would go down significantly and you would all be lining up with tomatoes anyway.

        The open source kernel drivers will get better with time, and the closed source driver will probably become more open with time, and I expect this will all shake out nicely. In the meantime we need the closed driver for the workstation business (unless someone wants to donate a big chunk of money to make up for the loss of business there), so our only option to avoid duplication is to cut the open source effort, and I don't think anyone wants to see that.
        You keep claiming that you'd lose business, but you've got no evidence to back up that claim. I'll argue that you'd significantly increase business due to the increase in stability and uptime.

        I've already said this, but I'll rehash it here... Intel is innovating in the Linux graphics market... not you... Your superior hardware is already playing second fiddle. Intel is the one driving GEM, not you. Intel is the one pioneering KMS, not you. Intel is the one contributing most to DRI2, not you. And the list goes on and on and on.

        If you actually wanted to compete, you'd drop your closed driver, you'd tell Novell to fuck off, and you'd put 100% effort into developing a cohesive and well-developed ecosystem for open source development. Right now you don't have that, and it doesn't look like you ever will.

        I'm glad that you're developing open source drivers, but they are built on top of Intel's innovations for their inferior hardware. As such, your drivers will never be as good as they should be. And it really is truly a shame.



        • #5
          Originally posted by duby229 View Post
          I really don't want to get into a DRM debate, but I don't think Apple should be taken as an example. I'm of the opinion that Apple violated everything sacred and revered when they released OS X..... And the BSD folks let them do it.... I personally will never, ever again use another BSD-based product for as long as I live, because clearly that group has no moral values.
          OK, so maybe looking to Apple was a bad idea. Noted.

          Originally posted by duby229 View Post
          So it is a GPL violation?
          No -- a GPL violation would be taking GPL code and either publishing it under a less free license or using it to build binaries without providing the source code. There is no legal problem, AFAIK, with having non-GPL code in the kernel as long as you don't claim to be GPL when you are not. If the code does not identify itself as GPL, then the kernel devs are free to restrict access to certain functions deemed "internal", but so far that seems to have been managed pretty fairly.

          Not sure if there is any non-GPL code pushed into the kernel tree itself or if it is all in loadable modules; one more thing for me to learn ;(
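          For reference, here is the mechanism involved (a generic sketch, not AMD's actual shim): every module declares a license string, modules whose license is not GPL-compatible set the 'P' taint flag when loaded, and symbols exported with EXPORT_SYMBOL_GPL() are only resolvable from GPL-compatible modules, while plain EXPORT_SYMBOL() ones are available to everyone. Built against the kernel's build system, a trivial module looks like this:
          Code:
              #include <linux/module.h>
              #include <linux/kernel.h>
              #include <linux/init.h>

              static int __init example_init(void)
              {
                      printk(KERN_INFO "example module loaded\n");
                      return 0;
              }

              static void __exit example_exit(void)
              {
                      printk(KERN_INFO "example module unloaded\n");
              }

              module_init(example_init);
              module_exit(example_exit);

              /* The kernel checks this string: "GPL", "Dual BSD/GPL", etc. keep the
               * module untainted and give it access to EXPORT_SYMBOL_GPL() symbols;
               * anything else (e.g. "Proprietary") still loads, but taints the kernel
               * and is limited to the plain EXPORT_SYMBOL() interfaces. */
              MODULE_LICENSE("GPL");
              MODULE_DESCRIPTION("Licensing illustration, not a real driver");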

          Originally posted by duby229 View Post
          You keep claiming that you'd lose business, but you've got no evidence to back up that claim. I'll argue that you'd significantly increase business due to the increase in stability and uptime.
          If we didn't also lose a big chunk of performance at the same time, that might be possible, but I don't see customers paying the same price for something that runs 20-30% slower. If you follow the IRC chatter (today in radeon is a good example) you'll see some clear signs that the current open source driver is not the foundation we want to be building on.

          Originally posted by duby229 View Post
          I've already said this, but I'll rehash it here... Intel is innovating in the Linux graphics market... not you... Your superior hardware is already playing second fiddle. Intel is the one driving GEM, not you. Intel is the one pioneering KMS, not you. Intel is the one contributing most to DRI2, not you. And the list goes on and on and on.
          Sorry, isn't DRI2 more of a Red Hat initiative (Kristian H)?

          http://hoegsberg.blogspot.com/

          It's also RH people doing most of the kernel modesetting work AFAIK.

          http://airlied.livejournal.com/61839.html

          If you follow the ML and IRC discussions, GEM seems to be too specific to Intel HW to be a good solution for our parts, so Dave (RH) is working on a combination of the GEM and TTM (Tungsten) APIs for ATI/AMD graphics. I think this is mentioned in the previous link.

          Keith has been pioneering things in X for a long time and hopefully he will keep doing so. He did it at HP, at SuSE, and now he is doing it at Intel. That said, the innovations come from a lot of different companies, not just Intel. I don't want to sound like I'm downplaying Intel's contribution here -- they have done a lot of good things for open source -- but if you dig deeper you may find that you are crediting Intel for work and leadership done by other people, including both independent developers and the teams at Tungsten, Red Hat and Novell.

          Originally posted by duby229 View Post
          If you actually wanted to compete, you'd drop your closed driver, you'd tell Novell to fuck off, and you'd put 100% effort into developing a cohesive and well-developed ecosystem for open source development. Right now you don't have that, and it doesn't look like you ever will.
          Again, if someone wants to put up the money in case we lose the workstation business the way everyone expects, we can talk. The open source kernel drivers are simply not ready for commercial workstation use today.

          Originally posted by duby229 View Post
          I'm glad that you're developing open source drivers, but they are built on top of Intel's innovations for their inferior hardware. As such, your drivers will never be as good as they should be. And it really is truly a shame.
          I think we covered this above. The open source community is driven by innovations from a number of people and companies, not just Intel. Dig a little deeper, OK?
          Last edited by bridgman; 26 July 2008, 02:16 PM.



          • #6
            What would it mean for AMD to put their proprietary memory manager into the kernel tree and GPL it? Are you afraid that others would steal it? The GPL does not allow them to do that: if they ripped out the memory manager, they would have to open up their own code too.

            I'll just quote Greg Kroah-Hartman here:

            The very good side effects of having your driver in the main kernel tree are:
            • The quality of the driver will rise as the maintenance costs (to the original developer) will decrease.
            • Other developers will add features to your driver.
            • Other people will find and fix bugs in your driver.
            • Other people will find tuning opportunities in your driver.
            • Other people will update the driver for you when external interface changes require it.
            • The driver automatically gets shipped in all Linux distributions without having to ask the distros to add it.

            As Linux supports a larger number of different devices "out of the box" than any other operating system, and it supports these devices on more different processor architectures than any other operating system, this proven type of development model must be doing something right.
            From what you have said so far, it seems that AMD does not want the open source devs to come up with a high-performance memory manager, because that would cost you the competitive edge. They want a low-performing MM in the kernel while only their closed MM is fast. The conclusion is that, for AMD, the open source devs are an enemy.

            A high-performing, in-tree MM will eventually happen, so why not go ahead right now and put the code in there anyway?



            • #7
              @bridgman:
              Still, Intel is showing the way in this respect: they started the work on a high-performance memory manager, while AMD is still too afraid of losing its competitive advantage. This argument is basically as absurd as it was in the opening-the-specs discussion.

              It's sad to hear that there is actually no technical reason blocking such a merge, but obviously the change of mind toward a fully open source strategy takes a while.



              • #8
                Hello again. Sorry for the late response, guys... I had an issue to take care of...

                Originally posted by bridgman View Post
                If we didn't also lose a big chunk of performance at the same time, that might be possible, but I don't see customers paying the same price for something that runs 20-30% slower. If you follow the IRC chatter (today in radeon is a good example) you'll see some clear signs that the current open source driver is not the foundation we want to be building on.
                And that is entirely ATi's own fault. They'll never get the right foundation to build on if they don't allocate the resources required to make it. Instead you've got your resources split between just a handful of guys on ATi's own payroll, a few guys at Red Hat, and a few guys at Novell. I'm of the opinion that you guys need to tell Novell to suck on an orange, and then hire the guys they get rid of. You can deny it, but I know for a 100% fact that you're investing a huge sum of resources providing Novell with both money and documentation. Reallocate that waste into something more productive. Novell over the last year has already proven that they don't have what it takes. Any further investment is a total waste of time and money. Drop them now while you still can.

                I understand that modesetting will be in the kernel soon, so that code will be moved out of the DDX driver (see the sketch at the end of this post). The only other things the DDX needs to worry about are 2D and video, and I understand the 2D acceleration code in radeon is pretty solid. Video needs to be rewritten, but that is strictly 100% due to ATi wasting even more time and money on DRM, which is such a massive waste that it has no rival. If ATi had developed proper video hardware from the beginning there wouldn't be any problem with video acceleration.

                Then there is the Mesa driver, which I understand still needs a lot of work, but this is exactly why you need to get on the ball and allocate the resources needed to get it up to par. If you had started on it at full speed this time last year, instead of dragging your feet and wasting time with Novell, wasting even more time porting your new proprietary code base from Windows, and then even more time and money trying to work around your flawed, DRM-laden video hardware, it would already be done.

                At this point ATi is totally screwed once Intel ships a top-end competitor.
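                As a side note on the kernel modesetting mentioned above, here is a minimal sketch of what KMS looks like from userspace once the DDX no longer owns the display hardware: display state is queried (and ultimately programmed) through DRM ioctls via libdrm. It assumes /dev/dri/card0 and a modesetting-capable kernel driver (the KMS API is still new); build with gcc kmsprobe.c -o kmsprobe $(pkg-config --cflags --libs libdrm).
                Code:
                    /* List connected outputs and their first mode via KMS. */
                    #include <stdio.h>
                    #include <fcntl.h>
                    #include <unistd.h>
                    #include <xf86drm.h>
                    #include <xf86drmMode.h>

                    int main(void)
                    {
                        int i;
                        int fd = open("/dev/dri/card0", O_RDWR);
                        if (fd < 0) {
                            perror("open /dev/dri/card0");
                            return 1;
                        }

                        drmModeRes *res = drmModeGetResources(fd);
                        if (!res) {
                            fprintf(stderr, "no KMS resources (driver may lack modesetting)\n");
                            close(fd);
                            return 1;
                        }

                        for (i = 0; i < res->count_connectors; i++) {
                            drmModeConnector *conn =
                                drmModeGetConnector(fd, res->connectors[i]);
                            if (!conn)
                                continue;
                            if (conn->connection == DRM_MODE_CONNECTED &&
                                conn->count_modes > 0)
                                printf("connector %u: %dx%d @ %u Hz\n",
                                       conn->connector_id,
                                       conn->modes[0].hdisplay,
                                       conn->modes[0].vdisplay,
                                       conn->modes[0].vrefresh);
                            drmModeFreeConnector(conn);
                        }

                        drmModeFreeResources(res);
                        close(fd);
                        return 0;
                    }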



                • #9
                  I agree. ATI seems to be investing their resources unwisely. Novell's radeonhd project, which uses devs paid by ATI, has so far been slow and behind the curve because of their insistence on programming registers directly. Radeon, with Alex Deucher and David Airlie, has done a much better job of supporting the new hardware and adding cool features (DRI2, KMS, EXA perf, 3D, etc.) even though they're not being paid by ATI (afaik).

                  That raises the question: why not just scrap the radeonhd project and focus all the devs' energies on radeon?

                  As for an improved MM, Keith Packard and GEM are leading the way for Intel in terms of beefing up perf. Maybe the radeon guys can get their driver using GEM and/or TTM soon. Also, Intel is doing work on underlying improvements in Mesa; 7.1 should be out soon, and that has a bunch of cool features and hopefully some perf improvements planned. The work on Mesa will spill over and benefit all drivers.
                  Last edited by crumja; 26 July 2008, 10:41 PM.



                  • #10
                    Originally posted by crumja View Post
                    with Alex Deucher and David Airlie, has done a much better job
                    I agree they are doing a much better job... I think that Alex is paid by Red Hat with funding from ATi, and Dave is paid directly by ATi as an AMD employee.

                    So in addition to that, they have a group at Novell that is also being paid by ATi, and the folks who work directly for ATi writing the Linux code for the closed driver that isn't covered by the common code. In addition they are still paying developers to port as much of the Windows driver as possible to the common code base. They still have to do a bunch of video acceleration, and CrossFire, and some 2D acceleration. And by the looks of how unstable fglrx is, probably a lot more...

                    I'm not sure what the exact number is, but I'd take a guess that ATi is paying somewhere around 30 full-time employees working on these various projects, and 26 of them are entirely wasting their talent. And all of them are highly skilled. Not many people are up on Linux graphics and driver development, so you know these guys are making a good buck.
                    Last edited by duby229; 26 July 2008, 11:20 PM.

