Reasons Mesa 9.1 Is Still Disappointing For End-Users


  • #31
    Originally posted by duby229 View Post
    You write good code. There is no doubt about that... But the usefulness of some of your past projects is questionable. Why waste time writing good code that few or no people use? Your experience and skill are obvious... But you use it on projects that don't matter... When you began modularizing Mesa it was excellent code, but it had no chance of ever being adopted. When you implemented radeonhd and insisted on banging the modesetting hardware directly, you should have just used AtomBIOS from the beginning.
    There are many reasons why AtomBIOS was not used:
    * We only got documentation for it in late 2008, and the amount of bring-up work needed with or without AtomBIOS would've been the same, as we had to find out how things fit together anyway.
    * It's a BIOS, with all the bugs and bad interfaces that implies.
    * ATI still does not allow AtomBIOS to be flashed or fixed by users, 5 years on. The nouveau guys are providing their own FuC, but ATI has successfully managed to keep the AMD open source users dumb (this was ATI's game all along: silence the big ATI hatred, but keep fglrx for all serious users).
    * AtomBIOS is just another layer in between, another layer forcing us to be bug-compatible with the Windows driver.
    * AtomBIOS was promised to be ASIC specific, not board specific. This of course got thrown out the window by the shortsighted ATI AtomBIOS/Windows driver developers as early as R600.
    * AtomBIOS hides bad hardware design, like shifting half a register bank up one register from one generation to the next...
    * AtomBIOS was promised to be a stable but powerful modesetting interface. It ended up seeing rather a lot of changes with every generation, and those changes could've just as easily been handled in proper C code (see the sketch after this list), without any of the other issues.
    * Bugs in the modesetting implementation could get fixed on all affected generations when found, not just on the newest devices with the newest AtomBIOS versions (display unsync was broken from R500 until RV630, for instance).
    * Shortsightedness in the AtomBIOS specification caused a massive standard breakage. Namely, the DDWG DVI standard has a hard requirement for a hotplug pin, but this was overlooked in the R500 AtomBIOS, even though the hardware had it. This in turn meant that hotplug was never properly tested by ATI or board makers across the whole R500 generation. We worked around that by guessing the pin order (which got us 90% correct usage) and board quirk data (which got us to 98% correct usage, with some boards having hotplug disabled), and this information was gathered easily by our vast user base in September through November 2007.
    * AtomBIOS hid only the easy bits of modesetting. The tough bits we had to do ourselves, and ATI never provided us with relevant information there: things like figuring out how everything fit together, or getting stable dotclocks (which the radeon driver still hasn't managed today -- they shortsightedly deleted the magic few lines that did so, a magic few lines representing several weeks of libv busywork).
    * AtomBIOS is a slightly higher-level interface, one that changes with every generation too, and it gets in the way of any new modesetting infrastructure that might come up in the future. It makes it needlessly harder to abstract the hardware in a way that best matches the infrastructure.
    * AtomBIOS is the bottom end of a "one source tree per ASIC version, freshly copied every time" graphics driver development mindset. Quite contrary to open source driver development.
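
    To make the "proper C code" point concrete, here is a minimal sketch of the kind of native modesetting code this argues for, folding in the hotplug guessing and board quirk data described above. Everything in it is hypothetical: the register offset, the quirk entries, and the mmio_read() helper are placeholders for illustration, not actual radeonhd code.

        /* Hypothetical native DVI hotplug handling: guess the pin order,
         * then let per-board quirk data override the guess. */
        #include <stdint.h>
        #include <stdbool.h>

        #define REG_DVI_HOTPLUG_CTRL 0x7D08   /* made-up register offset */

        struct hpd_quirk {
            uint16_t subsys_vendor;   /* board identification */
            uint16_t subsys_device;
            int      hpd_pin;         /* -1: hotplug disabled on this board */
        };

        /* Board quirk data of the sort gathered from users in late 2007;
         * these entries are invented for illustration. */
        static const struct hpd_quirk hpd_quirks[] = {
            { 0x1043, 0x0028,  1 },
            { 0x174B, 0x0412, -1 },
        };

        extern uint32_t mmio_read(uint32_t reg);

        /* Guessing the pin from the connector index was right ~90% of the
         * time; the quirk table covers the remaining boards. */
        static int hpd_pin_for_board(uint16_t vendor, uint16_t device,
                                     int connector_index)
        {
            for (unsigned i = 0; i < sizeof(hpd_quirks) / sizeof(hpd_quirks[0]); i++)
                if (hpd_quirks[i].subsys_vendor == vendor &&
                    hpd_quirks[i].subsys_device == device)
                    return hpd_quirks[i].hpd_pin;
            return connector_index;   /* the guessed pin order */
        }

        static bool dvi_connected(int hpd_pin)
        {
            if (hpd_pin < 0)          /* quirk: hotplug broken, assume connected */
                return true;
            return mmio_read(REG_DVI_HOTPLUG_CTRL) & (1u << hpd_pin);
        }

    The point of the sketch: once this is plain C in the driver, a bug found on one generation can be fixed for every generation sharing the code, which is exactly what the per-version AtomBIOS route prevented.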

    And all of this on top of the more obvious open source advantages, which I am not going to list again for the sake of... well. Brevity.

    Did you really think that I did not think this one through? Did you really think that we made that decision purely out of free-software zealotry? Or could it be that we spent a lot of time discussing this and gathering the pros and cons, and, instead of going for the politically more acceptable solution (more acceptable for ATI, that is, but ATI always searched for new sticks to bash us with), went for the technically superior and long-term maintainable option?

    ATI was forced by AMD to accept the SuSE proposal. ATI was forced to hand us documentation. We got the two 500-page raw register descriptions out of ATI, and some bad, bad AtomBIOS parser code. That was it. We had to write a disassembler for AtomBIOS to find out how things fit together anyway, and I know for a fact that writing C code for the basic modesetting was the same amount of work as using the AtomBIOS interfaces directly. My personal belief is that we were even faster because of it. Not using AtomBIOS allowed us to beat the political deadline that was secretly set for our delivery. Not using AtomBIOS meant that we shipped a driver, that ATI could not stop us, and that we got a free driver at all.
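
    Since the disassembler keeps coming up: the core of such a tool is a simple loop over the command-table bytecode, dispatching on the opcode byte. The sketch below only shows that shape; the opcode numbers, mnemonics, and instruction lengths are invented placeholders, not the real AtomBIOS encoding.

        /* Skeleton of an AtomBIOS-style bytecode disassembler loop. */
        #include <stdint.h>
        #include <stdio.h>

        struct atom_op {
            const char *mnemonic;
            int (*length)(const uint8_t *code);   /* operand size varies per op */
        };

        static int len_two_args(const uint8_t *code) { (void)code; return 3; }
        static int len_jump(const uint8_t *code)     { (void)code; return 3; }

        /* Invented opcode table; a real one has on the order of a
         * hundred entries. */
        static const struct atom_op optable[256] = {
            [0x01] = { "MOVE", len_two_args },
            [0x08] = { "AND",  len_two_args },
            [0x10] = { "OR",   len_two_args },
            [0x43] = { "JUMP", len_jump     },
        };

        static void disassemble(const uint8_t *code, unsigned size)
        {
            unsigned pc = 0;
            while (pc < size) {
                const struct atom_op *op = &optable[code[pc]];
                if (!op->mnemonic) {              /* undecoded byte */
                    printf("%04x: .db 0x%02x\n", pc, code[pc]);
                    pc++;
                    continue;
                }
                printf("%04x: %s\n", pc, op->mnemonic);
                pc += op->length(&code[pc]);
            }
        }

    Dumping the tables this way is how you find out which registers a "black box" interface actually touches, which is the "how things fit together" work mentioned above.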

    Heck. Hours before the first code release, ATI refused to give us clearance on the AtomBIOS parser -- a last-ditch attempt to stop us from shipping this driver. Our answer: "fine, then we ship without the AtomBIOS parser, we do not need it for 98% of the use cases". Minutes before we intended to push out the code, ATI then allowed us to change the license on the AtomBIOS parser.

    Need any more, or have you had enough already?



    • #32
      Originally posted by smitty3268 View Post
      Yes, I think we all know who you are and what you've done. You are quite vocal about it at times.

      It also seems like most of your peers (other Mesa/X developers) tend to disagree with you about a lot of what you say. Just because you've been involved in a lot of these projects doesn't mean you are automatically right about everything even tangentially related.
      Modesetting is the best example of how I tick. Few people cared about it, and those who did, did not believe that my views were correct. Until a few years later...

      I then ended up working for a company which delivered/delivers an enterprise Linux desktop with long-term support. Here I got to see the pain of some upstream modes of working. Those modes of working are aimed at making it easier on the developer, but not on the user, and this is what held us back from gaining more market share on the now long-dead desktop. Heck, some vendors tend to spend most of their time trying to shoot down others instead of delivering a stable, maintainable, and useful operating system. That ensured a growing share of a shrinking Linux desktop market, won by pushing out other vendors, not a growing share for the Linux desktop within the overall desktop market.

      So yeah, I do look at things differently. And many other developers either don't care, or see more work for themselves in the future, and that is before politics even kicks in.

      (edit: added:)
      And about me being vocal: that is something I learned from my modesetting history. If only I had been more vocal 10 years ago, quite a few things would've been implemented a lot better. Seven years after RandR 1.2, we seem to be finally getting there, as thoroughbred graphics driver developers like Ville Syrjala are now in a position to fundamentally change things. In the meantime I have moved on to ARM GPUs; my last relevant modesetting work was bringing up RadeonHD with a solid modesetting model which mapped the hardware correctly (and which also applied to the unichrome and tseng hardware I had dealt with), and that is now more than 5 years ago. I am not going to make the main mistake of my modesetting work again: I will not be silenced again.
      Last edited by libv; 24 February 2013, 08:25 AM.



      • #33
        Originally posted by libv View Post
        WRONG! As I have proven with Q3A, there is no real reason for this to be so. The lima driver will have performance which matches the binary driver.
        You're seriously trying to compare the amount of time/resources the major GPU vendors put into their Windows (and thus, Linux binary drivers) with the time/resources of ARM Mali? That's like comparing apples to... (I don't know, can't think of any relevant comparison to illustrate how ludicrous your comparison is).



        • #34
          Originally posted by DanL View Post
          You're seriously trying to compare the amount of time/resources the major GPU vendors put into their Windows (and thus, Linux binary drivers) with the time/resources of ARM Mali? That's like comparing apples to... (I don't know, can't think of any relevant comparison to illustrate how ludicrous your comparison is).
          I believe that I have explained, several times over, why the ARM Mali-400 performs so nicely with my limare driver. You seem not to have bothered to read up on things.



          • #35
            Originally posted by libv View Post
            I believe that I have explained, several times over, why the ARM Mali-400 performs so nicely with my limare driver. You seem not to have bothered to read up on things.
            It's performing nicely because no one has put the time/resources into the chip to optimize the hell out of it (performance-wise) the way AMD and Nvidia do with their drivers. I'm not trying to take away from your work, but you can't compare an ARM GPU to AMD/Nvidia drivers, and if you do, then you're swallowing your own bullsh!t. How many FPS does the ARM hardware run Unigine Heaven at?



            • #36
              Originally posted by DanL View Post
              It's performing nicely because no one has put the time/resources into the chip to optimize the hell out of it (performance-wise) the way AMD and Nvidia do with their drivers. I'm not trying to take away from your work, but you can't compare an ARM GPU to AMD/Nvidia drivers, and if you do, then you're swallowing your own bullsh!t. How many FPS does the ARM hardware run Unigine Heaven at?
              You still haven't read up.



              • #37
                Originally posted by libv View Post
                You still haven't read up.
                Hi Luc

                Your Mali driver is using Mesa as an OpenGL state tracker, right? From my experience with some decent desktop GPUs (AMD), Mesa is the limiting factor. There is too much time wasted processing the calls, managing buffers, and whatever other low-level stuff. I think this is what @DanL meant with his comparison of Mali vs NV/AMD. Mali just doesn't have the raw power to run into these limits. I am not a drivers guy, so please correct me if I am wrong.
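
                Below is a toy sketch of the boundary being described: the state tracker validates GL state on every draw call on the CPU, then hands off to the hardware driver through a thin interface. It is loosely modeled on Gallium's pipe_context, but all the names and the structure are simplified inventions, not Mesa's real interfaces.

                    #include <stdbool.h>
                    #include <stdio.h>

                    /* Thin driver-facing interface (stand-in for Gallium's
                     * pipe_context; heavily simplified). */
                    struct pipe_context {
                        void (*set_blend_state)(struct pipe_context *ctx, const void *blend);
                        void (*draw)(struct pipe_context *ctx, unsigned start, unsigned count);
                    };

                    /* GL-side context kept by the state tracker. */
                    struct gl_context {
                        struct pipe_context *pipe;
                        bool state_dirty;
                        const void *blend_state;   /* derived from current GL state */
                    };

                    /* CPU work done on every draw call, however small the draw:
                     * re-derive driver state objects, manage buffers, check errors. */
                    static void validate_state(struct gl_context *gl)
                    {
                        if (!gl->state_dirty)
                            return;
                        gl->pipe->set_blend_state(gl->pipe, gl->blend_state);
                        gl->state_dirty = false;
                    }

                    /* What a glDrawArrays() call boils down to at this boundary. */
                    static void st_draw_arrays(struct gl_context *gl, unsigned first,
                                               unsigned count)
                    {
                        validate_state(gl);                      /* CPU-side overhead */
                        gl->pipe->draw(gl->pipe, first, count);  /* cheap handoff */
                    }

                    /* Stub driver, just to make the sketch self-contained. */
                    static void stub_blend(struct pipe_context *c, const void *b)
                    { (void)c; (void)b; }
                    static void stub_draw(struct pipe_context *c, unsigned s, unsigned n)
                    { (void)c; printf("draw %u..%u\n", s, s + n); }

                    int main(void)
                    {
                        struct pipe_context pipe = { stub_blend, stub_draw };
                        struct gl_context gl = { &pipe, true, NULL };
                        st_draw_arrays(&gl, 0, 3);   /* validation, then the draw */
                        return 0;
                    }

                On a fast desktop GPU that CPU-side validation quickly becomes the bottleneck; on a small GPU like the Mali-400 the hardware saturates first, which is the point being made here.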



                • #38
                  Originally posted by log0 View Post
                  Hi Luc

                  Your Mali driver is using Mesa as an OpenGL state tracker, right? From my experience with some decent desktop GPUs (AMD), Mesa is the limiting factor. There is too much time wasted processing the calls, managing buffers, and whatever other low-level stuff. I think this is what @DanL meant with his comparison of Mali vs NV/AMD. Mali just doesn't have the raw power to run into these limits. I am not a drivers guy, so please correct me if I am wrong.
                  True or not, that still means that libv is able to make a better GPU driver using STANDARD Linux components.

                  So we have one company which for POLITICAL reasons avoided a good solution, and then for ECONOMIC reasons cannot make anything better.

                  Mesa may have (comparatively) big CPU overhead, but it's still better than the Mali offering.

                  ARM is not small or poor, or desperately fighting to keep its place in the market. A pity that they cannot make a proper GPU driver. More of a pity for not joining FLOSS efforts.

