The Fallacy Behind Open-Source GPU Drivers, Documentation

  • #61
    Originally posted by energyman View Post
    my problem with you, Mr James, is that you are filling my inbox with reply notifications while your postings are not even worth the time it takes to delete them. That is my problem. I spent time reading rubbish.
    Well, I'm sorry you feel that way. I'll stop filling your inbox, as well as others'.

    Comment


    • #62
      Originally posted by Svartalf View Post
      Got it in one.

      The reason there's a shortage of devs doing the work is that it's NOT easy to do (I know, I've done it as a FOSS developer with the previous stack, Utah-GLX, and as a paid engineer on a closed driver...), and in many cases the people capable of doing the work are off doing other things. (Want to subsidize me? If I got paid as much as I currently do for the consulting work I do elsewhere, I'd drop it all and do the driver work, as much as anything because it needs doing, I understand what needs to be done, and I could do it. In fact, I was about ready to do it part-time for an embedded systems customer anyway, but they opted to use someone more local to them so the dev could be in the office occasionally...)

      It does NOT help that the information is not available.

      It's not a fallacy; getting technical info is just not as simple a thing as many make it out to be (it used to be... not with shaders, though... it's become quite a bit more complex...), and the same goes for getting source code (which still really, really requires technical info... <*gives VIA a nasty look*>). It's not a case of the emperor having no clothes here. The only successful situations so far have been with Intel and AMD, where they provide info, source in many cases, and some assistance as we get up to speed.
      This.

      Out of all of the big committers to the stack, only a very small number *stay* amateurs after a couple months; people are definitely willing to pay for these kinds of high-quality coders. (I think I'm the only one that turned down job interviews and offers for X in favor of school.) Most of us can't be committed to a chunk of code for more than a couple months, for sanity reasons, not business reasons.

      The size of the community *is* constantly growing, but the problem is that the larger Linux community is growing faster, and with this faster growth come bigger expectations. Linux covers hundreds of different devices, many with tiny drivers that are only a file or two of C. It's really not surprising that naive members of the community might expect that similar amounts of effort are required for GPU drivers, especially since a couple of GPUs from the Good Old Days *are* that simple.

      Comment


      • #63
        From a high-level Red Hat point of view, we aren't interested in gaming-level drivers. Being as fast or as good as fglrx or nvidia isn't that interesting to the management that allows us to spend a lot of time on these projects. That doesn't mean we can't spend time on it, just that we can get preempted by more important things. So getting competitive speeds ends up being more of a personal goal.

        Our focus is to produce an open source X.org stack that can run compiz/gnome-shell at an acceptable level across as much shipping hardware as possible, out of the box, on Fedora and RHEL. We've been working towards a composited Linux desktop for at least 8-9 years now, and that has always been the main focus of all the work done at RH, from AIGLX to KMS.

        So there is nothing stopping another group or part of the community from taking up the other areas, as Marek has done in improving r300g so much, or Christian with his work on XvMC. Really, someone writing a VDPAU state tracker would receive a lot of glory and praise, and would probably have a job a month later.

        As for documentation, it's just hard to know which docs would provide the best bang for the investment in them. Back when we had no GPU docs, people used to come into IRC and say that with docs they could do everything; they believed the docs were a paint-by-numbers driver-writing guide. When they actually got to see what docs existed, they ran away screaming.

        Also, people seem to think they need to understand the whole stack to do anything at all. I personally still don't understand large parts of the X server and Mesa, but once you realise you don't need to understand everything, getting started on something is a lot easier.

        Comment


        • #64
          Originally posted by MostAwesomeDude View Post
          This.

          Out of all of the big committers to the stack, only a very small number *stay* amateurs after a couple months
          It takes way more than a couple months to be more than an amateur graphics developer. The rockstars working on the proprietary drivers for AMD, NVIDIA, Intel, XGI, etc. have years and years of full-time experience working on those drivers.

          Those kinds of people can never ever contribute to the FOSS drivers as volunteers either due to a combination of corporate NDAs and the FOSS projects' requirement for clean-room implementation.

          New people coming into those teams can ramp up their skillset much quicker, too, on account of both being full-time and having dozens and dozens of other driver developers to talk to and learn from, rather than just a pile of docs and a handful of other devs who are mildly experienced at graphics driver development at best.

          This isn't something schools can fix or that the community can fix. The experience exists solely within a few walled gardens. The corporations have to fix it by letting some of that experience come to our side of the fence.

          Comment


          • #65
            Originally posted by baryluk View Post
            Reasons are simpler: GPU specifications were published by ATI too late, and current GPUs are really complicated. If they had published these specs, say, 10 years ago, and added new specs as new products were released, it would be much easier to have complete and up-to-date drivers.
            You are assuming incremental improvements over the 10 years. For ATI, the R300, R500 and Evergreen products are effectively completely different products. Within each new product family, a majority of the hardware blocks are changed radically. Some stay the same (like the display block, up until R500); the 3D block changes almost every cycle.

            The dirty little secret is that each new generation of hardware needs a 40-70% new driver. All this in a 9-12 month product cycle means lots of effort-years to cram into that window (note that silicon comes back within that window too, so you usually have 6-9 months with the silicon before you have to ship).

            Fitting those effort requirements into the Open Source world is very hard. It comes down to either reducing the hardware supported or reducing the features supported. And this completely ignores any upstream changes to ABIs, driver architecture, and so on.

            Comment


            • #66
              Originally posted by mtippett View Post
              You are assuming incremental improvements over the 10 years. For ATI, the R300, R500 and Evergreen products are effectively completely different products. Within each new product family, a majority of the hardware blocks are changed radically. Some stay the same (like the display block, up until R500); the 3D block changes almost every cycle.

              <*the rest snipped for brevity...*>
              Thank you for the commentary there; I couldn't have put it better myself.

              Comment


              • #67
                Originally posted by elanthis View Post
                This isn't something schools can fix or that the community can fix. The experience exists solely within a few walled gardens. The corporations have to fix it by letting some of that experience come to our side of the fence.
                What do you think AMD has been doing for a while now?

                Comment


                • #68
                  Originally posted by libv View Post
                  ok.

                  The current barrier to getting new and clued-in developers is, in my view, not the limited mentoring and limited documentation. Lack of structure and modularity are the biggest stumbling blocks.
                  While I definitely agree with your point, I think that lowering the entry barrier is still a must. I actually made a couple of attempts at getting started with radeon development.

                  It is a bit funny that mentoring is brought up here, because on my first attempt (2 or 3 years ago) I explicitly offered my help with the Radeon driver in exchange for some mentoring on IRC, but was told that no one had time for hand-holding. While I have no complaints about that, let's just get it straight: no mentoring is going on (unless things have changed in the meantime). Maybe people are more willing to help once you prove your commitment by reaching a more advanced level, but, naturally, most people can't make it that far.

                  On my second attempt (a year later), I tried to just delve into the source code using what documentation was available and asking specific questions on IRC. What stopped me that time was:
                  1) Lack of a basic, assume-nothing intro that would give a broad introduction. I recall having a hard time even finding definitions of common terms, such as "DRM", "shader", etc.
                  2) What intro documentation was available was highly partitioned, often outdated, and usually not self-contained (unclear terminology).
                  3) What AMD-provided documentation was available was completely unintelligible at my level of knowledge.
                  4) The structure of the code and layers of abstraction were unclear. What lives in kernel space, what lives in user space, and why? What is shared across all drivers, across all GPU drivers, across all Radeon drivers, etc., and what is distinct? How is external software like Mesa hooked in? (I'm just showing the kind of questions that a noob is puzzled by, not asking for answers in this thread.)
                  5) Complete lack of development tools. I gave up upon realizing that, in order to do development, you either use a second machine or reboot into a test kernel. Well, I don't have a second machine, nor a second video card. Ideally, I'd like to just launch an editor and get going, without having to leave the desktop. This might be a hard problem, but I suspect it can be solved. Also, IIRC, I was told that the only way of debugging was by printing. Call me spoiled, but I can't imagine productive development without a debugger.

                  Comment


                  • #69
                    Originally posted by kirillkh View Post
                    While I definitely agree with your point,
                    I couldn't really disagree more with libv's point. However, to make him a little happier: xf86-video-ati will probably become obsolete as well once DDX-over-3D works reliably in Gallium.

                    Originally posted by kirillkh View Post
                    I recall having a hard time to even find definitions of common terms, such as "DRM", "shader", etc.
                    It's understandable that you might not know what "DRM" means, but "shader" is a basic term in computer graphics. I'd recommend first learning what a shader is by making simple demos with OpenGL and its shading language. There are lots of tutorials about it.

                    Originally posted by kirillkh View Post
                    5) Complete lack of development tools. I gave up upon realizing that, in order to do development, you either use a second machine or reboot into a test kernel. Well, I don't have a second machine, nor a second video card. Ideally, I'd like to just launch an editor and get going, without having to leave the desktop. This might be a hard problem, but I suspect it can be solved.
                    You don't need another machine, another video card, a test kernel, or anything like that. It works exactly as you said: I just launch an editor, write code, compile it, run a game to test it (or do a quick piglit run), and push the code upstream. It's like classic application development. I don't have to install drivers to be able to use them, because some symlinks for the drivers and some libs in my /usr/lib point to my Mesa tree. I only need to reboot if I want to test kernel code, which I rarely need to touch. Also, get a real IDE with code indexing, browsing, and refactoring capabilities so you can code fast. I use QtCreator, which has all these features, for both userspace and kernel development. It can import an arbitrary Makefile-based project.
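                    A similar no-install workflow can also be sketched with environment variables instead of symlinks. The paths below are illustrative assumptions (adjust them to your own Mesa tree); LIBGL_DRIVERS_PATH and LD_LIBRARY_PATH are the standard knobs for pointing the GL loader at a local build:

```shell
# Point GL apps at a locally built Mesa instead of the system copy.
# The MESA path is an example -- set it to wherever your tree lives.
export MESA="$HOME/src/mesa"
export LIBGL_DRIVERS_PATH="$MESA/lib/gallium"   # where the *_dri.so drivers land
export LD_LIBRARY_PATH="$MESA/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Any GL app started from this shell now loads the freshly built driver,
# e.g.: glxinfo | grep "OpenGL version"
```

                    No reboot or system install is involved, and unsetting the variables falls straight back to the distribution driver.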

                    Originally posted by kirillkh View Post
                    Also, IIRC, I was told that the only way of debugging was by printing. Call me spoiled, but I can't imagine productive development without a debugger.
                    You can use any debugger you want. I often use kdbg, a graphical front-end to gdb.

                    Comment


                    • #70
                      Originally posted by kirillkh View Post
                      It is a bit funny that mentoring is brought up here, because on my first attempt (2 or 3 years ago) I explicitly offered my help with the Radeon driver in exchange for some mentoring on IRC, but was told that no one had time for hand-holding. While I have no complaints about that, let's just get it straight: no mentoring is going on (unless things have changed in the meantime). Maybe people are more willing to help once you prove your commitment by reaching a more advanced level, but, naturally, most people can't make it that far.
                      I do believe things have changed relative to a couple of years ago. Back then it didn't make a lot of sense to spend time talking about the old architecture (since it was in the process of being replaced) and the details of the new architecture were still being worked out. The code was particularly complicated at the time because the userspace drivers needed to be able to work on both UMS/DRI1 and KMS/DRI2 kernel drivers.

                      Now that many of the drivers have moved over to KMS/DRI2, it's easier to explain the parts of the code that developers need to worry about (even if there is still some legacy code in place to support fallback operation with UMS), and that has made it practical for experienced developers to help new and potential developers without making their heads explode on the first day.

                      Originally posted by kirillkh View Post
                      On my second attempt (a year later), I tried to just delve into the source code using what documentation was available and asking specific questions on IRC. What stopped me that time was:
                      1) Lack of a basic, assume-nothing intro that would give a broad introduction. I recall having a hard time even finding definitions of common terms, such as "DRM", "shader", etc.
                      2) What intro documentation was available was highly partitioned, often outdated, and usually not self-contained (unclear terminology).
                      3) What AMD-provided documentation was available was completely unintelligible at my level of knowledge.
                      4) The structure of the code and layers of abstraction were unclear. What lives in kernel space, what lives in user space, and why? What is shared across all drivers, across all GPU drivers, across all Radeon drivers, etc., and what is distinct? How is external software like Mesa hooked in? (I'm just showing the kind of questions that a noob is puzzled by, not asking for answers in this thread.)
                      Now that the architecture has settled down, it probably is a good time to write some top-level documentation. I put together a few slides for an internal presentation, mostly explaining how GPUs work for someone familiar with CPUs and partly giving a high-level view of the current graphics driver stack (at least for Radeon graphics) -- I'll see if some of that can be cleaned up and turned into "you are here"-type documentation for new potential developers.

                      Marek's advice to go and learn a bit of OpenGL first is really important. GPUs are not designed directly around the OpenGL pipeline to the same extent they were 10 years ago, but they are still a lot easier to understand (and test) if you can write some simple graphics programs. I remember going through a few "WTF are *those* registers for??" moments in the 5xx days (since my background at the time had been more on Mac and DirectX), but as soon as I learned the corresponding OpenGL functions everything made sense.

                      Comment
