The Fallacy Behind Open-Source GPU Drivers, Documentation


  • #71
    Originally posted by marek View Post
    It's understandable that you might not know what "DRM" means, but "shader" is a basic term in computer graphics. I'd recommend first learning what a shader is by making simple demos with OpenGL and its shading language. There are lots of tutorials about it.
    My reasoning was that OpenGL and DirectX were just special-purpose APIs on top of the driver. Since my goal wasn't learning OpenGL, but the way the driver controlled the hardware, I thought I could just skip the whole upper layer and go straight to the bottom of it. In hindsight that was silly: the 3D part of the driver is built mainly to translate OpenGL into low-level commands, so its structure must be aligned with OpenGL to a degree.

    OTOH, had there been a tutorial for driver programming with a clear statement of prerequisites, there would have been no confusion on my side.
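
    For anyone else in the same position, the "simple demo" marek suggests really does boil down to a handful of GL calls. A minimal sketch (assuming a GL 2.0+ context already exists and the entry points are available, e.g. via GLEW or GL_GLEXT_PROTOTYPES; the helper name is my own):

    /* Minimal illustration of what a "shader" is at the API level.
     * Assumes a valid OpenGL 2.0+ context (GLUT, SDL, ...). */
    #include <stdio.h>
    #include <GL/gl.h>

    static const char *frag_src =
        "void main() {\n"
        "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); /* solid red */\n"
        "}\n";

    GLuint build_fragment_shader(void)
    {
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        GLint ok = 0;

        glShaderSource(sh, 1, &frag_src, NULL);
        glCompileShader(sh);  /* handed to the driver's shader compiler */
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[512];
            glGetShaderInfoLog(sh, sizeof(log), NULL, log);
            fprintf(stderr, "shader compile failed: %s\n", log);
        }
        return sh;
    }

    That GLSL string is what eventually reaches the driver's shader compiler, which is the layer the 3D part of the driver is organized around.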

    Originally posted by marek View Post
    You don't need another machine, another video card, a test kernel, or anything like that.
    <...>
    I only need to reboot if I want to test kernel code, which I rarely need to touch.
    I'm aware you can test/debug the 3D part without hassle by just switching the libraries. However, I referred to the DDX and the kernel drivers. The hard problem I referred to was something like:
    1. You run the test OS in a VM, which is somehow able to access the graphics hardware directly (not OpenGL pass-through, but actually accessing the registers and the memory), while still being confined to its own sandbox, so any corruption or crashes wouldn't affect the host.
    2. A tool provides you with a unified way of interacting with the whole graphics stack inside the VM, from DRM to the target 3D program. Ideally, the tool would allow you to debug all the way through the process boundaries, so e.g. you can step into the 3D app and, tracing through its call stack, end up in kernel space.

    Now *that's* the kind of toolset I'd love to use! Maybe that is too much comfort for a driver developer, but for me, debugging is the natural way of dealing with new code. Whenever I start learning a new framework, after going through the basic manual, my favorite way of spending the first X weeks with it is stepping through its guts, repeatedly. Being able to do that with as little hassle as possible is even more important when the framework is under-documented.

    P.S. (Jun 2009, #radeonhd)
    Jun 17 08:14:34 <nfrs> how do you debug the x.org driver?
    Jun 17 08:14:49 <airlied> oh also it helps if you have 2 Pcs
    Jun 17 08:14:56 <nfrs> nope, I don't
    Jun 17 08:15:35 <yangman> it's doable with 1. just a lot of hassle, and a lot of faith that hard restarts won't corrupt your data
    Jun 17 08:21:01 <yangman> I'm turning my laptop into a dev machine so I can work on KMS. I'd rather hack on r7xx, but I need the desktop too much right now to be doing silly things on it

    Jun 17 08:27:04 <nfrs> how do you run Xorg inside gdb?
    Jun 17 08:27:19 <nfrs> I mean, technically. set-up? commands?
    Jun 17 08:28:44 <yangman> I sprinkle generous amount of ErrorF()
    Jun 17 08:28:58 <airlied> yeah I'm with yangman I normally ErrorF debug
    Jun 17 08:30:33 <yangman> for r6xx Xv, the shader part was debugged by basically rendering data
    Jun 17 08:31:10 <nfrs> rendering on screen?
    Jun 17 08:31:28 <yangman> yup. the code was already there. I was altering it
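
    For the record, the ErrorF() approach from the log above is about as low-tech as it sounds. A sketch of what it looks like in practice (the surrounding function is made up; only ErrorF() is the real Xorg server call, and its output ends up in the X log):

    /* Hypothetical DDX code path; ErrorF() is the real Xorg logging call. */
    #include "xf86.h"

    static void
    RADEONDoSomething(ScrnInfoPtr pScrn, int width, int height)
    {
        ErrorF("RADEONDoSomething: enter, %dx%d\n", width, height);

        /* ... the code under suspicion ... */

        ErrorF("RADEONDoSomething: done\n");
    }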



    • #72
      Originally posted by kirillkh View Post

      I'm aware you can test/debug the 3D part without hassle by just switching the libraries. However, I referred to the DDX and the kernel drivers. The hard problem I referred to was something like:
      1. You run the test OS in a VM, which is somehow able to access the graphics hardware directly (not OpenGL pass-through, but actually accessing the registers and the memory), while still being confined to its own sandbox, so any corruption or crashes wouldn't affect the host.
      2. A tool provides you with a unified way of interacting with the whole graphics stack inside the VM, from DRM to the target 3D program. Ideally, the tool would allow you to debug all the way through the process boundaries, so e.g. you can step into the 3D app and, tracing through its call stack, end up in kernel space.

      Now *that's* the kind of toolset I'd love to use! Maybe that is too much comfort for a driver developer, but for me, debugging is the natural way of dealing with new code. Whenever I start learning a new framework, after going through the basic manual, my favorite way of spending the first X weeks with it is stepping through its guts, repeatedly. Being able to do that with as little hassle as possible is even more important when the framework is under-documented.
      So you want your hand held and a pony ;-)

      Really, writing low-level hw drivers means talking directly to the hardware. Yes, internally AMD has large simulator farms, but they aren't practical for us out here.

      If you are talking directly to the hw you can usually crash the hw, and there is no way a VM or other CPU layer can do anything for you. Doing low-level driver development in nearly every OS involves two PCs and printfs, unless you can do it all in a VM, which usually means you aren't working on real HW but VM hw, and that isn't useful for graphics.
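
      To be concrete, the "printfs" on the kernel side are something like this, with the output read on the second box over a serial console or netconsole (the function and register layout below are a made-up sketch; printk() and writel() are the real kernel facilities):

      /* Sketch only: the function and register layout are invented.
       * Output is collected on the other machine via serial/netconsole. */
      #include <linux/kernel.h>
      #include <linux/types.h>
      #include <linux/io.h>

      static void my_gpu_write_reg(void __iomem *mmio, u32 reg, u32 val)
      {
          printk(KERN_DEBUG "mygpu: WREG 0x%04x <- 0x%08x\n", reg, val);
          writel(val, mmio + reg);  /* the part that can hang the box */
      }
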
      Ever try writing a Windows device driver? or BSD or anything?

      If you want to work on the 3D bits it's easier, since we have gotten better HW reset features since KMS came about, but to do any major work on the low-level kernel driver you need a system that you can crash at will.

      It reads as if we were totally correct not to try to hand-hold you, as it seems you have a very unreasonable expectation of what developing device drivers involves.

      Dave.



      • #73
        One of the "lower the entry barrier" ideas we were kicking around was writing a simple GPU emulator to let new developers get comfortable with the basic concepts (ring buffers, asynchronous drawing, graphics pipe / state setup, shader programs, etc.), but as airlied said it's not really practical to do real-world development on something like that unless

        (a) you have a bit-accurate emulation, which implies RTL-level access to the actual hardware design, and

        (b) you have either a simulator farm or dedicated hardware accelerators to get the speed up to at least 1/1000th of real time so you don't die of old age while running tests.
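
        Just to give a flavour of what such an emulator would have to model, here is a toy sketch of the ring-buffer part; it's purely illustrative (real rings involve doorbell registers and packet formats such as PM4, not anything this simple):

        /* Toy command ring: the CPU appends words and bumps a write pointer,
         * the GPU (or an emulator thread) consumes from a read pointer. */
        #include <stdint.h>

        #define RING_SIZE 1024  /* entries, power of two */

        struct cmd_ring {
            uint32_t buf[RING_SIZE];
            uint32_t wptr;  /* advanced by the CPU */
            uint32_t rptr;  /* advanced by the GPU */
        };

        /* CPU side: queue one command word, fail if the ring is full. */
        static int ring_emit(struct cmd_ring *r, uint32_t cmd)
        {
            uint32_t next = (r->wptr + 1) & (RING_SIZE - 1);
            if (next == r->rptr)
                return -1;          /* full: the GPU hasn't caught up */
            r->buf[r->wptr] = cmd;
            r->wptr = next;         /* on real hardware: write the wptr register */
            return 0;
        }

        /* "GPU" side: consume one command word if any are pending. */
        static int ring_consume(struct cmd_ring *r, uint32_t *cmd)
        {
            if (r->rptr == r->wptr)
                return 0;           /* empty */
            *cmd = r->buf[r->rptr];
            r->rptr = (r->rptr + 1) & (RING_SIZE - 1);
            return 1;
        }

        The asynchronous part is precisely that ring_emit() returns long before the GPU gets around to consuming the command, which is a big part of what makes debugging lockups so unpleasant.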



        • #74
          I got very used to hanging my GPU during the initial r5xx 3D bringup. It was a very common, hour-to-hour thing, and I would just sigh and reset the machine. Ditto for r300g. Just gotta grin and bear it. Lots of printfs help, but in the end most of it was just staring at what I had written and diagramming out code paths and such.

          This is some of the hardest programming out there, in terms of tediousness. Patience is essential.



          • #75
            Originally posted by airlied View Post
            So you want your hand held and a pony ;-)
            I want 2 of the 3 (could be fractional, but should still sum up to 2):
            1. Good documentation/guidance (interactive or not).
            2. Good code structure and modularity.
            3. No-hassle development.
            I think these are reasonable expectations for a potential FOSS programmer. I'm not sure about (2), but last time I checked, (1) and (3) did not sum up to 1.

            You're making it sound like I'm complaining, but really, I'm just explaining what drove me off the project. Take it FWIW.

            Originally posted by airlied View Post
            Doing low-level driver development in nearly every OS involves two PCs and printfs, unless you can do it all in a VM, which usually means you aren't working on real HW but VM hw, and that isn't useful for graphics.
            I was thinking of something like the way thread context switching works: each thread has its own state in memory, and a scheduler resets the CPU state to one of these states when switching to another thread. I figure it should be doable if there is any kind of support for "complete GPU reset".
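
            In code terms, the analogy would be roughly this; every structure and function here is invented, it only shows the shape of the idea:

            /* Hypothetical "GPU context switch" by analogy with CPU threads. */
            #include <stdint.h>

            struct gpu_context {
                uint32_t regs[4096];   /* snapshot of the register file */
                void    *vram_shadow;  /* copy of the VRAM ranges in use */
                uint32_t ring_wptr;    /* command ring position */
            };

            /* The hard, hypothetical part: both would need a reliable
             * "complete GPU reset" plus full state readback. */
            void gpu_save_state(struct gpu_context *ctx);
            void gpu_restore_state(const struct gpu_context *ctx);

            /* Like a scheduler swapping CPU register state between threads. */
            void gpu_switch(struct gpu_context *save_to,
                            const struct gpu_context *restore_from)
            {
                gpu_save_state(save_to);
                gpu_restore_state(restore_from);
            }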

            Originally posted by airlied View Post
            Ever try writing a Windows device driver? or BSD or anything?
            Did everyone in the GPU driver teams come with prior driver writing experience? If that's the case, then no wonder you can't find more developers, as that talent pool is pretty limited.

            Originally posted by airlied View Post
            If you want to work on the 3D bits it's easier, since we have gotten better HW reset features since KMS came about, but to do any major work on the low-level kernel driver you need a system that you can crash at will.
            If that's the case, then I'm not qualified. (That's where I stopped the last time.)

            Originally posted by airlied View Post
            It reads as if we were totally correct not to try to hand-hold you, as it seems you have a very unreasonable expectation of what developing device drivers involves.
            Perhaps you're right, but that's irrelevant. The main question is: are my expectations shared by other people who failed to become contributors? And if so, can anything be done to bring reality closer to the expectations?

            Your tongue-in-cheek reply makes me think that maybe the existing devs simply do not make serious attempts at lowering the entry barrier. Could that be correct?



            • #76
              Originally posted by kirillkh View Post
              I was thinking of something like the way thread context switching works: each thread has its own state in memory, and a scheduler resets the CPU state to one of these states when switching to another thread. I figure it should be doable if there is any kind of support for "complete GPU reset".
              Unlike thread context switching, this wouldn't have to happen every few nanoseconds; it would be enough if it could be triggered on demand. That way one would be able to run the debuggee in its own "context" and switch back and forth when appropriate. When the debuggee stops on a breakpoint, a reset would happen, and the host machine would restore its GPU state. When you step into or resume the VM, it would restore its own GPU state and then proceed until stopped. In case of anything nasty, one should be able to get their desktop back with a keyboard shortcut.
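
              In pseudo-C, the flow I'm imagining looks something like this; none of these functions exist, they only spell out the hand-off described above:

              /* Hypothetical debugger-side hooks for the proposed scheme. */
              void vm_gpu_save_context(void);
              void vm_gpu_restore_context(void);
              void host_gpu_save_context(void);
              void host_gpu_restore_context(void);

              /* Debuggee hit a breakpoint: give the desktop its GPU back. */
              void on_debuggee_breakpoint(void)
              {
                  vm_gpu_save_context();
                  host_gpu_restore_context();
              }

              /* Stepping or resuming: hand the hardware back to the VM. */
              void on_debuggee_resume(void)
              {
                  host_gpu_save_context();
                  vm_gpu_restore_context();
              }

              /* Panic button bound to a keyboard shortcut. */
              void on_escape_hotkey(void)
              {
                  on_debuggee_breakpoint();
              }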

              Maybe I'm not the first person to think of that, but it still seems worth a shot.



              • #77
                Originally posted by bridgman View Post
                One of the "lower the entry barrier" ideas we were kicking around was writing a simple GPU emulator to let new developers get comfortable with the basic concepts (ring buffers, asynchronous drawing, graphics pipe / state setup, shader programs, etc.), but as airlied said it's not really practical to do real-world development on something like that unless ...
                If what I described in my previous comment is impossible, then I think this would be an awesome starting point.



                • #78
                  well, they talk to you

                  but seriously, there is only so much you can lower a barrier. And some things can't be lowered.



                  • #79
                    Originally posted by energyman View Post
                    some things can't be lowered
                    Or can they?



                    • #80
                      Originally posted by bridgman View Post
                      One of the "lower the entry barrier" ideas we were kicking around was writing a simple GPU emulator to let new developers get comfortable with the basic concepts (ring buffers, asynchronous drawing, graphics pipe / state setup, shader programs, etc.)
                      Isn't that already available in SimNow (provided that one can ignore or creatively interpret the more ridiculous bits of the license, which seem to prohibit it from being run on any actual PC), or is the Radeon device only in non-public versions?

