The Issues With The Linux Kernel DRM, Continued

  • The Issues With The Linux Kernel DRM, Continued

    Phoronix: The Issues With The Linux Kernel DRM, Continued

    Yesterday Linus once again voiced his anger towards DRM. Not the kind of DRM that is commonly criticized, Digital Rights Management, but rather the Linux kernel's Direct Rendering Manager. The Linux 2.6.39 kernel marks another cycle in which Linus has been less than happy with the pull request for this sub-system, which handles the open-source graphics drivers. Changes are needed...

    http://www.phoronix.com/vr.php?view=OTI0OQ

  • #2
    IIRC a radeonhd developer proposed merging the components of the open-source drivers some time ago, but nobody cared about it...

    • #3
      I feel sorry for those GPU developers. That stuff is a nightmare.

      • #4
        It seems like it would have been better for the DRM developers to stay outside of the kernel, at least until DRM really became stable.

        • #5
          Jerome's reply essentially mirrors my commentary from the last time around: this is a symptom of forcing a one-size-fits-all development cycle onto a group of projects with fundamentally different requirements. Something similar is playing out in the web browser wars. Developing a category of software in which the best available implementations cover only a fraction of the spec isn't even the same sport as maintaining a mostly complete implementation.

          The vanilla kernel isn't exactly a high-assurance system built with formal methods to standards of provable correctness. Small pieces of it might be, thanks to downstream users putting Linux into their ICBM guidance systems, but not every last component of every driver. They need to work together to discover an approach with variable flexibility for variable requirements.

          • #6
            Don't we (and the few DRM developers) already have enough problems and work with the graphics stack without Linus acting like a diva and permanently bitching about code being two weeks late?

            • #7
              The argument about DRM stuff taking too much time to reach the end user seems kind of bogus. Nothing stops them from releasing their development code before it lands in Linus' tree if some distribution wants it sooner: distribute an out-of-tree driver with the current fixes to the distros that want it, and then merge it upstream at the next merge window.

              • #8
                I hope that this discussion will lead the devs to a solution for the development model. The manpower issue will probably remain.

                • #9
                  Don't care. I'd rather have the stuff work halfway than not at all. I ended up with one of my NVIDIA GPUs messed up (the G98 GPU, an 8400 GS) because the driver went in trying to do hardware acceleration without using the GPU correctly. When I switched to a completely untested, unready Radeon 5550 it worked again. It uses the software rasterizer and that's a step back, but at least I don't have to reboot, switch back to the onboard GPU, reboot again, unplug the monitor from the add-in card to the onboard GPU, power back on, and get back into Linux.
                  I think I ran three X servers and three different open-source video drivers during the Fedora 10 cycle. One of the X servers screwed up so badly that I had to switch back to text mode (runlevel 3) and do a yum downgrade on it. This was about three months after the release of Fedora 10. Now they won't do that sort of thing any more. You can't get them to try more than one or two new video drivers or X servers and fixes during alpha and beta. The new guy running the Fedora project is worse than the guy they dumped last year.

                  I'd play more with Arch Linux if I weren't enjoying my little 5550 maxing out graphics in games. I love that frikkin card. I never even considered HIS until I tried this card.
                  Quit trying to play it safe. Either shovel us the untested crap or put us all on safe, boring, no-drama stable stuff. I got really disenchanted during the Fedora 13 cycle when they stopped pushing us two kernels and three X servers, which leaves kernels 2.6.36 and .37 without anybody testing anything on them, because they came out mid-release. So of course .38 and .39 are going to be full of untested crap. At least Ubuntu is going to work over .37, I think.

                  • #10
                    I think Linus may be getting a little too far away from his hacker roots here. So what if you merge fresh code into mainline? I don't think mainline is intended to be stable in the same sense as something Red Hat would push to RHEL customers anyhow. It's going to be even harder to attract developers if you run things formally like a company; I think a lot of volunteer coders like hacking on Linux in the evenings precisely so they can push code out the door and escape that kind of repressive crap from their day job.

                    • #11
                      Maybe it's time for decent version tracking of the DRM API.
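
                      For what it's worth, each DRM driver already reports a per-driver version through the DRM_IOCTL_VERSION ioctl. A minimal libdrm sketch of querying it (assuming /dev/dri/card0 exists; link against libdrm, e.g. via pkg-config --cflags --libs libdrm):

                      #include <stdio.h>
                      #include <fcntl.h>
                      #include <unistd.h>
                      #include <xf86drm.h>   /* from libdrm */

                      int main(void)
                      {
                          int fd = open("/dev/dri/card0", O_RDWR);  /* first DRM node, adjust as needed */
                          if (fd < 0) {
                              perror("open /dev/dri/card0");
                              return 1;
                          }

                          drmVersionPtr v = drmGetVersion(fd);      /* wraps DRM_IOCTL_VERSION */
                          if (v) {
                              printf("%s %d.%d.%d (%s)\n", v->name,
                                     v->version_major, v->version_minor,
                                     v->version_patchlevel, v->date);
                              drmFreeVersion(v);
                          }
                          close(fd);
                          return 0;
                      }

                      That only covers per-driver versions, though, not tracking of the shared DRM API itself.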

                      But now on to a more fundamental issue:
                      seriously, has anyone thought about multiple screens (more than two)?
                      Or about the problems with multiple cards?
                      Or the problems with multiple OpenGL contexts?

                      It looks like nobody tries to think first a little.
                      Everyone just codes whatever he or she thinks is useful at that particular moment.

                      Seriously guys, do some decent planning and brainstorming on a wiki first.
                      Look at the Windows APIs. Ask game developers to comment on your patches.
                      Ask random programmers to comment on your patches.

                      Look at ALSA; there are certain parallels between graphics and audio cards, e.g. memory management: audio cards can also have dedicated memory.

                      On to the most constructive attitude in the whole discussion:
                      DRM has been trying to play catch-up for years. GPUs are likely the most complex piece of hardware you can find in a computer (from memory management, to complex modesetting with all kinds of tweaks, to the utterly crazy 3D engine and everything that comes with it). Unlike other pieces of hardware, when it comes to fully using GPU acceleration there is no common denominator that we would be able to put in the kernel (like a TCP stack or a filesystem). I am sure very few people would like to see a full GL stack in the kernel.
                      ...
                      Jerome
                      Now, this kind of view is the most constructive, but it's still not good enough for me.

                      I strongly disagree with the view presented here:
                      there IS a common denominator, and it's the graphics card.
                      It's a category that already roughly defines a certain feature set,
                      and with that, implicitly formulated infrastructure requirements in the kernel.


                      Please improve the way state trackers are handled. There should be a kernel state tracker API.
                      Drivers should present their APIs as state trackers to the kernel,
                      not the other way around.

                      The differences lie in how the functions are implemented. You know how most subsystems handle that? By making drivers and putting all of the differences in the drivers.

                      It looks like the DRM guys tried to put too much stuff in the kernel to unify things, without making those things generic enough to be useful.

                      For example, sound cards also have dedicated memory.
                      Why not think of a general way of doing memory management for, and by, subsystems?
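
                      Purely as a hypothetical illustration of what such a cross-subsystem interface could look like (none of these devmem_* names exist in the kernel; the real GPU-specific managers today are TTM and GEM), a rough sketch:

                      /* Hypothetical sketch only: a generic pool of device-local memory that
                       * any subsystem (DRM, ALSA, ...) could register and allocate from. */
                      #include <linux/types.h>

                      struct devmem_pool {
                          const char  *name;   /* e.g. "radeon-vram" or "emu10k1-ram" */
                          phys_addr_t  base;   /* start of the device-local aperture  */
                          size_t       size;   /* total size in bytes                 */
                      };

                      struct devmem_buf;       /* opaque allocation handle */

                      /* A driver registers its pool of dedicated memory with the
                       * (imaginary) core... */
                      int devmem_pool_register(struct devmem_pool *pool);

                      /* ...and the core handles allocation, fragmentation and eviction
                       * policy, so each driver doesn't have to reinvent them. */
                      struct devmem_buf *devmem_alloc(struct devmem_pool *pool,
                                                      size_t size, unsigned long flags);
                      void devmem_free(struct devmem_buf *buf);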

                      A full GL stack in the kernel would mean kernel + drivers + userspace APIs.
                      What we need most in the kernel is a graphics card API that lets graphics card drivers hook in,
                      something like ALSA for graphics cards.
                      Then we need something that presents APIs to userspace in a unified way.
                      If this is done well, there shouldn't be any APIs hardcoded into it;
                      everything available should be exposed by drivers and/or libraries.
                      That's it! Software OpenGL rendering with a library would be part of this.
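
                      To make the ALSA analogy concrete: an ALSA driver registers an snd_card with the sound core, and the core exposes the uniform /dev/snd/ device nodes. A graphics equivalent might look roughly like this; it is purely hypothetical, and none of these gfx_* symbols exist:

                      #include <linux/types.h>

                      struct gfx_card;

                      /* Operations the driver implements; the differences between cards
                       * live behind this table. */
                      struct gfx_card_ops {
                          int (*set_mode)(struct gfx_card *card, int crtc,
                                          int width, int height, int refresh);
                          int (*submit)(struct gfx_card *card, const void *cmds, size_t len);
                      };

                      struct gfx_card {
                          const char                *name;  /* e.g. "radeon"           */
                          const struct gfx_card_ops *ops;   /* filled in by the driver */
                          void                      *priv;  /* driver-private state    */
                      };

                      /* The (imaginary) core would expose one uniform character device per
                       * registered card to userspace, much like ALSA's /dev/snd/ nodes. */
                      int  gfx_card_register(struct gfx_card *card);
                      void gfx_card_unregister(struct gfx_card *card);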

                      A device is there to implement functions.
                      Having software libraries as a fallback for that is not a bad thing, of course.

                      Thankfully Keith Packard didn't get his way with merging the drivers into the kernel, or this situation would be repeated many times over, and worse.


                      If you think memory management for dedicated memory across all kinds of devices is bloated or far-fetched,
                      then you just aren't capable of writing a decent GENERAL kernel API.
                      Adjust your attitude, because making things as limited as you do will cause you to make the same KIND of mistake over and over again.

                      • #12
                        Originally posted by 89c51 View Post
                        I hope that this discussion will lead the devs to a solution for the development model. The manpower issue will probably remain.
                        I hope this discussion doesn't lead to a "solution" that involves making technical compromises for the sake of an artificial mandate to accommodate an unrealistic development model. Sure Linus -- pound the round peg hard enough and it'll fit the square hole one way or another.

                        • #13
                          Originally posted by plonoma View Post
                          Look at the Windows APIs.
                          Not if you value your sanity.
                          I looked at the Windows API once. Never again.

                          • #14
                            Multiple cards that can be hot-switched without the software restarting are a basic requirement for a kernel that wants to be generally usable everywhere.


                            As a programmer I want to be able to use all of the computing power together.
                            I want to be able to use multiple graphics cards to calculate different parts with OpenCL.
                            Of course the operating system must make it possible for multiple processes to use the graphics card, and some kind of process manager for the user needs to exist too.
                            And it must be possible for programmers to set optimal GPU affinity across multiple GPUs.
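
                            A minimal OpenCL 1.1 host-side sketch of that idea in C, assuming at least one OpenCL platform with GPU devices is installed (link with -lOpenCL); how the work is actually split between the cards is left to the application:

                            /* Give every GPU on the first platform its own context and queue,
                             * so independent chunks of work can be spread across the cards. */
                            #include <stdio.h>
                            #include <CL/cl.h>

                            int main(void)
                            {
                                cl_platform_id platform;
                                cl_uint ndev = 0;
                                cl_device_id dev[8];

                                clGetPlatformIDs(1, &platform, NULL);
                                clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, NULL, &ndev);
                                if (ndev > 8)
                                    ndev = 8;
                                clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, ndev, dev, NULL);

                                for (cl_uint i = 0; i < ndev; i++) {
                                    char name[256];
                                    cl_int err;

                                    clGetDeviceInfo(dev[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
                                    cl_context ctx = clCreateContext(NULL, 1, &dev[i], NULL, NULL, &err);
                                    cl_command_queue q = clCreateCommandQueue(ctx, dev[i], 0, &err);

                                    printf("GPU %u: %s\n", i, name);
                                    /* ... enqueue this card's share of the work on q ... */

                                    clReleaseCommandQueue(q);
                                    clReleaseContext(ctx);
                                }
                                return 0;
                            }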

                            • #15
                              Originally posted by pvtcupcakes View Post
                              Not if you value your sanity.
                              I looked at the Windows API once. Never again.
                              I know.

                              I'm talking about looking at different viewpoints.

                              Because it's something different, its problems are easier to notice.
                              I meant using them as a reference, a starting point for some kind of improvement brainstorming.
