The Issues With The Linux Kernel DRM, Continued

  • The Issues With The Linux Kernel DRM, Continued

    Phoronix: The Issues With The Linux Kernel DRM, Continued

    Yesterday Linus voiced his anger towards DRM once again. Not the kind of DRM that is commonly criticized, Digital Rights Management, but rather the Linux kernel's Direct Rendering Manager. With the Linux 2.6.39 kernel, it has been yet another cycle in which Linus has been less than happy with the pull request for this sub-system that handles the open-source graphics drivers. Changes are needed...

  • #2
    IIRC a radeonhd developer proposed merging the components of the open-source drivers some time ago, but nobody cared about it...

  • #3
    I feel sorry for those GPU developers. That stuff is a nightmare.

  • #4
    It seems like it would have been better for the DRM developers to stay out of the kernel, at least until DRM had really become stable.

  • #5
    Jerome's reply essentially mirrors my commentary the last time around: this is a symptom of forcing a one-size-fits-all development cycle onto a group of projects with fundamentally different requirements. Something similar is playing out in the web browser wars. Developing a category of software in which the best available implementations cover only a fraction of the spec isn't even the same sport as maintaining a mostly complete implementation.

    The vanilla kernel isn't exactly a high-assurance system built with formal methods to standards of provable correctness. Small pieces of it might be, thanks to downstream users putting Linux on their ICBM guidance systems, but not every last component of every driver is. They need to work together to find an approach with variable flexibility for variable requirements.

  • #6
    Don't we (and the few DRM developers) already have enough problems and work with the graphics stack without Linus playing the diva and permanently bitching about code being two weeks late?

  • #7
    The argument about DRM stuff taking too long to reach the end user seems kind of bogus. Nothing is stopping them from releasing their development code before it lands in Linus' tree if some distribution wants it sooner. Distribute an out-of-tree driver with the current fixes to the distros that want it, and then, at the next merge window, merge it upstream.

  • #8
    I hope this discussion will lead the devs to a solution for the development model. The manpower issue will probably remain.

  • #9
    Don't care. I'd rather the stuff work halfway than not at all. I ended up with one of my NVIDIA GPUs (the G98-based 8400 GS) messed up because the driver was trying to do hardware acceleration and wasn't using the GPU right. When I switched to a completely untested, unready Radeon 5550, it worked again. It uses the software rasterizer and is a step back, but at least I don't have to reboot, switch back to the onboard GPU, reboot again, move the monitor cable from the add-in card to the onboard GPU, power back on, and get back into Linux.
    I think I ran three X servers and three different open-source video drivers during the Fedora 10 cycle. One of the X servers screwed up so badly I had to drop back to text mode (runlevel 3) and do a yum downgrade on it, and that was about three months after the Fedora 10 release. Now they won't do that kind of thing any more. You can't get them to try more than one or two new video drivers or X servers, and fixes only come during alpha and beta. The new guy running the Fedora project sucks compared to the guy they dumped last year.

    I'd play more with Arch Linux if I weren't enjoying my little 5550 maxing out graphics on games. I love that frikkin card. I never even considered HIS until I tried this card.
    Quit trying to make everything safe. Either shovel us the untested crap or put us all on safe, boring, no-drama stable stuff. I got really disenchanted during the Fedora 13 cycle when they stopped shoving us two kernels and three X servers, which leaves kernels 2.6.36 and .37 without anybody testing anything on them because they came out between releases. So of course .38 and .39 are going to be full of untested crap. At least Ubuntu is going to work over .37, I think.

  • #10
    I think Linus may be getting a little too far away from his hacker roots here. So what if you merge fresh code into mainline? I don't think it's intended to be stable in the same sense as something Red Hat would push to RHEL customers anyhow. It's gonna be even harder to attract developers if you run things all formally like a company; I think a lot of volunteer coders like hacking on Linux in the evenings so they can push code out the door and escape that kinda repressive crap from their day job.
