Concerns Over Merging Drivers Back Into The X Server


  • Concerns Over Merging Drivers Back Into The X Server

    Phoronix: Concerns Over Merging Drivers Back Into The X Server

    While development efforts within the X.Org community are now ramping up for the release of X Server 1.9 that should arrive in August, there is an ongoing discussion concerning a planned long-term change for the X Server: pulling the drivers back in...


  • #2
Hasn't there been a similar case with HAL?
Or is it on a different level?
Actually, I didn't really see any positive aspects of merging back into X, except the Intel driver getting some code-sharing benefit.



    • #3
Why not keep using the modularized approach?



      • #4
Why not keep using the modularized approach?
Currently the most successful project at creating large numbers of drivers for Linux is the Linux kernel itself.

Well, the most successful for any open-source OS, period. Linux now has more drivers, more driver features, higher-quality drivers, and better-tested drivers than any of the BSDs or OpenSolaris or anything else.

And it got this way by having a single monolithic code base with no stable internal APIs or ABIs. This allows rapid development and rapid changes, without having to give a darn about backwards or forwards compatibility with existing or future Linux versions.

And by completely abandoning any notion of binary compatibility with proprietary drivers.

A Linux driver is specific to a specific version of Linux, and this has paid off.

The only part that Linux cares about keeping stable is the external interface for userspace applications and drivers. It is not perfect, but generally speaking you won't run into many problems when updating your kernel.

In comparison, X.Org/XFree86 always put a high priority on establishing protocols, ABIs, and APIs, on making sure that the drivers they ship run happily across multiple versions of the X server, and on supporting proprietary drivers.


The results kind of speak for themselves...


So compared to Linux, the X.org development process is much more difficult and much slower, and with all the extra work and code that goes into maintaining ABI/API compatibility across multiple versions, it's much buggier.

Look at the Linux kernel's TMS/UXA, DRI2, and KMS stuff. That is a very large amount of work that went into the kernel to support graphics, and it was pulled off with relatively little pain.

All the new graphics stuff in Linux was done and ready for testing and end-user consumption (that is, it was actually being included in distributions by default) long before any of the related X.org bits were. The only reason the stuff in the kernel did not get tested and debugged sooner and better than it did was that there was no readily available userspace portion to go with it.


I expect that the Xorg developers wish to drop the overhead of having to coordinate and maintain API/ABI changes altogether and go with a quick, get-it-done approach.

Sure, it makes it difficult for users to test out drivers individually, but it might make the pace of development much quicker, so that in the long run everybody gets the newer stuff faster.


        --------------------

What is so horrible about releasing an X server and X drivers every 3-4 months, versus the current status quo of drivers getting released randomly and the X server getting released every 6 months plus 4 months of delays?

        To me it seems like a win.



        • #5
I support the proposed change (most of it, anyway), but I have to disagree with some of the things you just said...

          Originally posted by drag View Post
In comparison, X.Org/XFree86 always put a high priority on establishing protocols, ABIs, and APIs, on making sure that the drivers they ship run happily across multiple versions of the X server, and on supporting proprietary drivers. The results kind of speak for themselves...
I believe the drivers and Xorg *were* integrated until relatively recently; most of Xorg development was done with the drivers and server integrated. To the extent that you want to criticise Xorg (which IMO is missing the point anyway, since most of the problems attributed to Xorg are actually related to other parts of the stack), I think that would be an argument *against* integration, not for it.

          Originally posted by drag View Post
So compared to Linux, the X.org development process is much more difficult and much slower, and with all the extra work and code that goes into maintaining ABI/API compatibility across multiple versions, it's much buggier.
          I haven't seen much effort going into maintaining compatibility. Drivers need to be able to work with older Xorg releases (and kernels) simply because new hardware comes out all the time and most distros do not update xorg (or kernel) between major distro releases.

          Originally posted by drag View Post
Look at the Linux kernel's TMS/UXA, DRI2, and KMS stuff. That is a very large amount of work that went into the kernel to support graphics, and it was pulled off with relatively little pain.
DRI2 took nearly three years; I wouldn't call that "relatively little pain".

UXA is mostly Xorg, isn't it? DRI2 involved coordinated changes to Mesa, X, and DRM. What is TMS?

          Originally posted by drag View Post
All the new graphics stuff in Linux was done and ready for testing and end-user consumption (that is, it was actually being included in distributions by default) long before any of the related X.org bits were. The only reason the stuff in the kernel did not get tested and debugged sooner and better than it did was that there was no readily available userspace portion to go with it.
          This doesn't match what I saw. Userspace and kernel code were developed in lockstep, and the pacing item was having the kernel code sufficiently solid that it could be moved out of staging. Don't think I saw many cases where kernel was waiting for userspace; mostly the other way round.

          Originally posted by drag View Post
I expect that the Xorg developers wish to drop the overhead of having to coordinate and maintain API/ABI changes altogether and go with a quick, get-it-done approach. Sure, it makes it difficult for users to test out drivers individually, but it might make the pace of development much quicker, so that in the long run everybody gets the newer stuff faster.
          The API/ABI thing is a red herring IMO. The real wins here are (a) getting mesa, kernel and X onto a common quarterly release cycle, allowing most driver releases to align with X, and (b) getting closer to continuous integration between driver development and X development activities, where driver devs are testing with the latest X and vice versa. Neither of those actually require integration, although if everything is done properly then integrating the trees will simplify things like build/test infrastructure.

          Originally posted by drag View Post
What is so horrible about releasing an X server and X drivers every 3-4 months, versus the current status quo of drivers getting released randomly and the X server getting released every 6 months plus 4 months of delays? To me it seems like a win.
          Getting the entire graphics stack onto a common release cycle is a Good Thing IMO. The only thing we need to confirm is that we don't lose the ability to build and test drivers in the latest tree against earlier X server releases, unless everyone is 110% confident that distros will all suddenly start updating their Xorg and kernel code every quarter. Without that, new hardware enablement is going to be a mess since distros won't be using the latest Xorg but users will still want to use their newly released hardware.



          • #6
I'm a firm believer that the developers should be dictating this, not us or the distros.

The X developers always recommend an X server to use with any given driver; if a user or distro decides to use a different version than recommended, they should have to do the legwork.

This should speed up development and also improve code quality: as in the example given above of a broken input driver, problems will be spotted and fixed much more quickly than if people were purely testing out a DDX driver.



            • #7
There is a little problem with merging the drivers into X, though: one driver cannot be merged, and this particular one will hopefully become very important in the future. It's the Xorg state tracker in Mesa, a generic 2D driver written on top of a 3D engine.

              -Marek



              • #8
                Originally posted by FireBurn View Post
I'm a firm believer that the developers should be dictating this, not us or the distros.

The X developers always recommend an X server to use with any given driver; if a user or distro decides to use a different version than recommended, they should have to do the legwork.
This is how forks happen. If the X server project decides to use a development model that the distros can't feasibly maintain, the entire X server will be forked or replaced. It's one thing to backport kernel modules and Xorg drivers, but it's another thing entirely to replace the whole graphics stack every 3 months. Regardless of how attractive this option seems to Xorg developers, if distros can't support it, it's no longer useful.



                • #9
                  Originally posted by sdennie View Post
This is how forks happen. If the X server project decides to use a development model that the distros can't feasibly maintain, the entire X server will be forked or replaced. It's one thing to backport kernel modules and Xorg drivers, but it's another thing entirely to replace the whole graphics stack every 3 months. Regardless of how attractive this option seems to Xorg developers, if distros can't support it, it's no longer useful.
                  You're dreaming! X development is healthier now than it has ever been.

The most forking we have seen was when XGL and AIGLX were going head to head. When there was a clear consensus, Novell dropped its competing tech (however superior it might have been) and joined everybody else.



                  • #10
                    Originally posted by d4ddi0 View Post
                    You're dreaming! X development is healthier now than it has ever been.

The most forking we have seen was when XGL and AIGLX were going head to head. When there was a clear consensus, Novell dropped its competing tech (however superior it might have been) and joined everybody else.
I don't doubt that. What I'm saying is that if a core component of all distros becomes too cumbersome for the distros to maintain, that component will be forked or replaced. A distro has to weigh the benefits and risks of updating packages to support new hardware. If all distros decide that the risks of updating a package outweigh the benefits, they either won't update it or will find a less risky package to include (even if that means forking or replacing the package in question).

