Luc Calls For A Dead Linux Desktop If Keith Gets His Way


#41

Even in the past it relied on macro hell to work.

#42

Originally posted by Xanbreon:
> I suppose xorg is the part of Linux I am most afraid of breaking. If the worst comes to the worst and I build a new kernel that doesn't boot, no harm no foul: I reboot with one of the others I have installed. With xorg I might have to spend god knows how long purging configs and removing/reinstalling bits.
>
> If the drivers were their own packages, upgrading one wouldn't be as scary; you could just remove/reinstall one part without too much fuss.

I don't quite understand the implications of what's being discussed, and reading Luc's blog post leaves me scratching my head. But if this is about having a fat X server that includes everything, where users (regular mortals as well as distributions) are forced to rebuild the whole lot to pick up some changes, it doesn't sound very appealing.

As it is now, anybody with half a clue can easily try new drivers, relying on their compatibility with the X server provided by the distribution. Asking users/testers to go through the complications of building the server will lower the number of people willing to play around. And I agree with you: X looks really ugly and hairy, and it contains two of the last things you'd want to see broken, input and display. It is bad enough that at times you have to update to a new kernel on top of building Mesa, DRM and the 2D drivers. As a mere user, I know I wouldn't go the extra mile of compiling X.

Also, would this have a negative effect on the way distributions are currently able to cherry-pick fixes or improvements without updating the whole stack (and thus avoiding barely tested code)?

#43

Originally posted by pingufunkybeat:
> I believe that it's still possible to check out the drm part of the kernel and compile it as a module to run with your kernel.
>
> The problem is that binary compatibility breaks more often nowadays, so there's no guarantee that it will work.

Which is what I'm talking about. The out-of-tree version could be built against several kernels; the in-kernel version really only works with the kernel it's embedded in.
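
For anyone who hasn't tried it, the mechanics are straightforward. Here's a minimal sketch (a made-up hello module, not actual drm code) of what an out-of-tree module boils down to:

/*
 * hello.c -- trivial out-of-tree module skeleton, illustrative only.
 * With a one-line Kbuild file (obj-m += hello.o) the same source can
 * be rebuilt against any installed kernel:
 *
 *   make -C /lib/modules/<version>/build M=$PWD modules
 *
 * In-tree code, by contrast, ships built for exactly one kernel.
 */
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
	pr_info("hello: loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");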

#44

Originally posted by pingufunkybeat:
> It replaces all of Mesa (much of which is shared now)

Mesa is not part of X, actually; it's an OpenGL implementation, and as such it's only one among several. There's nothing wrong with offering your own GL implementation.
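
Incidentally, it's easy to check which implementation you're actually running. A rough sketch (error handling mostly elided) that asks the loaded GL library to identify itself:

/* gl_ident.c -- rough sketch: create a throwaway GLX context and ask the
 * loaded OpenGL implementation to identify itself (Mesa, fglrx, NVIDIA...).
 * Build with something like: gcc gl_ident.c -o gl_ident -lGL -lX11 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
	Display *dpy = XOpenDisplay(NULL);
	if (!dpy)
		return 1;

	int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
	XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
	GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);

	/* A context needs a drawable; a tiny unmapped window will do,
	 * as long as its visual matches the one GLX picked. */
	XSetWindowAttributes swa;
	swa.border_pixel = 0;
	swa.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
	                               vi->visual, AllocNone);
	Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, 1, 1, 0,
	                           vi->depth, InputOutput, vi->visual,
	                           CWBorderPixel | CWColormap, &swa);
	glXMakeCurrent(dpy, win, ctx);

	printf("vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
	printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));
	printf("version:  %s\n", (const char *)glGetString(GL_VERSION));

	glXMakeCurrent(dpy, None, NULL);
	glXDestroyContext(dpy, ctx);
	XCloseDisplay(dpy);
	return 0;
}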


> all of kernel drm (much of it is shared now)

It's a driver, what did you expect? :P It's like saying the ATI DRM replaces the Intel DRM...


> and the bottom half of the X stack, and everything related to direct rendering, all of which is shared now.

Dunno about that. From reading Phoronix over the years I was under the impression that fglrx does not do that; only the NVidia blob does.

> Also it replaces all of Gallium3d, which also has lots of shared code.

Now you've lost me completely. How does it replace Gallium3D if it doesn't even need it? (Btw, I don't even have Gallium3D installed and I'm using the open drivers currently, so what's there to replace anyway?)

#45

Originally posted by RealNC:
> Mesa is not part of X, actually; it's an OpenGL implementation, and as such it's only one among several. There's nothing wrong with offering your own GL implementation.

Sure.

But this is clearly not what an open source driver should do, as long as there is an existing implementation in Mesa. Open drivers should use Mesa, not bundle their own GL implementation.

> It's a driver, what did you expect? :P It's like saying the ATI DRM replaces the Intel DRM...

Parts of it are shared, parts card-specific, as far as I understand. All the drivers use a standard interface (based on GEM) to communicate with the kernel. Binary blobs bypass all of that.
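
To make the "standard interface" bit concrete, here's a rough sketch of the generic GEM calls common to the open drivers. The buffer *creation* ioctl is the card-specific part, so this just opens a buffer by a flink name that is assumed (for illustration) to have been exported by some other process:

/* gem_open.c -- rough sketch of the generic GEM ioctls (open a buffer by
 * its global "flink" name, then close the handle). These work the same
 * way on intel, radeon and nouveau; only buffer creation is driver-specific.
 * Build with something like: gcc gem_open.c $(pkg-config --cflags --libs libdrm) */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>	/* drmIoctl() and the generic drm ioctls */

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0) {
		perror("open /dev/dri/card0");
		return 1;
	}

	/* Assume some other process exported a buffer under flink name 1. */
	struct drm_gem_open op = { .name = 1 };
	if (drmIoctl(fd, DRM_IOCTL_GEM_OPEN, &op) == 0) {
		printf("got handle %u, buffer is %llu bytes\n",
		       op.handle, (unsigned long long)op.size);

		struct drm_gem_close cl = { .handle = op.handle };
		drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &cl);
	} else {
		perror("DRM_IOCTL_GEM_OPEN");
	}

	close(fd);
	return 0;
}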

> Now you've lost me completely. How does it replace Gallium3D if it doesn't even need it? (Btw, I don't even have Gallium3D installed and I'm using the open drivers currently, so what's there to replace anyway?)

You don't need Gallium3d if you use classic Mesa drivers (I don't use it ATM), but it's what all the drivers are migrating towards.

And the nvidia and ati binary drivers internally do pretty much exactly what Gallium3d does: an intermediate representation which allows them to share large parts of the OpenGL code, the acceleration infrastructure, etc.
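
If you're wondering what that intermediate representation looks like on the Gallium3d side: it's called TGSI, and in its text form a pass-through vertex shader is just a few lines (embedded here as a C string, the way test code usually carries them):

/* Illustrative only: a pass-through vertex shader in TGSI's text syntax,
 * the IR that Gallium3d state trackers hand to every hardware driver.
 * Each driver then compiles this one representation to its own ISA. */
static const char *passthrough_vs =
	"VERT\n"
	"DCL IN[0]\n"
	"DCL OUT[0], POSITION\n"
	"MOV OUT[0], IN[0]\n"	/* copy the input vertex straight through */
	"END\n";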

#46

Originally posted by pingufunkybeat:
> Sure.
>
> But this is clearly not what an open source driver should do, as long as there is an existing implementation in Mesa. Open drivers should use Mesa, not bundle their own GL implementation.

And they can be changed to accommodate API changes in the DDX. From what I gather from following Git commits, it's pretty straightforward to keep up, and not something that warrants pulling the drivers entirely into X.

> Parts of it [DRM] are shared, parts card-specific, as far as I understand. All the drivers use a standard interface (based on GEM) to communicate with the kernel. Binary blobs bypass all of that.

They don't bypass it; they simply don't use it, which is a different thing. But anyway, the in-kernel DRM is not part of X.Org either, so talking about X ABI breakage is a moot point here.

> You don't need Gallium3d if you use classic Mesa drivers (I don't use it ATM), but it's what all the drivers are migrating towards.

See my comment about Mesa above.

> And the nvidia and ati binary drivers internally do pretty much exactly what Gallium3d does: an intermediate representation which allows them to share large parts of the OpenGL code, the acceleration infrastructure, etc.

Except that it's too slow. The proprietary implementations are way faster (and will probably stay that way; a lot of money and man-hours went into them, and they have been tuned and optimized to death in order to get as much performance as possible).

#47

Originally posted by RealNC:
> And they can be changed to accommodate API changes in the DDX. From what I gather from following Git commits, it's pretty straightforward to keep up, and not something that warrants pulling the drivers entirely into X.

But why would we even need more than a single generic DDX that uses KMS, as far as the open drivers go?

#48

Originally posted by nanonyme:
> But why would we even need more than a single generic DDX that uses KMS, as far as the open drivers go?

The DDX is user-space. The in-kernel KMS is not intended to be used by a DDX; it's intended to replace it entirely.

#49

Yes and no... you still need a DDX because *something* has to accept modesetting and 2D/Xv acceleration requests from the X server. The question is to what extent a generic DDX is feasible -- which in turn implies something like Gallium3D for acceleration, and a modesetting interface that is consistent across hardware vendors.

AFAIK the current thinking is that memory management (mostly in the kernel, but with a non-trivial userspace component in libdrm-<hw>) is hardware-vendor-specific, and as a result the idea of a generic DDX doesn't seem as likely in practice as it did in theory.
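
The modesetting half of that already looks vendor-independent today. A small read-only sketch of what it looks like from userspace through libdrm -- the same calls whichever kernel driver sits behind the device node:

/* kms_list.c -- small read-only sketch: enumerate connectors and their
 * modes through the KMS interface. The calls are identical whether an
 * intel, radeon or nouveau driver is behind /dev/dri/card0.
 * Build with something like: gcc kms_list.c $(pkg-config --cflags --libs libdrm) */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0) {
		perror("open /dev/dri/card0");
		return 1;
	}

	drmModeRes *res = drmModeGetResources(fd);
	if (!res) {
		fprintf(stderr, "no KMS resources -- driver without modesetting?\n");
		return 1;
	}

	for (int i = 0; i < res->count_connectors; i++) {
		drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
		if (!conn)
			continue;
		if (conn->connection == DRM_MODE_CONNECTED) {
			for (int m = 0; m < conn->count_modes; m++)
				printf("connector %u: %s @ %u Hz\n",
				       conn->connector_id,
				       conn->modes[m].name,
				       conn->modes[m].vrefresh);
		}
		drmModeFreeConnector(conn);
	}

	drmModeFreeResources(res);
	close(fd);
	return 0;
}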

#50

Originally posted by RealNC:
> Except that it's too slow. The proprietary implementations are way faster (and will probably stay that way; a lot of money and man-hours went into them, and they have been tuned and optimized to death in order to get as much performance as possible).

You know, there's a huge difference between slower and too slow. r300g on my hardware may not get the same performance as fglrx, but it's already fast enough for all games.
