How Important Is The Wayland Display Server?

  • #71
    Originally posted by MostAwesomeDude View Post
    VNC can't possibly beat X in speed because it has to transfer more data.
    That's true, but the radical shrinking of the codebase by a) moving all painting to the client side and b) ignoring the X protocol is a big architectural plus for Wayland in my book. Need remote access? On top of Wayland you can run either an X protocol server (the current X11 server, for example) or a VNC protocol server; clean and simple. The application paints the content of its window and Gnome adds decorations to it, all on the client side. Then it forwards the window to the server; the server does nothing more than combine the windows from all clients and forward the resulting image to the hardware in an aesthetically pleasing, flicker-free way. To me, this sounds like how things _should_ be.
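
    To ground that description, here is a minimal sketch of the model described above, written against the wayland-client C API (which postdates this thread): the client paints pixels into a shared-memory buffer and commits it, and the compositor only has to combine finished windows. The memfd_create call is just one convenient way to get a shareable buffer, and the window role/decoration setup is omitted, so most compositors will not actually map this surface; it is an illustration, not a complete client.

    ```c
    /* Hedged sketch of the Wayland model: the client paints its own pixels
     * and hands the finished image to the compositor, which only composites.
     * Role assignment is omitted, so this surface will not actually be shown
     * by most compositors. Compile with -lwayland-client. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    static struct wl_compositor *compositor;
    static struct wl_shm *shm;

    /* The registry announces which global interfaces the compositor offers. */
    static void on_global(void *data, struct wl_registry *reg, uint32_t name,
                          const char *iface, uint32_t version)
    {
        if (strcmp(iface, "wl_compositor") == 0)
            compositor = wl_registry_bind(reg, name, &wl_compositor_interface, 1);
        else if (strcmp(iface, "wl_shm") == 0)
            shm = wl_registry_bind(reg, name, &wl_shm_interface, 1);
    }
    static void on_global_remove(void *d, struct wl_registry *r, uint32_t n) {}
    static const struct wl_registry_listener reg_listener = {
        on_global, on_global_remove
    };

    int main(void)
    {
        const int width = 256, height = 256;
        const int stride = width * 4, size = stride * height;

        struct wl_display *display = wl_display_connect(NULL);
        if (!display)
            return 1;
        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &reg_listener, NULL);
        wl_display_roundtrip(display); /* wait for the globals to arrive */

        /* All painting happens client-side, in ordinary shared memory. */
        int fd = memfd_create("frame", 0); /* Linux-specific; any shm fd works */
        ftruncate(fd, size);
        uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        for (int i = 0; i < width * height; i++)
            pixels[i] = 0xff336699; /* this is "the window content" */

        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        struct wl_buffer *buffer = wl_shm_pool_create_buffer(
            pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);

        /* Hand the finished frame over; the server just composites it. */
        struct wl_surface *surface = wl_compositor_create_surface(compositor);
        wl_surface_attach(surface, buffer, 0, 0);
        wl_surface_damage(surface, 0, 0, width, height);
        wl_surface_commit(surface);

        while (wl_display_dispatch(display) != -1)
            ; /* keep the connection alive */
        return 0;
    }
    ```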



    • #72
      Originally posted by Kjella View Post
      Well, this is starting to turn into one of those open source sessions I hate where it's "pin the tail on the donkey" because you can't even figure out which module to blame. So let's say I get one of these triple-head cards AMD has been showing off.

      I start with two hardware accelerated composited desktops.
      - X only includes 2D and Xv hardware acceleration; 3D and other acceleration is not part of Xorg
      - most users of composition are using GL for the compositor, again not part of X or X drivers (X has an extension for GL, but the GL driver is not part of Xorg)
      - the compositor is not part of Xorg

      Originally posted by Kjella View Post
      I launch a 3D accelerated game in one.
      - 3D acceleration is not part of Xorg

      Originally posted by Kjella View Post
      I launch a 1080p tearfree video in the other.
      - the logic which makes composited output tear-free is not part of Xorg

      Originally posted by Kjella View Post
      I plug in a third screen (hotswap) to get a third desktop.
      - this *is* part of X

      Originally posted by Kjella View Post
      Bonus credit:
      Let me plug in another graphics card and run CF/SLI.
      - CF/SLI are 3D-related, not part of Xorg

      Originally posted by Kjella View Post
      Make that video hardware accelerated on shaders or fixed function.
      - video decode APIs are not part of Xorg; they are normally direct-rendered

      Originally posted by Kjella View Post
      Let me move all those to different screens...on different graphics cards.
      - window move goes through X, but X just notifies the non-X code doing compositing, 3D and video decode (see the sketch after this list)

      Originally posted by Kjella View Post
      Now is any of this at all related to X? X drivers? Mesa? DRI? DRM?
      - most of it is outside X in mesa, drm and other non-X drivers
      - the DRI extension is what allows all this non-X stuff to stay in sync with X regarding window locations etc.
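
      A hypothetical minimal sketch of what that "X just notifies" path looks like from a compositor's side, using XCB: X reports the window move, and everything the compositor then does with GL or video decode happens outside Xorg.

      ```c
      /* Minimal sketch: how non-X code learns about window moves. We ask the
       * X server for ConfigureNotify events on the root window's children; a
       * real compositor would react by re-rendering, but that part lives
       * entirely outside Xorg. Compile with -lxcb. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <xcb/xcb.h>

      int main(void)
      {
          xcb_connection_t *conn = xcb_connect(NULL, NULL);
          if (xcb_connection_has_error(conn))
              return 1;
          xcb_screen_t *screen = xcb_setup_roots_iterator(xcb_get_setup(conn)).data;

          /* Ask to be told whenever a child of the root moves or resizes. */
          uint32_t mask = XCB_EVENT_MASK_SUBSTRUCTURE_NOTIFY;
          xcb_change_window_attributes(conn, screen->root, XCB_CW_EVENT_MASK, &mask);
          xcb_flush(conn);

          xcb_generic_event_t *ev;
          while ((ev = xcb_wait_for_event(conn))) {
              if ((ev->response_type & ~0x80) == XCB_CONFIGURE_NOTIFY) {
                  xcb_configure_notify_event_t *cn = (void *)ev;
                  printf("window 0x%x is now %dx%d at %d,%d\n",
                         (unsigned)cn->window, cn->width, cn->height, cn->x, cn->y);
                  /* ...here a compositor would redraw via GL, resync video, etc. */
              }
              free(ev);
          }
          xcb_disconnect(conn);
          return 0;
      }
      ```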

      One possible source of confusion is that most of the work being done on the non-X components (mesa, drm, etc.) is being done by Xorg developers who recognize their importance even though they are not actually part of X. Xorg, mesa and drm have separate release schedules and version numbers, even though you need a compatible set of all three for a working system.

      Originally posted by Kjella View Post
      Or maybe I'm going about it the wrong way, if you were given a blank sheet and those requirements, would you create X? Or what would the ideal graphics stack look like? I'm struggling to get the picture of what does what, and what should be doing what...
      Here's where it gets complicated. Everyone agrees that something like Wayland represents a much simpler display environment, and that it reflects the current reality that an increasing amount of output bypasses X and is displayed with direct rendering... and the direct rendering drivers are *not* part of Xorg (although I think they should be).

      The important thing to understand is that a lot of what users think of as "Xorg" is actually not part of X; it is just big, complex code which happens to be used with X although it is not part of the Xorg releases. That said, architectural work done in these non-X portions (e.g. the implementation of KMS) usually involves both an X and a non-X component, so progress on the non-X component affects X release schedules.

      What the X developers are trying to explain is that most of the *user* problems attributed to X (stability, crashes, etc.) are driver related, where the bulk of that driver code is *not* part of X but *is* part of the Linux graphics ecosystem which everyone blindly associates with X.

      The main effort required is on the driver stack which is used by both X and Wayland, which (among other things) involves moving almost all of the graphics hardware support *out* of X and its drivers. It is this major re-architecture of the driver stack which has consumed a lot of the Xorg developer time over the last couple of years, and it is the results of that rearchitecture which makes X-less display environments like Wayland possible without re-inventing the entire stack.

      It's easy to fall into the trap of assuming that the legacy code in X is the cause of (a) user problems and (b) the perception of unpredictable release schedules. As a number of developers have tried to say, user problems are primarily related to the driver stack which is going to be shared by Wayland, and the perception of release schedule unreliability results from a combination of content-based (rather than schedule-based) releases and Xorg project managers who are not particularly PR-oriented.

      Wayland will definitely provide a smaller "core" code base which has to be a good thing; the mistake is assuming that the current larger code base is the cause of user problems. Put differently, the "good things" Wayland brings are largely orthogonal to the problems users experience today, but the good news is that the Xorg community has been working hard for a couple of years on fixing those problems and Wayland will benefit equally from that work.

      Just to be clear, I think Wayland is an important project; I just think the "Wayland vs Xorg" thing is missing the point.
      Last edited by bridgman; 14 September 2009, 11:19 AM.



      • #73
        Originally posted by M?P?F View Post
        What is the answer? Xorg???
        Please explain.

        The solution is to fix the driver model and make X independent from your hardware.


        Originally posted by M?P?F View Post
        You're right, but what's the use of keeping a huge piece of code with possibly a lot of dead code? Why not use a new window server made for modern uses?

        Because the 'huge piece of code' is what you need for application compatibility. If you're running Wayland you'll still have to run the vast majority of your graphical applications in an X server that is rendered off-screen and then composited into Wayland.

        Also, X is extensible. Xorg has the same features you'd get from Wayland.

        Originally posted by M?P?F View Post
        What's the problem with the driver model? The current model seems perfectly OK to me.

        It's shit, that's why. And it's incompatible with Wayland anyway, so you'd have to completely give up your proprietary Nvidia drivers and whatever else you're used to using.

        We have two sets of independent drivers, one from Xorg and the other from DRI. This is a bad design; we should only need one set of drivers.

        Originally posted by M?P?F View Post
        Hmm hmm, I have to disagree with you: by the time Wayland is ready, most of the graphical toolkits will have been ported to it. As 99% of your applications use them, there is actually no problem at all (except for emacs).
        Am I mistaken??

        Yes.

        Originally posted by M?P?F View Post
        Don't you think using several ttys for multiple users is a good idea? This is the current state of the X server, and it will not change any time soon AFAIK.
        Wayland solves this problem.

        No, it does not. You still need the Gallium and driver improvements to run Wayland, and those will also solve the problems with Xorg and multiple graphical sessions.

        Originally posted by M?P?F View Post
        What is the answer? Xorg???
        Please explain.

        The solution is to remove the drivers from Xorg and use a unified driver model. That is what is required for Wayland, and it will simultaneously solve most of the problems with X you've outlined.

        Wayland is an attractive solution because it's simpler. But the problem is that by the time you have Wayland up and running with feature parity with X, it will end up being massive and complex too.

        That's the problem every time you want to abandon an old code base and start over: you end up spending years and years of effort forcing people to port over to the new code base, and you end up with something that is barely better than what you started with.

        Xorg is crippled by a legacy driver development model. It was originally designed when operating systems had zero graphics capabilities and video cards were little more than memory-addressable framebuffers (meaning just a section of memory you dump pixels into) with no acceleration features at all.

        So to make everything portable, X took on all the drivers it needed. That way you could run X on operating systems that otherwise had no graphical capabilities.

        Nowadays we have all sorts of acceleration, and GPUs are being used for much more than just rendering the display. However, Xorg is still using the old design, where it assumes that Linux has no graphical capabilities of its own and tries to take over everything.

        Now, however, it's the opposite: Linux can access the graphics hardware itself and can handle things in a much cleaner way than X does. This is how Wayland works and why it's simpler: it requires Linux to do all the heavy lifting.

        So the solution, which is what is being worked on, is to move X off the hardware and make it just another graphical application.

        You remove the 'Device Dependent X' (or at least make it tiny), and then all you have left is C bindings for the X protocol and the various Xorg extensions. This way you get the better driver model, better performance, better multi-user support, modern features, and everything else you get from Wayland, but you are able to retain backwards compatibility and don't have to rewrite everything. For an OS to support X, all you need is a compatible Gallium state tracker and you're set: no more having to depend on X for upgrades to your drivers and all of that.


        --------------------------


        As far as X as a networking protocol goes, it's superior in performance and features to pretty much anything else out there.

        For example, with the AIGLX extension I have full networking capabilities AND OpenGL acceleration. I can run X applications from a headless server and provide OpenGL acceleration from the video card installed in my local machine. I can run Fedora in a virtual machine in Ubuntu, access Ubuntu over X networking, log in through GDM, and get OpenGL acceleration.
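
        A tiny sketch of the kind of check this enables: XOpenDisplay honours the DISPLAY variable, so the same program run over the network reports the remote server's GLX capabilities (the compile flags and output format here are illustrative).

        ```c
        /* Sketch: check GLX availability on whatever DISPLAY points at, local
         * or across the network. Compile with -lX11 -lGL. */
        #include <stdio.h>
        #include <X11/Xlib.h>
        #include <GL/glx.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL); /* honours DISPLAY=remotehost:0 */
            if (!dpy) {
                fprintf(stderr, "cannot open display\n");
                return 1;
            }
            int major = 0, minor = 0;
            if (glXQueryVersion(dpy, &major, &minor))
                printf("GLX %d.%d, server vendor: %s\n", major, minor,
                       glXQueryServerString(dpy, DefaultScreen(dpy), GLX_VENDOR));
            XCloseDisplay(dpy);
            return 0;
        }
        ```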

        The problem you run into with X networking is round-trip latency, not bandwidth: the amount of time it takes for an action to go from your desktop to the remote machine and back again. Many applications are not really developed with networking in mind, and you end up with a large number of round trips on each application window refresh. Improved C bindings like xcb, desktop compositing, the XDamage extension, and other things like that improve the situation somewhat, but fundamentally it's a widget and application design issue.
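
        To make the round-trip point concrete, here is a hedged sketch of what xcb changes: requests are queued first and replies collected later, so a batch of lookups costs roughly one round trip where Xlib's blocking calls would pay the latency once per call (the atom names are arbitrary examples).

        ```c
        /* Sketch of XCB's request/reply split. With Xlib's XInternAtom each
         * lookup blocks for a full round trip; with XCB we send all requests
         * first and then collect the replies. Compile with -lxcb. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <xcb/xcb.h>

        int main(void)
        {
            xcb_connection_t *conn = xcb_connect(NULL, NULL);
            if (xcb_connection_has_error(conn))
                return 1;

            const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
            xcb_intern_atom_cookie_t cookies[3];

            /* Phase 1: queue all requests; nothing blocks here. */
            for (int i = 0; i < 3; i++)
                cookies[i] = xcb_intern_atom(conn, 0, strlen(names[i]), names[i]);

            /* Phase 2: collect the replies, paying the latency roughly once. */
            for (int i = 0; i < 3; i++) {
                xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(conn, cookies[i], NULL);
                if (r) {
                    printf("%s = atom %u\n", names[i], r->atom);
                    free(r);
                }
            }
            xcb_disconnect(conn);
            return 0;
        }
        ```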

        Modern versions of GTK and Gnome applications are much more networking-friendly than they used to be. Nowadays, running a full-screen Gnome session over a corporate-style network (busy Ethernet with congested backbones) provides a much snappier experience than what is possible with Windows and Citrix remote desktop. I am sure that Qt4 has similar improvements.



        • #74
          Originally posted by bridgman View Post
          Just to be clear, I think Wayland is an important project; I just think the "Wayland vs Xorg" thing is missing the point.


          Yes. Yes. Yes.

          The driver model for Linux is currently broken. It's not something caused by incompetence or whatever; it's just an accident of history. Linux grew up and grew into the problems we are facing. That is where the issues with stability and the lack of multi-user support come from.

          Fixing the Linux driver stack will solve most of the problems people are having with 'X'. It's also what is required to run Wayland.

          Transitioning to a single driver model that allows proper access to the GPU and can run all the different APIs applications want, instead of having separate 2D and 3D drivers, is the biggest step toward realizing the improvements people want.



          • #75
            Originally posted by bridgman View Post
            "good things" Wayland brings are largely orthogonal to the problems users experience today
            I agree, but X users also complain about (at least) three other big issues:

            - Startup time
            - Overall responsiveness to user interactions
            - Overall system memory usage (crucial benchmark for desktop/mobile)

            Let's wait and see how Wayland addresses those three, and then, based on numbers, we can discuss its viability. But I agree with you that modularization, G3D, open drivers, DRI, etc. are really pushing the graphics stack forward at an amazing pace, and they are what made Wayland possible in the first place. Thanks to the contributing companies.



            • #76
              I'm gonna quote a bunch of you, and then reply. It'll be AWESOME.

              Originally posted by R3MF View Post
              he who codes wins.

              I'm not exactly hoping to replace X.org, but if X.org cannot keep up with the pace of development, then I won't cry when distros make the hop.
              'k. BTW, who's coding, and what are they coding? I hope you're not talking about Wayland; Kristian is an Xorg member, and he spends more time on the X server, making DRI2 and all that stuff work, than on Wayland.

              Originally posted by Kjella View Post
              Well, this is starting to turn into one of those open source sessions I hate where it's "pin the tail on the donkey" because you can't even figure out which module to blame. So let's say I get one of these triple-head cards AMD has been showing off.
              Ah, a discerning customer. 'k.

              I start with two hardware accelerated composited desktops.
              'k. This one's easy. X supports both server-side automatic compositing, and fancy decorated client-side compositing. I'll assume that you're going to be a demanding person and use compiz.

              I launch a 3D accelerated game in one.
              I predict that your discerning taste in graphics hardware extends to your gaming preferences. 'k, you launch xmoto.

              (I'm not going to say TF2, because TF2 doesn't work yet on that particular card. It does work on earlier cards, though!)

              I launch a 1080p tearfree video in the other.
              Ug. This video won't play at full speed; nobody's moved enough of the video decoding over to HW to have this play fast enough. It's an interesting problem; it's not that the CPU can't go fast enough, but that it takes too long to transfer fully-decoded frames over to the video card.

              (This does work, actually, for most people, but I'm going to assume you're one of the unlucky bastards that it doesn't work for, either because your PCI/AGP/PCIe bridge is a piece of shit, or some other reason.)

              I plug in a third screen (hotswap) to get a third desktop.
              'k.

              Bonus credit:
              Let me plug in another graphics card and run CF/SLI.
              Your motherboard explodes. You lose your hand and one eye. Try turning the power off next time.

              CF and SLI both are chipset-specific technologies that rely on having the proper bits set in the GPUs. We don't have this information, and nobody's stepped up to reverse-engineer it.

              Make that video hardware accelerated on shaders or fixed function.
              The easy way to do this would be to add VDPAU support to Gallium. Of course, XvMC is already in Gallium and works fine on nouveau. (And it'll work on r300g and i915g when I and Jakob, respectively, get more time to work out the kinks.)

              Let me move all those to different screens.
              'k.

              ...on different graphics cards.
              'k. You'd better be using an Xserver that doesn't have broken multi-card; it doesn't get much testing due to its users being a very minor subset of all the Xorg users out there.

              So. All the 'ks are Xorg stuff; everything else is in Mesa/Gallium. Just a bit of perspective. Also, kudos for having the lobotomy necessary to watch a movie and play a game at the same time.

              Originally posted by 89c51 View Post
              will the new implementation of X (X12) make the codebase cleaner, smaller, easier to comprehend and easier for new people to jump in the development process??
              It won't be a new implementation; it will be a gradual set of changes that applies to the current codebase. Also, again, it'll be a ways off. That said, yes, no, maybe, and no. The two main things that keep people from contributing appear to be a lack of C comprehension and no clear target to work towards. People show up in IRC, ask how they can help, and then wander off, never to return. Not exactly useful.

              Originally posted by RealNC View Post
              will X12 provide an API as easy to code for as OS X Cocoa? Last time I looked into the X API, I couldn't eat for 2 days
              If you think the API is bad for Xlib, never look at the server internals; it'll drive you mad.

              Have you considered using GTK, Qt, FLTK, Tk, or any other toolkit? They learned Xlib, so that you don't have to.

              Of course, if you absolutely insist on programming directly, you should switch to XCB. http://xcb.freedesktop.org/
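
              For scale, a complete XCB program that opens a bare window fits in a few dozen lines. A minimal sketch (error handling omitted, not tutorial-grade):

              ```c
              /* Minimal XCB window, as a starting point for anyone scared
               * off by Xlib. Compile with -lxcb. */
              #include <stdlib.h>
              #include <xcb/xcb.h>

              int main(void)
              {
                  xcb_connection_t *conn = xcb_connect(NULL, NULL);
                  xcb_screen_t *screen = xcb_setup_roots_iterator(xcb_get_setup(conn)).data;

                  xcb_window_t win = xcb_generate_id(conn);
                  uint32_t mask = XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK;
                  uint32_t values[] = { screen->white_pixel, XCB_EVENT_MASK_EXPOSURE };

                  xcb_create_window(conn, XCB_COPY_FROM_PARENT, win, screen->root,
                                    0, 0, 320, 240, 1, XCB_WINDOW_CLASS_INPUT_OUTPUT,
                                    screen->root_visual, mask, values);
                  xcb_map_window(conn, win);
                  xcb_flush(conn);

                  /* Block on events so the window stays up. */
                  xcb_generic_event_t *ev;
                  while ((ev = xcb_wait_for_event(conn)))
                      free(ev);

                  xcb_disconnect(conn);
                  return 0;
              }
              ```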



              • #77
                Originally posted by MostAwesomeDude View Post
                CF and SLI both are chipset-specific technologies that rely on having the proper bits set in the GPUs. We don't have this information, and nobody's stepped up to reverse-engineer it.
                FWIW, most multi-GPU rendering these days is AFR, due to the heavy use of post-processing and intermediate render targets. The only unreleased info is for the video compositor, which switches between the outputs of the two cards and avoids the need for an inter-card blit, but that's just a performance optimization and it's only implemented on high-end cards anyway. Everything else is big heaps of driver code.

                Originally posted by mirza View Post
                I agree, but X users also complain about (at least) three other big issues:

                - Startup time
                - Overall responsiveness to user interactions
                - Overall system memory usage (crucial benchmark for desktop/mobile)
                AFAIK startup time is mostly determined by driver initialization, things like reading EDID. KMS moves that work into system startup rather than X (or Wayland) startup.
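
                As a hedged illustration of what moving that probing into the kernel means: with KMS, any process can ask the kernel for the already-probed connectors and EDID-derived modes, with no display server involved. The libdrm calls below are real; the device path is an assumption.

                ```c
                /* Sketch: with KMS the kernel owns display probing, so user
                 * space (X, Wayland, or this toy) just asks for the results
                 * instead of reading EDID itself. Compile with -ldrm;
                 * /dev/dri/card0 is an assumption. */
                #include <fcntl.h>
                #include <stdio.h>
                #include <unistd.h>
                #include <xf86drm.h>
                #include <xf86drmMode.h>

                int main(void)
                {
                    int fd = open("/dev/dri/card0", O_RDWR);
                    if (fd < 0)
                        return 1;
                    drmModeRes *res = drmModeGetResources(fd);
                    if (!res)
                        return 1;

                    for (int i = 0; i < res->count_connectors; i++) {
                        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
                        if (!conn)
                            continue;
                        if (conn->connection == DRM_MODE_CONNECTED)
                            for (int m = 0; m < conn->count_modes; m++)
                                printf("connector %u: %s @ %u Hz\n",
                                       conn->connector_id, conn->modes[m].name,
                                       conn->modes[m].vrefresh);
                        drmModeFreeConnector(conn);
                    }
                    drmModeFreeResources(res);
                    close(fd);
                    return 0;
                }
                ```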

                Not sure where the issues are with responsiveness, but I imagine they are more related to input than output.

                AFAIK X doesn't use much memory itself, but it does create and manage buffers on behalf of applications. I doubt that would change much with Wayland, with the possible exception that the memory consumption would show up as part of the application rather than as part of X.
                Last edited by bridgman; 14 September 2009, 12:05 PM.



                • #78
                  Originally posted by mirza View Post
                  I agree but X users complain also on (at least) 2 other big issues:

                  - Startup time
                  In my experience most "X" startup time is due to some combination of DE loading/init and DRI init. I used to be able to cut my X startup time by 2 to 3 seconds by disabling DRI (or by running a system like OpenBSD where it wasn't supported in the first place); not sure if that's still the case.



                  • #79
                    Originally posted by RealNC View Post
                    "Remote X" is hopelessly outdated by now anyway. Virtually everyone uses VNC or something NX-based.
                    Remote X is one of the best things about Linux; being able to run any program anywhere, from anywhere, is a huge benefit compared to Windows, which expects you to sit at the console all the time. I routinely run X programs from my Linux servers with the display on my Windows laptop; heck, I even run Windows programs on my Linux servers displaying to my Windows laptop when the software needs access to the server hardware rather than the laptop hardware.

                    While I use VNC for high-latency connections where X doesn't work well, it's a horrible kludge in comparison.

                    Plain old X over the network is pure suckage today.
                    I used to run Windows in a PC emulator on HP or Alpha Unix systems using X11 over a shared 10 Mbps Ethernet; if running X applications over switched Gigabit Ethernet is 'pure suckage', that must have been horrible.
                    Last edited by movieman; 14 September 2009, 12:23 PM.



                    • #80
                      Originally posted by Ex-Cyber View Post
                      In my experience most "X" startup time is due to some combination of DE loading/init and DRI init. I used to be able to cut my X startup time by 2 to 3 seconds by disabling DRI (or by running a system like OpenBSD where it wasn't supported in the first place); not sure if that's still the case.
                      Yes. Gnome and KDE take up all the startup time, while plain X starts very fast on my old laptop (~1 second).

                      Also, memory usage is mostly down to applications wasting memory. Maybe it would help if X provided some way to operate on compressed images, which should reduce memory usage a lot for large pixmaps. But I don't see many other options on the X side for reducing memory usage, because most of it is taken by applications or drivers anyway (a good example was the Nvidia blob eating a lot of memory per GL application because of bad compilation options).

