How Important Is The Wayland Display Server?
Originally posted by Kjella:
Well, this is starting to turn into one of those open source sessions I hate where it's "pin the tail on the donkey" because you can't even figure out which module to blame. So let's say I get one of these triple-head cards AMD has been showing off.
I start with two hardware accelerated composited desktops.
- most users of composition are using GL for the compositor, again not part of X or X drivers (X has an extension for GL, but the GL driver is not part of Xorg)
- the compositor is not part of Xorg
Originally posted by Kjella:
I launch a 3D accelerated game in one.
Originally posted by Kjella:
I launch a 1080p tearfree video in the other.
Originally posted by Kjella:
I plug in a third screen (hotswap) to get a third desktop.
Originally posted by Kjella:
Bonus credit:
Let me plug in another graphics card and run CF/SLI.
Originally posted by Kjella:
Make that video hardware accelerated on shaders or fixed function.
Originally posted by Kjella:
Let me move all those to different screens...on different graphics cards.
Originally posted by Kjella:
Now is any of this at all related to X? X drivers? Mesa? DRI? DRM?
- the DRI extension is what allows all this non-X stuff to stay in sync with X re: window locations etc..
One possible source of confusion is that most of the work being done on the non-X components (mesa, drm etc..) is being done by Xorg developers who recognize their importance, even though those components are not actually part of X. Xorg, mesa and drm have separate release schedules and version numbers, even though you need a compatible set of all three for a working system.
Originally posted by Kjella:
Or maybe I'm going about it the wrong way, if you were given a blank sheet and those requirements, would you create X? Or what would the ideal graphics stack look like? I'm struggling to get the picture of what does what, and what should be doing what...
The important thing to understand is that a lot of what users think of as "Xorg" is actually not part of X, and is just big complex code which happens to be used with X although it is not part of the Xorg releases. That said, architectural work done in these non-X portions (eg the implementation of KMS) usually involves both an X and a non-X component, so that progress on the non-X component affects X release schedules.
What the X developers are trying to explain is that most of the *user* problems attributed to X (stability, crashes etc..) are driver related, where the bulk of that driver code is *not* part of X but *is* part of the Linux graphics ecosystem which everyone blindly associates with X.
The main effort required is on the driver stack which is used by both X and Wayland, which (among other things) involves moving almost all of the graphics hardware support *out* of X and its drivers. It is this major re-architecture of the driver stack which has consumed a lot of the Xorg developer time over the last couple of years, and it is the results of that rearchitecture which makes X-less display environments like Wayland possible without re-inventing the entire stack.
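To make that re-architecture a bit more concrete, here is a minimal sketch (illustrative only, not something from the post above) of how a plain userspace program can enumerate display outputs through the kernel's KMS/DRM interface with no X server involved at all. It assumes libdrm is installed, the card exposes /dev/dri/card0, and you compile with the libdrm include path and -ldrm.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    /* Under KMS the kernel, not the X server, owns the display hardware. */
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

    drmModeRes *res = drmModeGetResources(fd);
    if (!res) { fprintf(stderr, "no KMS support on this device\n"); close(fd); return 1; }

    /* List every connector (physical output) and whether a monitor is attached. */
    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        printf("connector %u: %s, %d modes\n", conn->connector_id,
               conn->connection == DRM_MODE_CONNECTED ? "connected" : "disconnected",
               conn->count_modes);
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    close(fd);
    return 0;
}

Once modesetting lives behind an interface like this, X becomes just another client of it, which is what allows an X-less environment such as Wayland to reuse the same driver stack.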
It's easy to fall into the trap of assuming that the legacy code in X is the cause of (a) user problems and (b) the perception of unpredictable release schedules. As a number of developers have tried to say, user problems are primarily related to the driver stack which is going to be shared by Wayland, and the perception of release schedule unreliability results from a combination of content-based (rather than schedule-based) releases and Xorg project managers who are not particularly PR-oriented.
Wayland will definitely provide a smaller "core" code base which has to be a good thing; the mistake is assuming that the current larger code base is the cause of user problems. Put differently, the "good things" Wayland brings are largely orthogonal to the problems users experience today, but the good news is that the Xorg community has been working hard for a couple of years on fixing those problems and Wayland will benefit equally from that work.
Just to be clear, I think Wayland is an important project; I just think the "Wayland vs Xorg" thing is missing the point.
Originally posted by M?P?F:
What is the answer? Xorg???
Please explain.
You're right, but what's the use of keeping a huge piece of code with possibly a lot of dead code in it? Why not use a new window server made for modern uses?
Because the 'huge piece of code' is what you need for application compatibility. If you're running Wayland you'll still have to run the vast majority of your graphical applications in an X server that is rendered off-screen and then composited into Wayland.
Also X is extensible. Xorg has the same features you'd get from Wayland.
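One rough way to see that extensibility for yourself is to ask a running server which protocol extensions it advertises. This is a sketch only (not from the post above); it assumes libxcb is installed and you link with -lxcb.

#include <stdio.h>
#include <stdlib.h>
#include <xcb/xcb.h>

int main(void)
{
    /* Connect to whatever $DISPLAY points at. */
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(c)) return 1;

    /* Ask the server for its extension list (Composite, DAMAGE, GLX,
       RANDR, ... on a typical Xorg server). */
    xcb_list_extensions_reply_t *rep =
        xcb_list_extensions_reply(c, xcb_list_extensions(c), NULL);
    if (rep) {
        xcb_str_iterator_t it = xcb_list_extensions_names_iterator(rep);
        for (; it.rem; xcb_str_next(&it))
            printf("%.*s\n", xcb_str_name_length(it.data), xcb_str_name(it.data));
        free(rep);
    }
    xcb_disconnect(c);
    return 0;
}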
What's the problem with the driver model? The current model seems perfectly OK to me.
We have two sets of independent drivers, one from Xorg and the other from DRI. This is a bad design; we should only need one set of drivers.
Hmm hmm, I have to disagree with you: by the time Wayland is ready, most of the graphical toolkits will have been ported to Wayland. As 99% of your applications use them, there is actually no problem at all (except for emacs).
Am I mistaken?
Don't you think using several ttys for multiple users is a good idea? That is the current state of the X server and it will not change any time soon AFAIK.
Wayland solves this problem.
What is the answer? Xorg???
Please explain.
Wayland's an attractive solution because it's simpler. But the problem is that by the time you have Wayland up and running with feature parity with X, it'll end up being massive and complex as well.
That's the problem every time you want to abandon an old code base and start over: you end up wasting years and years of effort forcing people to port over to the new code base, and you end up with something that is barely better than what you started with.
Xorg is crippled by a legacy driver development model. It was originally designed when operating systems had zero graphics capabilities and video cards were little more than memory-addressable framebuffers (meaning just a section of memory you dump pixels into) with no acceleration features at all.
So, to make everything portable, X took on all the drivers you needed. That way you could run X on operating systems that otherwise had no graphical capabilities.
Nowadays we have all sorts of acceleration, and GPUs are being used for much more than just rendering the display. However, Xorg is still using the old design where it assumes that Linux has no graphical capabilities of its own and tries to take over everything.
But now it's the opposite: Linux can access the graphics hardware itself and can handle things in a much cleaner way than X does. This is how Wayland works and why it's simpler: it requires Linux to do all the heavy lifting.
So the solution, which is what is being worked on, is to move X off the hardware and make it just another graphical application.
You remove the 'Device Dependent X' (or at least make it tiny) and then all you have left is C bindings for the X protocol and various Xorg extensions. This way you get the better driver model, better performance, better multiuser support, modern features, and everything else you get from Wayland, but you retain backwards compatibility and don't have to rewrite everything. For the OS to support X, all you need is a compatible Gallium state tracker and you're set... no more having to depend on X for upgrades to your drivers and all of that.
--------------------------
As far as X being a networking protocol goes, it's superior in performance and features to pretty much anything else out there.
For example, with the AIGLX extension I have full networking capabilities AND OpenGL acceleration. I can run X applications from a headless server and provide OpenGL acceleration from the video card installed on my local machine. I can run Fedora in a virtual machine in Ubuntu, access Ubuntu over X networking, log in through GDM, and get OpenGL acceleration.
The problem you run into with X Windows networking is round-trip latency, not bandwidth. That is the amount of time it takes for an action to go from your desktop to the remote machine and back again. Many applications are not really developed with networking in mind, and you end up with a large number of round trips on each application window refresh. Improved C bindings like xcb, desktop composition, the xdamage extension, and other things like that improve the situation somewhat, but fundamentally it's a widget and application design issue.
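As a rough illustration of why those round trips hurt (a sketch only, assuming libxcb and linking with -lxcb; the atom names are arbitrary examples): xcb lets a client send a batch of requests first and collect the replies afterwards, instead of stalling on the network after every single request the way classic synchronous Xlib calls do.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xcb/xcb.h>

int main(void)
{
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(c)) return 1;

    const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
    xcb_intern_atom_cookie_t cookies[3];

    /* Send all three requests without waiting: no round trips yet. */
    for (int i = 0; i < 3; i++)
        cookies[i] = xcb_intern_atom(c, 0, strlen(names[i]), names[i]);

    /* Now collect the replies; the cost is roughly one round trip, not three. */
    for (int i = 0; i < 3; i++) {
        xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(c, cookies[i], NULL);
        if (r) {
            printf("%s -> atom %u\n", names[i], r->atom);
            free(r);
        }
    }
    xcb_disconnect(c);
    return 0;
}

An application that instead blocks for a full round trip per lookup is exactly the kind of client that feels sluggish over a high-latency link.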
Modern versions of GTK and Gnome applications are much more networking friendly than they used to be. Nowadays running a full-screen Gnome session over a corporate-style network (busy ethernet with congested backbones) provides a far snappier experience than what is possible with Windows and Citrix remote desktop. I am sure that Qt4 has similar improvements.
Originally posted by bridgman:
Just to be clear, I think Wayland is an important project; I just think the "Wayland vs Xorg" thing is missing the point.
Yes. yes. yes.
The driver model for Linux is currently broken. It's not something that was caused by incompetence or whatever; it's just an accident of history. Linux grew up and grew into the problems we are facing. That is where the issues with stability and lack of multiuser support come from.
Fixing the Linux driver stack will solve most of the problems people are having with 'X'. It's also what is required to run Wayland.
Transitioning to a single-driver model that allows proper access to the GPU and can run all the different APIs people want for their applications, instead of having separate 2D and 3D drivers, is the biggest step towards realizing the improvements people want.
Originally posted by bridgman View Post"good things" Wayland brings are largely orthogonal to the problems users experience today
- Startup time
- Overall responsiveness to user interactions
- Overall system memory usage (crucial benchmark for desktop/mobile)
Let's wait and see how Wayland addresses those three and then, based on numbers, we can discuss its viability. But I agree with you that modularization, G3D, open drivers, DRI etc. are really pushing the graphics stack forward at an amazing pace and made Wayland actually possible. Thanks to the contributing companies.
I'm gonna quote a bunch of you, and then reply. It'll be AWESOME.
Originally posted by R3MF:
he who codes wins.
i'm not exactly hoping to replace X.org, but if X.org cannot keep up with the pace of development then i won't cry when distros make the hop.
Originally posted by Kjella:
Well, this is starting to turn into one of those open source sessions I hate where it's "pin the tail on the donkey" because you can't even figure out which module to blame. So let's say I get one of these triple-head cards AMD has been showing off.
I start with two hardware accelerated composited desktops.
I launch a 3D accelerated game in one.
(I'm not going to say TF2, because TF2 doesn't work yet on that particular card. It does work on earlier cards, though!)
I launch a 1080p tearfree video in the other.
(This does work, actually, for most people, but I'm going to assume you're one of the unlucky bastards that it doesn't work for, either because your PCI/AGP/PCIe bridge is a piece of shit, or some other reason.)
I plug in a third screen (hotswap) to get a third desktop.
Bonus credit:
Let me plug in another graphics card and run CF/SLI.
CF and SLI both are chipset-specific technologies that rely on having the proper bits set in the GPUs. We don't have this information, and nobody's stepped up to reverse-engineer it.
Make that video hardware accelerated on shaders or fixed function.
Let me move all those to different screens.
...on different graphics cards.
So. All the 'k's are Xorg stuff; everything else is in Mesa/Gallium. Just a bit of perspective. Also, kudos for having the lobotomy necessary to watch a movie and play a game at the same time.
Originally posted by 89c51:
will the new implementation of X (X12) make the codebase cleaner, smaller, easier to comprehend and easier for new people to jump into the development process??
Originally posted by RealNC:
will X12 provide an API as easy to code for as OS X Cocoa? Last time I looked into the X API, I couldn't eat for 2 days
Have you considered using GTK, Qt, FLTK, Tk, or any other toolkit? They learned Xlib, so that you don't have to.
Of course, if you absolutely insist on programming directly, you should switch to XCB. http://xcb.freedesktop.org/
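If you do want a taste of raw XCB, here is a minimal sketch (illustrative only, assuming libxcb is installed and you link with -lxcb; the window size is an arbitrary choice) that opens a window and quits on the first key press:

#include <stdint.h>
#include <stdlib.h>
#include <xcb/xcb.h>

int main(void)
{
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(c)) return 1;

    xcb_screen_t *screen = xcb_setup_roots_iterator(xcb_get_setup(c)).data;

    /* Create a 300x200 window on the default screen and ask for key events. */
    uint32_t mask = XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK;
    uint32_t values[2] = { screen->white_pixel,
                           XCB_EVENT_MASK_EXPOSURE | XCB_EVENT_MASK_KEY_PRESS };
    xcb_window_t win = xcb_generate_id(c);
    xcb_create_window(c, XCB_COPY_FROM_PARENT, win, screen->root,
                      0, 0, 300, 200, 1, XCB_WINDOW_CLASS_INPUT_OUTPUT,
                      screen->root_visual, mask, values);

    xcb_map_window(c, win);
    xcb_flush(c);

    /* Block on the event queue until a key is pressed. */
    xcb_generic_event_t *ev;
    while ((ev = xcb_wait_for_event(c))) {
        uint8_t type = ev->response_type & ~0x80;
        free(ev);
        if (type == XCB_KEY_PRESS)
            break;
    }
    xcb_disconnect(c);
    return 0;
}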
Originally posted by MostAwesomeDude:
CF and SLI both are chipset-specific technologies that rely on having the proper bits set in the GPUs. We don't have this information, and nobody's stepped up to reverse-engineer it.
Originally posted by mirza:
I agree, but X users also complain about (at least) 2 other big issues:
- Startup time
- Overall responsiveness to user interactions
- Overall system memory usage (crucial benchmark for desktop/mobile)
Not sure where the issues are with responsiveness, but I imagine they are more related to input than output.
AFAIK X doesn't use much memory itself, but it does create and manage buffers on behalf of the application. I doubt that would change much with Wayland, with the possible exception of having the memory consumption show up as part of the application rather than as part of X.
Originally posted by mirza:
I agree, but X users also complain about (at least) 2 other big issues:
- Startup time
Originally posted by RealNC View Post"Remote X" is hopelessly outdated by now anyway. Virtually everyone uses VNC or something NX-based.
While I use VNC for high-latency connections where X doesn't work well, it's a horrible kludge in comparison.
Plain old X over the network is pure suckage today.
Originally posted by Ex-Cyber:
In my experience most "X" startup time is due to some combination of DE loading/init and DRI init. I used to be able to cut my X startup time by 2 to 3 seconds by disabling DRI (or by running a system like OpenBSD where it wasn't supported in the first place); not sure if that's still the case.
Also, memory usage is caused by applications wasting memory. Maybe it would be better if X provided some way to operate on compressed images, which should reduce memory usage a lot for large pixmaps. But I don't see many other options on X's part to reduce memory usage, because most of it is taken up by applications or drivers anyway (a good example was the nvidia blobs eating a lot of memory per GL application because of bad compilation options).