If Or When Will X12 Actually Materialize?


  • #81
    xpra, that was the name.

    • #82
      Originally posted by some-guy View Post
      [...] Network support through plain X is not used anymore [...]
      Hmmm... I used it yesterday. Working from home and logging in to your number cruncher at university via SSH (with X11 forwarding, i.e. ssh -X) and plotting data via gnuplot's X11 terminal works fine here.

      • #83
        Originally posted by siride View Post
        Better would be to just leave it as it is and have the toolkits be done client-side and the rendering server-side. Then you wouldn't have to jump through hoops and have parts of the system on the wrong side of the client/server boundary.

        I see server-side toolkit ideas pop up from time to time when people talk about X. First of all, no other OS uses server-side toolkits, so that should say something right there. Secondly, server-side toolkits introduce policy and application-level details into the server in a non-generic way. The server is supposed to just render and demultiplex input (and interface with hardware to do all that). Now it has to deal with things like buttons, accessibility, colors and themes, etc.

        Now, instead of being able to just update the Qt libraries, you also have to update the X server. And while you can have side-by-side installations of toolkit libraries, you can't have two instances of the X server running to support old and new apps. No more Qt3 apps alongside Qt4 apps. Either the toolkit API must remain extremely stable and backwards compatible over time, or you just lose the apps that run on the older version of the toolkit. Some people propose some way where toolkit code is uploaded into the server. It's obvious that this is a ridiculous solution and not worth trying.

        So once again, we are left with the reality that X really is okay in its fundamental architecture. Tweaking and upgrading, rather than a wholesale rewrite or reworking, is the way to go.
        No other OS has the problem of whether a toolkit should be server- or client-side, as no other windowing system has X's network transparency. I agree with you, however, that twisting the current X11 and its toolkits towards this would be absolutely stupid.

        The basic design I propose, though, seems solid and logical, at least to me. With a good design, I see no reason why a toolkit API could not remain stable, or at least backwards compatible, for a long period of time; we have twenty years of successes and mistakes to look back on.

        Thinking further, this is quite analogous to HTML (= the UI) being pushed from the client (the web server running the app) to the server (the browser running on the user's machine). The data transferred between client and server is minimal, and the UI's look is largely in the hands of the server. Clearly I'm not proposing we write all our apps in HTML, but simply that this concept of the toolkit doing everything at the server end is quite feasible.

        Discussions of whether the toolkit would use Xlib etc. are meaningless, as, again, bending X11 as it is now towards this goal is a waste of effort. I simply state that the toolkit should handle rendering directly to the output buffer/screen using whatever hardware acceleration is available to it.

        • #84
          EDIT: it seems I wasn't clear enough: I implied that there would be a single toolkit, "the X toolkit". Even a drawing library like Cairo, from what I understand of it, should run on the server, with calls to it coming from over the client-server boundary. I'm not saying "integrate Cairo into X11"; I'm saying "what Cairo does should clearly be part of the graphics server". Different toolkits would not need to do silly things like upload themselves because, as in Windows, OS X, and current embedded systems, there would be only one toolkit to use! Apps could of course push custom bitmaps and video to the server as needed, much like those stupid Windows shareware apps and manufacturer-branded tools insist on doing to implement their own silly UI styles.

          • #85
            Originally posted by Akdor 1154 View Post
            EDIT: it seems I wasn't clear enough: I implied that there would be a single toolkit, "the X toolkit". Even a drawing library like Cairo, from what I understand of it, should run on the server, with calls to it coming from over the client-server boundary. I'm not saying "integrate Cairo into X11"; I'm saying "what Cairo does should clearly be part of the graphics server". Different toolkits would not need to do silly things like upload themselves because, as in Windows, OS X, and current embedded systems, there would be only one toolkit to use! Apps could of course push custom bitmaps and video to the server as needed, much like those stupid Windows shareware apps and manufacturer-branded tools insist on doing to implement their own silly UI styles.
            Cairo uses Xrender and friends to do the stuff that needs to be accelerated. Putting everything else on the X server would be unnecessary. In fact, it used to be that X did all of that kind of advanced drawing stuff, but nobody used it and it was hard to accelerate correctly because it is too far removed from the application.
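
            To make that split concrete, here is a minimal sketch of client-side Cairo targeting a server-side drawable (assuming the cairo and Xlib development packages are installed; error handling and cleanup are omitted). All the path building and state handling happen in the client; only the final rasterization is pushed through Xrender to the server:

            Code:
            /* Minimal client-side Cairo drawing onto an X11 window.
               Build: cc cairo_demo.c $(pkg-config --cflags --libs cairo x11) */
            #include <cairo-xlib.h>
            #include <X11/Xlib.h>

            int main(void)
            {
                Display *dpy = XOpenDisplay(NULL);  /* connect to the X server */
                if (!dpy)
                    return 1;
                int scr = DefaultScreen(dpy);
                Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                                 200, 100, 0, BlackPixel(dpy, scr),
                                                 WhitePixel(dpy, scr));
                XSelectInput(dpy, win, ExposureMask);
                XMapWindow(dpy, win);

                /* Cairo runs entirely in the client; the surface merely
                   targets a drawable that lives in the server. */
                cairo_surface_t *surf = cairo_xlib_surface_create(
                    dpy, win, DefaultVisual(dpy, scr), 200, 100);
                cairo_t *cr = cairo_create(surf);

                for (XEvent ev;;) {  /* redraw on every Expose event */
                    XNextEvent(dpy, &ev);
                    if (ev.type == Expose) {
                        cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
                        cairo_rectangle(cr, 20, 20, 160, 60);  /* a "button" shape */
                        cairo_fill(cr);  /* accelerated via Xrender where possible */
                    }
                }
            }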

            I don't know what you mean by "[d]ifferent toolkits would not need to...upload themselves...like in Windows [or] OS X". They don't upload themselves in Windows or OS X. There is only one toolkit in Windows because they provided a pretty good one by default and the low-level interfaces to the graphics and window management systems aren't meant to be used separately from the standard toolkit. It is all highly integrated from an API standpoint. Of course, you can make your own toolkit and just draw to blank, undecorated windows and do event routing yourself, but few do that because Windows can do all the heavy lifting with the default toolkit. On Linux, it could have been that, say, GTK+ became the standard and all apps would target GTK+ and then we wouldn't be having this argument. But that wouldn't change the architecture in any way.

            • #86
              Originally posted by siride View Post
              Cairo uses Xrender and friends to do the stuff that needs to be accelerated. Putting everything else on the X server would be unnecessary. In fact, it used to be that X did all of that kind of advanced drawing stuff, but nobody used it and it was hard to accelerate correctly because it is too far removed from the application.
              Because it was too far removed from the application, or because the API was balls? Furthering the HTML analogy, many browsers now are touting, or working on, hardware acceleration for drawing to the screen. In my mind, accelerating drawing functions would seem to be easier under my approach:
              Code:
              [VGA CARD/SCREEN]   <==   [ DISPLAY SERVER ]        <===>         [CLIENT]
                                pixmaps                 high level widget commands
                               primitives                    "draw a button"
                      accelerated video pathways             processed input
                                                    pre-rendered video/images if needed
                              -PCI(E) Bus-                 -Network Transport-
              The client shouldn't have to care what is or isn't accelerated. As long as it uses a well-defined API ("draw_button()", "render_video_stream()"), all acceleration happens on the server, the same image of a button isn't sent over the network again and again, and the client/server what-happens-where problem is completely avoided. The way things are moving now, where everything seems to be migrating to the client side, we seem to be on the road to completely negating any benefit native network transparency gives over VNC etc.
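
              To sketch what I mean (and to be clear, everything below is hypothetical; neither the xtk_* names nor the protocol exist anywhere):

              Code:
              /* HYPOTHETICAL sketch: a client-side stub for a server-side toolkit.
                 Neither these types nor this protocol exist; the point is only to
                 show how small the wire traffic per widget could be. */
              #include <stdint.h>

              enum { XTK_OP_DRAW_BUTTON = 1, XTK_OP_RENDER_VIDEO_STREAM = 2 };

              typedef struct {
                  uint16_t opcode;  /* which widget operation, e.g. XTK_OP_DRAW_BUTTON */
                  uint32_t widget;  /* server-side widget handle */
                  int16_t  x, y;    /* position, in window coordinates */
                  uint16_t w, h;    /* size */
              } xtk_request;

              /* Assumed transport: writes the request to the display-server socket. */
              void xtk_send(const xtk_request *req);

              void xtk_draw_button(uint32_t widget, int16_t x, int16_t y,
                                   uint16_t w, uint16_t h)
              {
                  /* One ~12-byte request instead of a rendered button bitmap; the
                     server themes, rasterizes and accelerates it however it likes. */
                  xtk_request req = { XTK_OP_DRAW_BUTTON, widget, x, y, w, h };
                  xtk_send(&req);
              }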


              Originally posted by siride View Post
              I don't know what you mean by "[d]ifferent toolkits would not need to...upload themselves...like in Windows [or] OS X".
              Crap phrasing on my behalf, sorry. Perhaps better worded:
              Someone's point about needing to upload toolkits to the server is moot: under my proposed design there would be a single UI toolkit that acted as a bridge to the display server, similar to Windows, Mac OS X, and most embedded systems. Apps could of course push custom bitmaps and video to the server as needed, much like those stupid Windows shareware apps and manufacturer-branded tools insist on doing to implement their own silly UI styles.

              • #87
                Originally posted by Akdor 1154 View Post
                Because it was too far removed from the application, or because the API was balls? Furthering the HTML analogy, many browsers now are touting, or working on, hardware acceleration for drawing to the screen. In my mind, accelerating drawing functions would seem to be easier under my approach:
                Code:
                [VGA CARD/SCREEN]   <==   [ DISPLAY SERVER ]        <===>         [CLIENT]
                                  pixmaps                 high level widget commands
                                 primitives                    "draw a button"
                        accelerated video pathways             processed input
                                                      pre-rendered video/images if needed
                                -PCI(E) Bus-                 -Network Transport-
                The client shouldn't have to care what is or isn't accelerated. As long as it uses a well-defined API ("draw_button()", "render_video_stream()"), all acceleration happens on the server, the same image of a button isn't sent over the network again and again, and the client/server what-happens-where problem is completely avoided. The way things are moving now, where everything seems to be migrating to the client side, we seem to be on the road to completely negating any benefit native network transparency gives over VNC etc.
                Okay, that's all well and good. I agree...let's make sure the server knows about certain operations so that it can accelerate them properly. No disagreements there. But...how does having the server know about buttons or, god forbid, listboxes and gridviews improve performance? Drawing those types of items is always going to require a composition of more primitive drawing operations. It will be no faster to have them done on the server. The cost, of course, is that the client will have to do a great deal more and complex communication with the server to maintain the server-side state of these widgets. How is that going to help with latency? And it doesn't buy anything in terms of graphics performance. So it's a no-win situation, with a lot to lose.

                Your HTML analogy, in fact, bolsters my point, not yours. HTML is primitives: boxes, text areas, very simple controls (that are often no longer used because they aren't flexible enough -- see a trend here?), CSS, etc. Most of the fancy widgets are composite and built out of simpler HTML constructs. Following the HTML model would mean keeping the toolkits client-side, not server-side (assuming we treat the web-browser as the server equivalent for the purposes of discussing how graphics work).

                Crap phrasing on my behalf, sorry. Perhaps better worded:
                Someone's point about needing to upload toolkits to the server is moot: under my proposed design there would be a single UI toolkit that acted as a bridge to the display server, similar to Windows, Mac OS X, and most embedded systems. Apps could of course push custom bitmaps and video to the server as needed, much like those stupid Windows shareware apps and manufacturer-branded tools insist on doing to implement their own silly UI styles.
                Those are owner-draw controls and they don't necessarily just push huge bitmaps to the server. They probably mostly use GDI/GDI+ drawing commands in the same way that native controls do. Windows probably wins here because GDI has traditionally been properly accelerated by the hardware. In X-land, toolkits often have to work around broken graphics drivers that only get a subset of 2D and 3D acceleration correct. What you really need to understand is that Windows doesn't really do it differently from X. The toolkit is entirely client-side and uses the same mechanisms that X uses. Granted, the server has been a kernel-mode subsystem since NT 4.0 (now replaced partially with DWM), but the architecture is effectively the same. Why do you think that is? And again, I ask, how do you propose to deal with control subclassing, event-routing, layout management, etc. on the server-side?
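
                For reference, this is roughly what an owner-draw control looks like on the Win32 side; a sketch only (window registration and the message loop are assumed to exist elsewhere):

                Code:
                /* Sketch: an owner-drawn button on Win32. The app opts in with
                   BS_OWNERDRAW and paints in WM_DRAWITEM with ordinary GDI calls;
                   no pre-rendered bitmap is shipped to the window system. */
                #include <windows.h>

                LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
                {
                    switch (msg) {
                    case WM_CREATE:
                        /* A child button the application will draw itself. */
                        CreateWindow(TEXT("BUTTON"), TEXT("Custom"),
                                     WS_CHILD | WS_VISIBLE | BS_OWNERDRAW,
                                     10, 10, 120, 30, hwnd, (HMENU)1,
                                     ((LPCREATESTRUCT)lp)->hInstance, NULL);
                        return 0;
                    case WM_DRAWITEM: {
                        /* Same GDI drawing commands the native controls use. */
                        LPDRAWITEMSTRUCT dis = (LPDRAWITEMSTRUCT)lp;
                        FillRect(dis->hDC, &dis->rcItem,
                                 (HBRUSH)GetStockObject(LTGRAY_BRUSH));
                        DrawText(dis->hDC, TEXT("Custom"), -1, &dis->rcItem,
                                 DT_CENTER | DT_VCENTER | DT_SINGLELINE);
                        return TRUE;
                    }
                    case WM_DESTROY:
                        PostQuitMessage(0);
                        return 0;
                    }
                    return DefWindowProc(hwnd, msg, wp, lp);
                }

                Note that nothing here pushes a bitmap anywhere: the app opts out of the default look but still describes its drawing with the same GDI commands the native controls use.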

                • #88
                  Looky, looky what I found: a zip file containing a PDF that outlines the architecture of X.org.

                  Beware, because it will blow your mind at how FSCKED UP complex it is: http://ubuntuforums.org/showthread.php?t=804842

                  • #89
                    Take that diagram with a grain of salt. It's just a jumble of X.org-related keywords and doesn't really describe the actual structure of the X.org stack.

                    • #90
                      Originally posted by V!NCENT View Post
                      Looky, looky what I found: a zip file containing a PDF that outlines the architecture of X.org.

                      Beware, because it will blow your mind at how FSCKED UP complex it is:
                      http://ubuntuforums.org/showthread.php?t=804842
                      That isn't the X.org architecture. It's a vague concept map that the X.org devs have already called out (on the mailing lists) as being out of date and confusing. It basically just shows some relationships between a vast array of technologies, only some of which are actually X itself. If you drew something like that for Windows or Mac OS X, you'd see similar, if not greater, complexity (especially once you start pulling in things like .NET and COM).

                      The X.org architecture is actually pretty simple, though the devil is in the details. You have the server, which is divided into the DIX (device/driver-independent X) and the DDX (device/driver-dependent X). The DIX provides all the high-level concepts of X, handles requests, and routes events; it provides hooks for the DDX to supply specific implementations. The DIX sits in a single folder with a few dozen .c files. Not terrible. Not great, but not terrible. The DDX has a few layers, such as mi (for default, general operations) and then hw for the hardware-specific implementations. The drivers hook into the DDX provided by the X server (there is only one now).

                      On the client side, you have XCB (above libxtrans) handling the protocol, with Xlib on top providing the old API. Xlib is a mess, but XCB is not, so that problem is solved. And that's about it. Now, you have extensions and ICCCM and window managers and so on, but those all fit into the existing architecture in a reasonable way. The fact that X has been able to be modernized through extensions is a testament to the solidity of the original design.
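
                      To give a feel for how thin the client side is, here is a minimal XCB client; standard libxcb calls, with the build line just a suggestion:

                      Code:
                      /* Minimal XCB client: connect, create a window, map it, flush.
                         Build: cc xcb_demo.c $(pkg-config --cflags --libs xcb) */
                      #include <stdio.h>
                      #include <unistd.h>
                      #include <xcb/xcb.h>

                      int main(void)
                      {
                          /* Connect to the server named by $DISPLAY (local or remote). */
                          xcb_connection_t *conn = xcb_connect(NULL, NULL);
                          if (xcb_connection_has_error(conn)) {
                              fprintf(stderr, "cannot connect to the X server\n");
                              return 1;
                          }

                          xcb_screen_t *screen =
                              xcb_setup_roots_iterator(xcb_get_setup(conn)).data;

                          /* The client allocates the ID itself; requests go out
                             asynchronously over the protocol. */
                          xcb_window_t win = xcb_generate_id(conn);
                          xcb_create_window(conn, XCB_COPY_FROM_PARENT, win, screen->root,
                                            0, 0, 300, 200, 1, XCB_WINDOW_CLASS_INPUT_OUTPUT,
                                            screen->root_visual, 0, NULL);
                          xcb_map_window(conn, win);
                          xcb_flush(conn);

                          pause();  /* keep the window up until interrupted */
                          xcb_disconnect(conn);
                          return 0;
                      }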

                      What, in this system, do you think needs to go or needs to be done significantly differently (as opposed to being reworked and streamlined, as is the current development process)?
