Moving On From An X.Org World To Wayland

  • #21
    Originally posted by 89c51 View Post
Daniel said in the video that the network solution they are testing is something similar to VNC, right? Weren't they targeting something more advanced?
    You can see for yourself: http://cgit.freedesktop.org/~krh/weston/log/?h=remote

    It's like VNC in that we send the final composed images, rather than a series of rendering commands (gradient here, text here, etc). This usually ends up being cheaper to transfer over the wire, as is true for most things today - even 3D scenes, which were once totally remotable since it was just a series of (not very many) polygons. But unlike VNC, it does smart damage and compression.
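The "smart damage" part can be sketched roughly like this. This is a toy Python model (not Weston's actual code; the row-based frame format and helper names are invented for illustration): instead of shipping the whole composed frame, the sender diffs against the previous frame and compresses only the regions that changed.

```python
import zlib

def diff_rows(prev, curr):
    """Return (row_index, row_bytes) for each row that changed between frames."""
    return [(y, curr[y]) for y in range(len(curr)) if curr[y] != prev[y]]

def encode_update(prev, curr):
    """Encode only the damaged rows, compressed, instead of the whole frame."""
    payload = b"".join(bytes([y]) + row for y, row in diff_rows(prev, curr))
    return zlib.compress(payload)

# Two tiny 4-row "frames"; only row 2 changes between them.
prev = [b"\x00" * 8 for _ in range(4)]
curr = list(prev)
curr[2] = b"\xff" * 8

update = encode_update(prev, curr)  # carries one row instead of four
```

Real implementations track damage as rectangles rather than whole rows, but the principle is the same: the wire carries compressed deltas, not full frames.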



    • #22
      Originally posted by daniels View Post
I don't have the coding skills to understand something like that.

      Originally posted by daniels View Post

      It's like VNC in that we send the final composed images, rather than a series of rendering commands (gradient here, text here, etc). This usually ends up being cheaper to transfer over the wire, as is true for most things today - even 3D scenes, which were once totally remotable since it was just a series of (not very many) polygons. But unlike VNC, it does smart damage and compression.
      And thanks for the answer.



      • #23
        Originally posted by daniels View Post
        You can see for yourself: http://cgit.freedesktop.org/~krh/weston/log/?h=remote

        It's like VNC in that we send the final composed images, rather than a series of rendering commands (gradient here, text here, etc). This usually ends up being cheaper to transfer over the wire, as is true for most things today - even 3D scenes, which were once totally remotable since it was just a series of (not very many) polygons. But unlike VNC, it does smart damage and compression.
        Smart damage + compression?

Beware of this patent from Microsoft. They've been very active patenting everything related to RDP.
        A bitmap transfer-based display remoting by a server coupled to a client is described. Specifically, an application executing on the server implements operations to render a portion of a graphical user interface (GUI). The server decomposes corresponding rendering-based command(s) into simple bitmap raster operations commands. The server sends the bitmap-based commands to the client. The client, responsive to receiving the commands, respectively stores and draws bitmaps from an offscreen display surface, as directed by the server, to an onscreen display surface to present the GUI portion to a user. Logic at the client to store and present the GUI portion are independent of any client-implemented display remoting cache management logic. The client operations are also independent of determinations and processing of graphical object semantics beyond bitmap semantics. Such management and semantic determinations and processing are implemented and maintained respectively at and by the server.



        • #24
          Originally posted by newwen View Post
          Smart damage + compression?

Beware of this patent from Microsoft. They've been very active patenting everything related to RDP.
          http://www.google.es/patents/US82093...G4Dg#v=onepage
          Aside from the fact that patents cover everything you'd ever possibly think of, theirs covers transmitting rendering commands over the wire and then having them rasterised separately. That isn't us.



          • #25
Finally, a video that articulates my understanding of the X/Wayland situation. Sometimes, while reading discussions here at Phoronix, I start to doubt myself, since so many people write utter crap with such certainty.

Good to see that Wayland development is on a good track, and the people designing it seem to really know what they are doing.



            • #26
              How does Wayland handle multiple screens in "clone mode" with different subpixel geometries?

If the client is responsible for antialiasing, subpixel rendering, or some kind of transform, and you have different kinds of monitors connected to your graphics card, or a transformation applied to one of them, the image will be messed up on one of them.

Rendering performed by clients should be abstracted from output devices (the way PostScript is for printers), and actual rendering should happen on the server.

              There's a reason X11 is complex, and I'm growing less convinced that Wayland is a good solution for Linux graphics.



              • #27
                Messed up?

                Originally posted by newwen View Post
                How does Wayland handle multiple screens in "clone mode" with different subpixel geometries?

If the client is responsible for antialiasing, subpixel rendering, or some kind of transform, and you have different kinds of monitors connected to your graphics card, or a transformation applied to one of them, the image will be messed up on one of them.

Rendering performed by clients should be abstracted from output devices (the way PostScript is for printers), and actual rendering should happen on the server.

                There's a reason X11 is complex, and I'm growing less convinced that Wayland is a good solution for Linux graphics.

No, it won't be.
Context resolution mainly happens in the appropriate graphics drivers, which handle their own context (even with multiple screens and modes).
It is the task of the compositor to tell the drivers what to do, so the client-side implementation makes sense. Nothing really stops you from writing a library that makes this handling easy.
I am sure it would be simpler than the bloatware that the X.Org server is in many cases.



                • #28
                  Originally posted by frign View Post
No, it won't be.
Context resolution mainly happens in the appropriate graphics drivers, which handle their own context (even with multiple screens and modes).
It is the task of the compositor to tell the drivers what to do, so the client-side implementation makes sense. Nothing really stops you from writing a library that makes this handling easy.
I am sure it would be simpler than the bloatware that the X.Org server is in many cases.
My point is that clients cannot render subpixels correctly to buffers if they don't know what context they are rendering to. I don't know whether the X server actually renders taking that into account, but ideally, clients could give the server context-independent commands (as in PostScript), which are then transformed and rendered by the server. Of course, this is not as fast as direct rendering by the client.
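For what it's worth, the Wayland protocol does advertise each output's subpixel order to clients (via wl_output's geometry event), so a client can pick the right filter per output. A toy Python sketch (not real libwayland code; the names and coverage model are invented for illustration) shows why clone mode across panels with different geometries still forces a compromise:

```python
# Hypothetical per-output subpixel orders, as a channel permutation
# (mirroring the rgb/bgr/none values a compositor could advertise).
SUBPIXEL_ORDERS = {"rgb": (0, 1, 2), "bgr": (2, 1, 0), "none": None}

def render_subpixel(coverage, order):
    """Map per-subpixel glyph coverage (left, middle, right) into the
    panel's channel order, or fall back to grayscale when unknown."""
    if order is None:
        avg = sum(coverage) // 3
        return (avg, avg, avg)
    return tuple(coverage[i] for i in order)

# In clone mode with an RGB panel and a BGR panel, the two correct
# buffers differ, so one client buffer cannot be right for both:
cov = (255, 128, 0)
for_rgb = render_subpixel(cov, SUBPIXEL_ORDERS["rgb"])
for_bgr = render_subpixel(cov, SUBPIXEL_ORDERS["bgr"])
```

In practice, toolkits handle this by rendering per-output or by falling back to grayscale antialiasing when outputs disagree.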



                  • #29
                    Sub-Pixel-Rendering

                    Originally posted by newwen View Post
My point is that clients cannot render subpixels correctly to buffers if they don't know what context they are rendering to. I don't know whether the X server actually renders taking that into account, but ideally, clients could give the server context-independent commands (as in PostScript), which are then transformed and rendered by the server. Of course, this is not as fast as direct rendering by the client.
I am not completely up on the Wayland spec, but I am certain this is part of it. How did the devs put it? "Every frame is perfect." And judging from my tests with GL applications (like glgears), this works well.



                    • #30
                      Originally posted by frign View Post
I am not completely up on the Wayland spec, but I am certain this is part of it. How did the devs put it? "Every frame is perfect." And judging from my tests with GL applications (like glgears), this works well.
Yes, every frame is perfect because the clients control what's in the buffers. If there's something wrong in the buffers, then they (or the graphics drivers) messed up. All Wayland does is take pointers and buffers and display their contents. How they got there, and what's in them (but not WHO put what in there; Wayland keeps close tabs on buffer security), doesn't matter to the protocol.

And X is complex because they wanted it to be as platform-independent as possible; they were writing an operating system ON TOP OF an existing operating system (whatever flavor of Unix you ran). That complexity is a bad thing. Wayland has the right idea: the parts that can never break (Wayland) have to be minimal, so that one mistake doesn't impact a trillion other things. Wayland is made to get out of the way, and anything "complex" (such as multiple GPUs) is "a client problem."

If we ever hit a big change-up in the way we do graphics (like Optimus) again in the future, it will help to ensure that the protocol isn't the problem. With X + Optimus, the protocol WAS, and to an extent IS, the problem. Instead of cluttering up the protocol, we just introduce new libraries and new clients, and they handle the changes. All Wayland wants is pointers, buffers, and a display to shove their contents onto.
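The "every frame is perfect" claim above boils down to the attach/commit discipline. Here is a toy Python model of that flow (invented for illustration, not real libwayland code): the client hands over a finished buffer, and nothing becomes visible until commit, so the display never sees a half-drawn frame.

```python
class Surface:
    """Toy stand-in for a Wayland surface's double-buffered state."""
    def __init__(self):
        self.pending = None   # buffer attached by the client, not yet shown
        self.current = None   # buffer the compositor actually displays

    def attach(self, buf):
        # The client only attaches a buffer once it has finished drawing.
        self.pending = buf

    def commit(self):
        # The compositor latches the whole buffer atomically.
        self.current = self.pending

surface = Surface()
surface.attach([0xFF] * 16)       # client finishes drawing, then attaches
visible_before = surface.current  # still None: no partial frames on screen
surface.commit()                  # now the complete frame is displayed
```

The real protocol adds damage tracking and frame callbacks on top, but the atomic latch at commit is what guarantees the compositor only ever shows complete frames.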
                      All opinions are my own not those of my employer if you know who they are.

