Moving On From An X.Org World To Wayland

  • #16
    Originally posted by bridgman View Post
    My understanding was that Wayland worked with the radeon and nouveau open source drivers today.
    Judging by his comment about "faster GPUs", I'm thinking he's restricting himself to the closed-source drivers.

    Bridgman, I realize you probably can't say anything even if you did know, but I'm gonna ask anyway just in case: any rumors spreading at AMD about FGLRX support for Wayland now that the protocol has been finalized and guaranteed stable?



    • #17
      Video not playable in Firefox:
      http://mirror.linux.org.au/linux.con...land_and_X.mp4

      Please add this video to the video tag on the page:
      http://mirror.linux.org.au/linux.con...land_and_X.ogv



      • #18
        Originally posted by F i L View Post
        Nice watch, can't wait till Wayland is more supported in the Linux world. I won't be able to use it till AMD or NVidia ports their drivers to it (or Intel suddenly starts making faster graphics chips), so I really hope those companies are looking into supporting it soon.
        It shouldn't be too much work for Nvidia and AMD to port their drivers to support Wayland, because all the functionality is already there. It's just a matter of changing APIs and splitting the code into components to support EGL, GLES, DRM and KMS. The DRM and KMS modules might be problematic for the binary blobs, though, but they already do it for Android, as far as I know. The GPL remains a concern here, because proprietary DRM and KMS modules could be seen as a derivative work of the Linux kernel, whereas the current binary blobs are not (their Linux support is a derivative work of proprietary code not originally written for Linux).
        Last edited by newwen; 02-07-2013, 06:35 AM.



        • #19
          Originally posted by frign View Post
          OK, I dug in a bit and found out that you can download the video via public FTP (bypassing the faulty HTTP handling) with this URL; even wget should work for you:

          ftp://mirror.linux.org.au/pub/linux....land_and_X.mp4
          Probably more likely would be for X to become a virtual, and virtual/X to be satisfied by either x11-base/xorg-x11 or dev-libs/wayland. That would be less disruptive to the rest of the portage tree. Actually I doubt that the second option for the virtual would really be dev-libs/wayland - rather something else built on top of wayland. Right now "emerge -ptv weston" finds no package, nor anything truly similar, though "emerge wayland" does.
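A virtual along those lines might look roughly like this as an ebuild sketch. This is entirely hypothetical: virtual/X does not exist in the Portage tree, and the provider list simply reuses the two package names the post mentions for illustration.

```sh
# Hypothetical virtual/X/X-1.ebuild -- illustrative only, not a real ebuild.
# A virtual installs no files of its own; it is satisfied as soon as any
# one of the any-of (`|| ( ... )`) RDEPEND alternatives is installed.
EAPI=5
DESCRIPTION="Virtual for a display server"
RDEPEND="|| (
	x11-base/xorg-x11
	dev-libs/wayland
)"
```

Whether dev-libs/wayland alone (rather than a compositor built on top of it) would be an appropriate second provider is exactly the doubt the post raises.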



          • #20
            Daniel said in the video that the network solution they are testing is something similar to VNC, right? Weren't they targeting something more advanced?



            • #21
              Originally posted by 89c51 View Post
              Daniel said in the video that the network solution they are testing is something similar to VNC, right? Weren't they targeting something more advanced?
              You can see for yourself: http://cgit.freedesktop.org/~krh/weston/log/?h=remote

              It's like VNC in that we send the final composed images, rather than a series of rendering commands (gradient here, text here, etc). This usually ends up being cheaper to transfer over the wire, as is true for most things today - even 3D scenes, which were once totally remotable since it was just a series of (not very many) polygons. But unlike VNC, it does smart damage and compression.
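The "smart damage" part can be sketched in a few lines: instead of shipping every frame whole, a remote backend diffs the new frame against the previous one, collects only the damaged tiles, and compresses just those for the wire. This is an illustrative sketch, not Weston's actual remote code; the tile size and the zlib encoding are assumptions made for the example.

```python
import zlib

TILE = 4  # tile edge in pixels; real implementations use larger tiles


def damaged_tiles(prev, curr, width, height):
    """Return (x, y) origins of tiles whose pixels changed between frames."""
    tiles = []
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            for y in range(ty, min(ty + TILE, height)):
                if prev[y][tx:tx + TILE] != curr[y][tx:tx + TILE]:
                    tiles.append((tx, ty))
                    break  # one changed row is enough to damage the tile
    return tiles


def encode_update(prev, curr, width, height):
    """Compress only the damaged tiles into one update packet."""
    tiles = damaged_tiles(prev, curr, width, height)
    payload = bytearray()
    for tx, ty in tiles:
        for y in range(ty, min(ty + TILE, height)):
            payload += bytes(curr[y][tx:tx + TILE])
    return tiles, zlib.compress(bytes(payload))


# Usage: an 8x8 frame where a single pixel changes -> one damaged tile.
w = h = 8
frame_a = [[0] * w for _ in range(h)]
frame_b = [row[:] for row in frame_a]
frame_b[1][1] = 255
tiles, packet = encode_update(frame_a, frame_b, w, h)
print(tiles)  # only the tile containing the changed pixel
```

A real compositor would track damage reported by clients rather than diffing pixels, but the wire-level effect is the same: only changed regions travel.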



              • #22
                Originally posted by daniels View Post
                I don't have the coding skills to understand something like that.

                Originally posted by daniels View Post

                It's like VNC in that we send the final composed images, rather than a series of rendering commands (gradient here, text here, etc). This usually ends up being cheaper to transfer over the wire, as is true for most things today - even 3D scenes, which were once totally remotable since it was just a series of (not very many) polygons. But unlike VNC, it does smart damage and compression.
                And thanks for the answer.



                • #23
                  Originally posted by daniels View Post
                  You can see for yourself: http://cgit.freedesktop.org/~krh/weston/log/?h=remote

                  It's like VNC in that we send the final composed images, rather than a series of rendering commands (gradient here, text here, etc). This usually ends up being cheaper to transfer over the wire, as is true for most things today - even 3D scenes, which were once totally remotable since it was just a series of (not very many) polygons. But unlike VNC, it does smart damage and compression.
                  Smart damage + compression?

                  Beware of this patent from Microsoft. They've been very active patenting everything related to RDP.
                  http://www.google.es/patents/US82093...G4Dg#v=onepage



                  • #24
                    Originally posted by newwen View Post
                    Smart damage + compression?

                    Beware of this patent from Microsoft. They've been very active patenting everything related to RDP.
                    http://www.google.es/patents/US82093...G4Dg#v=onepage
                    Aside from the fact that patents cover everything you'd ever possibly think of, theirs covers transmitting rendering commands over the wire and then having them rasterised separately. That isn't us.



                    • #25
                      Finally a video that articulates my understanding of the X/Wayland situation. Sometimes while reading discussions here at Phoronix I start to doubt myself, since so many people write utter crap with such certainty.

                      Good to see that Wayland development is on the right track, and that the people designing it seem to really know what they are doing.



                      • #26
                        Originally posted by shmerl View Post
                        I have no problem playing this in Firefox...
                        What ancient/buggy video plug-in are you using?



                        • #27
                          How does Wayland handle multiple screens in "clone mode" with different subpixel geometries?

                          If the client is responsible for antialiasing, subpixel rendering, or some kind of transform, and you have different kinds of monitors connected to your graphics card (or a transformation applied to one of them), the image will be messed up on one of them.

                          Rendering performed by clients should be abstracted from output devices (the way PostScript is for printers), and the actual rendering should happen on the server.

                          There's a reason X11 is complex, and I'm growing less convinced that Wayland is a good solution for Linux graphics.
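The subpixel concern is easy to demonstrate with a sketch (illustrative only; the coverage values and stripe layouts are simplified). A client antialiasing a glyph edge lights only the leftmost subpixel stripe; which colour channel that maps to depends on the panel's stripe order, so one shared buffer cannot be right for both an RGB and a BGR panel at once:

```python
def edge_pixel_for(order):
    """Pixel value covering only the left third of a white-on-black edge."""
    # Geometric coverage: the glyph edge covers only the leftmost stripe.
    coverage = {"left": 255, "mid": 0, "right": 0}
    # Which channel sits at which geometric position depends on the panel.
    stripes = {"RGB": ("R", "G", "B"), "BGR": ("B", "G", "R")}[order]
    chan = {stripes[0]: coverage["left"],
            stripes[1]: coverage["mid"],
            stripes[2]: coverage["right"]}
    return (chan["R"], chan["G"], chan["B"])


rgb_pixel = edge_pixel_for("RGB")  # red channel lit: red stripe is leftmost
bgr_pixel = edge_pixel_for("BGR")  # blue channel lit: blue stripe is leftmost
# In clone mode a single buffer feeds both panels, so whichever assumption
# the client baked in produces colour fringes on the other panel.
print(rgb_pixel, bgr_pixel)
```

This is exactly why the choice is between server-side rendering with per-output knowledge and client-side rendering that must be told (or must ignore) each output's subpixel geometry.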



                          • #28
                            Messed up?

                            Originally posted by newwen View Post
                            How does Wayland handle multiple screens in "clone mode" with different subpixel geometries?

                            If the client is responsible for antialiasing, subpixel rendering, or some kind of transform, and you have different kinds of monitors connected to your graphics card (or a transformation applied to one of them), the image will be messed up on one of them.

                            Rendering performed by clients should be abstracted from output devices (the way PostScript is for printers), and the actual rendering should happen on the server.

                            There's a reason X11 is complex, and I'm growing less convinced that Wayland is a good solution for Linux graphics.

                            No, it won't be.
                            Context resolution mainly happens in the appropriate graphics drivers, which handle their own context (even with multiple screens and modes).
                            It is the task of the compositor to tell the drivers what to do, so the client-side implementation makes sense. Nothing really stops you from writing a library that makes this handling easy.
                            I am sure it would be simpler than the bloatware that the Xorg server is in many cases.



                            • #29
                              Originally posted by frign View Post
                              No, it won't be.
                              Context resolution mainly happens in the appropriate graphics drivers, which handle their own context (even with multiple screens and modes).
                              It is the task of the compositor to tell the drivers what to do, so the client-side implementation makes sense. Nothing really stops you from writing a library that makes this handling easy.
                              I am sure it would be simpler than the bloatware that the Xorg server is in many cases.
                              My point is that clients cannot render subpixels correctly to buffers if they don't know what context they are rendering to. I don't know if the X server actually takes that into account when rendering, but ideally clients could give the server context-independent commands (as in PostScript), which are then transformed and rendered by the server. Of course, this is not as fast as direct rendering by the client.



                              • #30
                                Sub-Pixel-Rendering

                                Originally posted by newwen View Post
                                My point is that clients cannot render subpixels correctly to buffers if they don't know what context they are rendering to. I don't know if the X server actually takes that into account when rendering, but ideally clients could give the server context-independent commands (as in PostScript), which are then transformed and rendered by the server. Of course, this is not as fast as direct rendering by the client.
                                I am not completely familiar with the Wayland spec, but I am certain this is part of it. How did the devs put it? "Every frame is perfect." And judging from my tests with GL applications (like glgears), this works well.

