The Wayland Situation: Facts About X vs. Wayland


  • #71
    Originally posted by sobkas View Post
    There ya go, another unfixable bug thanks to backwards compatibility. I don't know Wayland's implementation of Keycodes so I'll let Daniel answer in regards to that, if he wants to.
    All opinions are my own not those of my employer if you know who they are.

    Comment


    • #72
      Not quite I think

      Originally posted by Ericg View Post
      There ya go, another unfixable bug thanks to backwards compatibility. I don't know Wayland's implementation of Keycodes so I'll let Daniel answer in regards to that, if he wants to.
      Couldn't we just send the UTF code per character (of the stuff you type with your Wayland-enabled keyboard that reads all the keys correctly) to XWayland? Basically, your keyboard and keyboard driver are handled by the Wayland API, and then XWayland sends your keystrokes, as UTF codes representing each character, to the specific X window you want to type to. You get to use all the keys on non-standard/many-key keyboards and you love Wayland all the more for it.

      Same thing with Mir.

      Still love Wayland because it replaces the inefficient middleman that is X-server. No hate on X11 or the possible future X12.

      Comment


      • #73
        Originally posted by sireangelus View Post
        so I do need a freaking EGL to work? That means endless and countless driver event wakeups. On my laptop, having KDE or Xfce with compositing on and no effects gives about 1h15m of battery life. Can you tell me if Wayland would provide some mitigation for this issue?
        no, you don't - wayland itself does not require egl or any kind of hardware acceleration, and works perfectly with clients using standard posix shared memory buffers (e.g. most of the weston sample clients, all of gtk+). weston has a pixman (software renderer) backend too.

        Comment


        • #74
          Originally posted by TheBlackCat View Post
          1. How are top-level windows and sub-surface windows kept synchronized, perhaps using flash in a web-browser as an example?
          the top-level subsurface can freeze updates of its children, and also controls the positioning. so in this case, it would simply set the position and then render the scrolled browser content, and these two updates would occur atomically. in the case where the child needs to be involved, the parent would freeze updates for the tree, ask the child to render new content (or resize, or whatever), wait until it did, and then unfreeze the tree which would push parent and child updates simultaneously. flash-in-browser and resizing-video are the exact two cases we designed this for.
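A toy model of that freeze/commit behaviour might look like this (hypothetical classes, not the real protocol objects): in synchronized mode the child's pending state is only applied when the parent commits, so both updates land atomically.

```python
# Toy model (simplified, not the real Wayland objects) of synchronized
# subsurfaces: a child's state is cached until the parent commits, so
# parent and child updates appear on screen in the same frame.
class Surface:
    def __init__(self):
        self.current = None      # what the compositor shows
        self.pending = None      # what the client has drawn but not applied
        self.children = []

    def attach(self, content):
        self.pending = content

    def commit(self):
        self.current = self.pending
        # Applying the parent's state releases the cached child states too.
        for child in self.children:
            child.current = child.pending

browser = Surface()
flash = Surface()
browser.children.append(flash)

# Scroll: the child renders at the new offset, the parent redraws the page.
flash.attach("plugin frame @ y=120")
browser.attach("page scrolled to y=120")
assert flash.current is None                      # child update is held back...
browser.commit()
assert flash.current == "plugin frame @ y=120"    # ...until the parent commits
```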

          Originally posted by TheBlackCat View Post
          2. What happens when part of a sub-surface window is obscured within a top-level window, such as using the scroll bar to move the flash animation above or below the top of the window?
          right now, this isn't possible, but it's not a subsurface problem per se. we'll be addressing it with an orthogonal clipping and scaling extension, which is also extremely useful for video in particular.

          Originally posted by TheBlackCat View Post
          3. I assume sub-surfaces have to be part of another window, but can they be nested (i.e. a sub-surface window being part of another sub-surface window), or can sub-surface windows only be part of top-level windows?
          well, you can't have a sub-surface without a parent ... else it's just a surface. they can be nested.

          Originally posted by TheBlackCat View Post
          4. Do sub-surface windows have complete control over their own buffer, or can top-level windows manipulate one of its sub-surface window buffers before passing it to the compositor?
          they absolutely have control over their own buffer. the parent has absolutely no influence whatsoever, other than controlling its stacking and positioning. not the contents tho.

          Originally posted by TheBlackCat View Post
          5. Why is the coordinate counter 31 bits? That seems like a strange number.
          it's a signed 32-bit integer.
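The "31 bits" remark falls out of the arithmetic: a signed 32-bit integer spends one bit on the sign, leaving 31 bits of magnitude.

```python
# A signed 32-bit integer covers -2**31 .. 2**31 - 1: one bit goes to the
# sign, so the usable coordinate magnitude is the "strange" 31 bits.
INT32_MIN = -2**31
INT32_MAX = 2**31 - 1
assert INT32_MAX == 2_147_483_647
assert INT32_MAX.bit_length() == 31
```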

          Originally posted by TheBlackCat View Post
          6. Is the coordinate counter count the total number of pixels, or the pixel along a particular axis? This isn't clear from the description.
          all co-ordinates are surface-relative; clients never, ever see global co-ordinates.

          Comment


          • #75
            Wayland is an admission that NeXT/Apple's WindowServer was the correct approach way back when, and it will allow solutions along the lines of Quartz/Quartz Extreme, with X11 on the way out.

            Comment


            • #76
              Originally posted by TheCycoONE View Post
              That doesn't exactly match my understanding. The way I understand it, there are several drawing api's (OpenGL, OpenVG, OpenGL ES, Direct 3d), and then there's another layer (traditionally GLX or AIGLX) below that for creating a context and letting those APIs interface with windows. My understanding, though I may be wrong, is that EGL is meant to replace X's GLX or Apple's CGL with one cross platform standard.
              yep, correct.

              Comment


              • #77
                Maybe I don't understand it clearly, but what is described is basically this:
                Wayland manages inputs and pixmaps (pixel-perfect images of the windows).
                The client (like Weston) will likely provide the video drivers and the animation.

                So, for example, in something GNOME Shell-like (I want an example that is supposedly being ported to Wayland today), what is Mutter? Is it a client for Wayland? Should Mutter contain the video drivers?

                As driver questions are asked fairly often (like when NVidia/AMD/whatever will support Wayland, as a client or otherwise), is there any plan to integrate a full stack (including, say, Gallium3D/LLVMpipe) as a default "fallback" configuration?

                Last but not least: to my understanding, Wayland uses TCP for all protocol operations. Are there any ways to do zero-copy of the surfaces (like by using shared memory)? Is it possible for the client to tell Wayland about a surface stored on the video card, along the lines of "here is the texture ID", so that no real copying occurs between Wayland and the client?

                Comment


                • #78
                  Originally posted by ᘜᕟᗃᒟ View Post
                  Couldn't we just send the UTF code per character (of the stuff you type with your Wayland-enabled keyboard that reads all the keys correctly) to XWayland? Basically, your keyboard and keyboard driver are handled by the Wayland API, and then XWayland sends your keystrokes, as UTF codes representing each character, to the specific X window you want to type to. You get to use all the keys on non-standard/many-key keyboards and you love Wayland all the more for it.
                  the short answer is no.

                  the long answer is that they're two separate problems. you still need a keycode and state-based system if you want to use keycodes at all. having a separate unicode interface is essentially what input methods do, which is why wayland has a text/input-method api which lets clients do this too.

                  either way, wayland fully supports 32-bit keycodes, which is more than enough.
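The keycode-versus-characters distinction can be illustrated with a toy keymap (hypothetical values, not real XKB data): the same physical key produces different text depending on modifier state, and some keys produce no text at all, so a stream of UTF code points cannot replace keycode+state.

```python
# Toy keymap (hypothetical values, not real XKB data). Keys are looked up by
# (keycode, modifier state); some keycodes map to no character at all.
KEYMAP = {
    (38, frozenset()): "a",
    (38, frozenset({"shift"})): "A",
    (113, frozenset()): None,          # Left arrow: a key with no character
}

def keysym(keycode, mods):
    """Resolve a keycode plus modifier state to a character, if any."""
    return KEYMAP.get((keycode, frozenset(mods)))

assert keysym(38, []) == "a"           # same key...
assert keysym(38, ["shift"]) == "A"    # ...different character under Shift
assert keysym(113, []) is None         # cannot be expressed as a UTF code
```

Sending only the resulting characters would discard the state that produced them, which is why the character-level interface lives in the separate input-method API instead.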

                  Comment


                  • #79
                    Originally posted by daniels View Post
                    the short answer is no.

                    the long answer is that they're two separate problems. you still need a keycode and state-based system if you want to use keycodes at all. having a separate unicode interface is essentially what input methods do, which is why wayland has a text/input-method api which lets clients do this too.

                    either way, wayland fully supports 32-bit keycodes, which is more than enough.
                    Hey look who decided to join the party haha
                    All opinions are my own not those of my employer if you know who they are.

                    Comment


                    • #80
                      Originally posted by Ericg View Post
                      1) Surface and SubSurface windows are kept in sync through the protocol, I WANT to say they are kept in lockstep via the CPU but it could just as easily be a feature of Hardware Overlays.

                      2) If they are handling it the same way they are minimize...they continue to render, this way the exact image is available at all times.

                      3) Unsure.

                      4) Unsure. I know there are security hooks to make sure clients do not mess with each other's buffers. If SubSurface windows are considered a part of the same client, then yes, they could manipulate it. If they are separate, then no.

                      5) I meant to ask Daniel, but I forgot about it. X was an odd number too at 15, I can only assume that they are using the extra bit for something other than actual counting.

                      6) 99% sure it's the total number of pixels, so X & Y together. I didn't find any information to make me think otherwise.
                      About 5 and 6:
                      They aren't using a signed number, are they? That was the first thing that came to mind (obviously), but I can't see how that would be useful.
                      Assuming they are using the same addressing scheme as X, it has to be purely axial; otherwise we would have passed the 32k-pixel max in X long ago.
                      If it isn't, 2G isn't that much. You can get that now by putting together a wall of 4K TVs (well, 500 of them).

                      Comment
