Handling Overlays & Input With Wayland's Weston

  • Handling Overlays & Input With Wayland's Weston

    Phoronix: Handling Overlays & Input With Wayland's Weston

    This year, the X.Org development track at FOSDEM 2013 had just two talks concerning Wayland. One talk covered input with the Weston reference compositor, while the other covered using hardware overlays for Weston...


  • #2
    The audio is bad as usual (barely audible voice); not worth watching unless you really want to.



    • #3
      I liked Ander's presentation, but I think the overlay architecture is a bit "monolithic".
      There are only 4 overlays, each with monolithic properties.
      I think in the near future they will change that, making Weston manage as many overlays as userspace needs, and maybe making it a little more flexible.
      Like a stack of overlays where one overlay is fullscreen but with sprite behavior, and another covers some area, maybe 128x32, with cursor behavior. But all very flexible.



      • #4
        Originally posted by rxonda View Post
        I liked Ander's presentation, but I think the overlay architecture is a bit "monolithic".
        There are only 4 overlays, each with monolithic properties.
        I think in the near future they will change that, making Weston manage as many overlays as userspace needs, and maybe making it a little more flexible.
        Like a stack of overlays where one overlay is fullscreen but with sprite behavior, and another covers some area, maybe 128x32, with cursor behavior. But all very flexible.
        I think you might be missing the fact that there will be one sprite plane per hardware overlay, in case you were counting just primary+scanout+sprite+cursor=4. In fact, counting like that is not really accurate either, because when the scanout plane is in use, the primary plane is not visible and so does not need a hardware resource at that time.

        The number of planes is limited by hardware. If we implemented additional planes by software, we would end up compositing with a renderer, which is exactly the thing we try to avoid by using overlays. Of course, that could still be done, but it's not easy to organize it so that it would be a benefit instead of more overhead. So, adding more planes (or overlays as you say) actually means adding more hardware.

        Also note that clients (I assume Wayland clients is what you meant by "userspace") know nothing about planes. Everything Ander presented works without explicit client cooperation. The only requirement from clients in the DRM backend case is that they need to use accelerated graphics (OpenGL, GLESv2, VAAPI(?), ...) so that their buffers can end up in hardware overlays via the sprite planes.
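
        To make the plane-assignment part a little more concrete: below is a rough, minimal sketch of the legacy KMS call that a DRM-based compositor can use to put an already-imported client buffer onto a hardware plane. This is not Weston's actual code; the plane, CRTC, and framebuffer IDs are placeholders that a real compositor would obtain from drmModeGetPlaneResources() and drmModeAddFB2().

        /* Illustration only: show a framebuffer (fb_id) on a hardware plane via
         * libdrm. A real compositor discovers plane_id with
         * drmModeGetPlaneResources()/drmModeGetPlane() and checks that the plane
         * supports the buffer's pixel format before doing this. */
        #include <stdint.h>
        #include <xf86drm.h>
        #include <xf86drmMode.h>

        int show_buffer_on_plane(int drm_fd, uint32_t plane_id, uint32_t crtc_id,
                                 uint32_t fb_id, uint32_t width, uint32_t height)
        {
            /* The destination rectangle is in CRTC pixels; the source rectangle
             * is given in 16.16 fixed-point coordinates. */
            return drmModeSetPlane(drm_fd, plane_id, crtc_id, fb_id, 0 /* flags */,
                                   0, 0, width, height,
                                   0, 0, width << 16, height << 16);
        }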



        • #5
          Originally posted by pq__ View Post
          I think you might be missing the fact that there will be one sprite plane per hardware overlay, in case you were counting just primary+scanout+sprite+cursor=4. In fact, counting like that is not really accurate either, because when the scanout plane is in use, the primary plane is not visible and so does not need a hardware resource at that time.

          The number of planes is limited by hardware. If we implemented additional planes by software, we would end up compositing with a renderer, which is exactly the thing we try to avoid by using overlays. Of course, that could still be done, but it's not easy to organize it so that it would be a benefit instead of more overhead. So, adding more planes (or overlays as you say) actually means adding more hardware.

          Also note that clients (I assume Wayland clients is what you meant by "userspace") know nothing about planes. Everything Ander presented works without explicit client cooperation. The only requirement from clients in the DRM backend case is that they need to use accelerated graphics (OpenGL, GLESv2, VAAPI(?), ...) so that their buffers can end up in hardware overlays via the sprite planes.
          Thanks pq__ for your explanation.
          I was just worried about Wayland repeating X11's mistakes, building architectures that are very hard to change or evolve.
          I have to familiarize myself more with the Weston/Wayland stack.

          So, let me see if I get it... Each plane will be like a virtual framebuffer (except this memory could be main memory or GPU memory), each one given to a renderer that writes into this virtual framebuffer, which is later submitted to be displayed (via some ioctl command or something)?



          • #6
            Originally posted by rxonda View Post
            So, let me see if I get it... Each plane will be like a virtual framebuffer (except this memory could be main memory or GPU memory), each one given to a renderer that writes into this virtual framebuffer, which is later submitted to be displayed (via some ioctl command or something)?
            Roughly yes. However, when we talk about Wayland, all that is completely irrelevant.

            Planes are an internal implementation detail of the Weston compositor, and they are not reflected in the Wayland protocol. We could replace them with something completely different at any time, and client applications would never know. Or, put the other way around, applications do not have to be specifically coded to support planes or overlays. All clients will automatically get the benefits of overlays when the compositor chooses to use them.

            Well, while the above is true, there will be a protocol extension that will help to hit the overlay path in more cases: sub-surfaces. The sub-surface extension is still in the works, but the aim is that for instance a video player can put the video in some YUV format into a sub-surface directly, instead of converting it into RGB and combining with window decorations and other application graphics. The compositor will then combine all these pieces of a window together, doing whatever color space conversions it needs, with or without overlays. Not only can the compositor be more efficient in the color conversions and compositing than a client, the compositor may also avoid these operations completely by feeding the video into an overlay, which does all that in dedicated video hardware and with better quality. The sub-surface protocol is in no way tied to planes, planes are just a way to implement a fast path for the sub-surfaces.

            Also, the capability to bypass compositing altogether when just a single fullscreen window (e.g. a game) is visible on an output is part of the overlay handling code in Weston.
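
            For illustration, here is a hypothetical client-side sketch of the sub-surface usage described above, written against the planned wl_subcompositor/wl_subsurface interfaces. The extension is still in the works, so the details may change, and all objects passed in are placeholders, not a finished recipe.

            #include <stdint.h>
            #include <wayland-client.h>

            /* Sketch only: embed a video surface inside an application window.
             * The compositor may composite the YUV buffer with its renderer or
             * feed it straight into a hardware overlay; the client cannot tell. */
            static struct wl_subsurface *
            embed_video(struct wl_compositor *compositor,
                        struct wl_subcompositor *subcompositor,
                        struct wl_surface *main_surface,
                        struct wl_buffer *yuv_buffer,
                        int32_t width, int32_t height)
            {
                /* A separate surface for the video, parented to the window. */
                struct wl_surface *video_surface =
                    wl_compositor_create_surface(compositor);
                struct wl_subsurface *sub =
                    wl_subcompositor_get_subsurface(subcompositor, video_surface,
                                                    main_surface);

                /* Position the video below the window decorations (placeholder). */
                wl_subsurface_set_position(sub, 0, 20);

                /* Attach the YUV buffer as-is; no RGB conversion in the client. */
                wl_surface_attach(video_surface, yuv_buffer, 0, 0);
                wl_surface_damage(video_surface, 0, 0, width, height);
                wl_surface_commit(video_surface);

                return sub;
            }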



            • #7
              Thanks pq__,
              That was really enlightening.

              Originally posted by pq__ View Post
              ...instead of converting it into RGB and combining with window decorations and other application graphics. The compositor will then combine all these pieces of a window together, doing whatever color space conversions it needs, with or without overlays. Not only can the compositor be more efficient in the color conversions and compositing than a client, the compositor may also avoid these operations completely by feeding the video into an overlay, which does all that in dedicated video hardware and with better quality...
              So you've just said that the compositor will only have to get a stream from the video card (in the case of video acceleration) and put the result into a sub-surface? So, when it goes fullscreen, the client asks the video card to scale up and the compositor changes the target to a sub-surface that fills a fullscreen plane, is that it?

              Sorry to bother you with so many questions, but the subject is very interesting to me.



              • #8
                Originally posted by rxonda View Post
                So you've just said that the compositor will only have to get a stream from the video card (in the case of video acceleration) and put the result into a sub-surface? So, when it goes fullscreen, the client asks the video card to scale up and the compositor changes the target to a sub-surface that fills a fullscreen plane, is that it?
                Not exactly.

                In case of video, a client will usually somehow produce a buffer containing some form of YUV color data per video frame. If the video should run in a window, the client can create a main surface for window decorations and GUI, and a sub-surface for the video. If the video should be fullscreen, the client only needs a main surface, and asks the compositor to present it as fullscreen. Then the client just attaches (sends to the compositor) YUV-buffers for the (sub-)surface. The compositor will do color conversion, scaling, and compositing as necessary, or assign the YUV-buffers directly to a hardware overlay if it can.
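
                As a rough, hypothetical sketch of the fullscreen case, using the current wl_shell interface (names and arguments are placeholders, not a prescribed recipe):

                #include <stddef.h>
                #include <stdint.h>
                #include <wayland-client.h>

                /* Sketch only: present decoder-produced YUV frames fullscreen.
                 * Scaling, color conversion, or a direct scanout/overlay path is
                 * entirely up to the compositor. */
                static void
                present_fullscreen_frame(struct wl_shell_surface *shell_surface,
                                         struct wl_surface *surface,
                                         struct wl_buffer *yuv_buffer,
                                         int32_t width, int32_t height)
                {
                    /* Ask the compositor to present this surface fullscreen. */
                    wl_shell_surface_set_fullscreen(
                        shell_surface,
                        WL_SHELL_SURFACE_FULLSCREEN_METHOD_DEFAULT,
                        0 /* framerate: don't care */,
                        NULL /* let the compositor pick the output */);

                    /* Attach the YUV buffer as-is and commit the frame. */
                    wl_surface_attach(surface, yuv_buffer, 0, 0);
                    wl_surface_damage(surface, 0, 0, width, height);
                    wl_surface_commit(surface);
                }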



                • #9
                  Thanks pq__ for your explanation and your time.

