Wayland Support For Pinging, Fading Clients

  • Wayland Support For Pinging, Fading Clients

    Phoronix: Wayland Support For Pinging, Fading Clients

    Patches were published today that introduce pinging support for Wayland clients, in an attempt to determine if a client is dead or alive. Should a client not respond to the ping request, the Wayland client's surface is faded-out...
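
    On the client side, answering such a ping should amount to a small event handler that echoes the compositor's serial back. A minimal sketch, assuming the wl_shell_surface interface from wayland-client (the protocol in these patches may differ in detail):

    /* Hedged sketch: a client answering the compositor's ping so its surface
     * is not treated as dead and faded out. */
    #include <wayland-client.h>

    static void handle_ping(void *data, struct wl_shell_surface *shell_surface,
                            uint32_t serial)
    {
        /* Echo the serial back; failing to do this in time is what would
         * mark the client as unresponsive. */
        wl_shell_surface_pong(shell_surface, serial);
    }

    static void handle_configure(void *data, struct wl_shell_surface *shell_surface,
                                 uint32_t edges, int32_t width, int32_t height)
    {
        /* Resize handling omitted in this sketch. */
    }

    static void handle_popup_done(void *data, struct wl_shell_surface *shell_surface)
    {
    }

    static const struct wl_shell_surface_listener shell_surface_listener = {
        handle_ping,
        handle_configure,
        handle_popup_done,
    };

    /* During surface setup:
     *   wl_shell_surface_add_listener(shell_surface, &shell_surface_listener, NULL);
     */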


  • #2
    As to features, on window$ 7 a new video driver installs and starts running without having to reboot or re-login. I wonder how window$ 7 achieved that and if Wayland is planning such a feature as well?

    • #3
      Originally posted by cl333r View Post
      As to features, on window$ 7 a new video driver installs and starts running without having to reboot or re-login. I wonder how window$ 7 achieved that and if Wayland is planning such a feature as well?
      This is unrelated to Wayland, as far as I know. In Windows, this is thanks to the Windows Display Driver Model (WDDM) introduced in Vista. The Linux equivalent is Direct Rendering Infrastructure 2 (DRI2). As far as I know this is possible in Linux as well simply by swapping out the user space part of the video driver. But the kernel interfaces change so frequently that it won't keep working for long once you upgrade your kernel. And it would only work with the open source drivers; the proprietary drivers implement their own kernel bits.

      When Microsoft says "we have this new video driver interface", Nvidia and AMD answer "OK we'll rebuild our driver for that" before MS can finish their sentence. When the kernel devs say "we have this new video driver interface", Nvidia and AMD answer "no" before the devs have finished. That's the privilege of having 99% market share.

      • #4
        Also, it seems the ping design needs some changing:
        <soreau> hmm, I wonder if it might be better to only do one ping at a time instead of pinging the client on every input event
        <soreau> i.e. wait for the client to respond before sending another ping
        <fredrikh> soreau: maybe rate limit it as well
        <soreau> fredrikh: wouldn't only issuing one ping at a time automatically rate limit it?
        <soreau> oh I see what you're saying.. just don't have a timeout
        <soreau> well, still would need a max timeout
        <soreau> I would have to rethink it a bit
        <krh> soreau: yeah, only having one outstanding ping at a time makes sense
        <krh> and should make the code simpler too
        <soreau> yea it probably should just have a single timer on shell_surface instead of a list I guess
        <krh> soreau: yup
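
        A rough compositor-side sketch of that suggestion (one outstanding ping, a single timer per shell surface). The struct fields and the send_ping/fade helpers are hypothetical; only the wl_event_loop timer calls are real libwayland-server API:

        #include <wayland-server.h>

        struct shell_surface {
            struct wl_event_source *ping_timer;   /* one timer, not a list */
            uint32_t ping_serial;
            int unresponsive;
        };

        /* Hypothetical helpers, bodies not shown. */
        static void send_ping(struct shell_surface *shsurf, uint32_t serial);
        static void fade_out_surface(struct shell_surface *shsurf);
        static void fade_in_surface(struct shell_surface *shsurf);

        #define PING_TIMEOUT_MS 200

        static int ping_timeout_handler(void *data)
        {
            struct shell_surface *shsurf = data;

            /* No pong arrived in time: mark the client unresponsive and fade it. */
            shsurf->unresponsive = 1;
            fade_out_surface(shsurf);
            return 1;
        }

        static void ping_client(struct shell_surface *shsurf, uint32_t serial,
                                struct wl_event_loop *loop)
        {
            if (shsurf->ping_timer)
                return;   /* only one outstanding ping at a time */

            shsurf->ping_serial = serial;
            shsurf->ping_timer = wl_event_loop_add_timer(loop, ping_timeout_handler,
                                                         shsurf);
            wl_event_source_timer_update(shsurf->ping_timer, PING_TIMEOUT_MS);
            send_ping(shsurf, serial);
        }

        static void handle_pong(struct shell_surface *shsurf, uint32_t serial)
        {
            if (!shsurf->ping_timer || serial != shsurf->ping_serial)
                return;

            /* Client answered: cancel the timer and un-fade if necessary. */
            wl_event_source_remove(shsurf->ping_timer);
            shsurf->ping_timer = NULL;
            if (shsurf->unresponsive) {
                shsurf->unresponsive = 0;
                fade_in_surface(shsurf);
            }
        }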

        • #5
          Originally posted by runeks View Post
          When Microsoft says "we have this new video driver interface", Nvidia and AMD answer "OK we'll rebuild our driver for that" before MS can finish their sentence.
          Hardly. Microsoft actively engaged the hardware manufacturers, and the design and implementation of WDDM and the creation of the new drivers was a years-long process that involved a good deal of back-and-forth. The same is true for the Direct3D releases.

          The FOSS community does not engage with the hardware manufacturers as much; one can of course easily argue that this is the hardware manufacturers' fault.

          The Linux kernel community actively opposes the kinds of things that WDDM does, as Windows NT is a hybrid micro-kernel design which WDDM takes full advantage of. It allows the entire driver (kernel portion too) to be removed, installed, or upgraded on a running system. It even allows the kernel portion to crash and be restarted, which happens more often than one would like (even on Linux; the video drivers cause an awful lot of kernel oopses compared to other drivers).

          The Linux equivalent is Direct Rendering Infrastructure 2 (DRI2). As far as I know this is possible in Linux as well simply by swapping out the user space part of the video driver.
          DRI2 is not equivalent to WDDM. There are no direct parallels. The Windows graphics stack is very different from the Linux stack, and there is not any kind of one-to-one mapping between Windows and Linux graphics modules.

          Also note that swapping the driver is not possible on Linux the way it is on Windows, due to the way the interfaces currently work. Swapping drivers requires the ability to notify an application that its entire graphics context has just been destroyed out from under it and that it must recreate it. X11/GLX is not capable of this at all (well, it is, but only in a very non-ideal manner). Wayland may include the necessary protocol for it; unsure.
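
          For what it's worth, the closest existing analogue on the GL side is the ARB_robustness reset notification, which at least lets an application find out that its context is gone and must be rebuilt. A small sketch (the recovery callback is hypothetical, and the context has to be created with reset notification enabled, e.g. via GLX_ARB_create_context_robustness):

          #include <GL/gl.h>
          #include <GL/glext.h>

          /* Hypothetical application callback that tears down and rebuilds the
           * GL context and every resource that lived in it. */
          void recreate_context_and_resources(void);

          /* The function pointer is fetched at runtime, e.g. via glXGetProcAddress. */
          void check_for_context_reset(PFNGLGETGRAPHICSRESETSTATUSARBPROC
                                       glGetGraphicsResetStatusARB)
          {
              GLenum status = glGetGraphicsResetStatusARB();

              if (status != GL_NO_ERROR) {
                  /* GUILTY / INNOCENT / UNKNOWN_CONTEXT_RESET_ARB: the context
                   * is lost and must be rebuilt from scratch. */
                  recreate_context_and_resources();
              }
          }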

          I imagine Wayland will support it though, as this is related to the ability to switch GPUs at runtime, which even Windows still does in a less-than-ideal way. There are several ways of handling dual-GPU setups. The first has each application bind to a specific GPU at startup; both GPUs can be active at once, but disabling a GPU requires closing any applications using it (e.g., Optimus). A second way is to have only one GPU active at a time and to require all apps to be on it, but this means the whole system can be locked in low/high mode if any app holds a hardware-accelerated graphics context handle. Another way is to have a software driver that fully abstracts the underlying hardware, which means it does CPU-side resource management and disables all hardware-specific GL extensions. One more way is to simply tell applications that the hardware is about to go away and that they have to deal with it, maybe giving them the option to request a stay of execution (games in particular would likely just shut down if you unplugged a laptop and the OS tried to force a switch to an integrated GPU that lacked the feature support the game relies on).

          It's really annoying in Windows, though, when your browser blocks the machine from switching GPUs (since they all use Direct2D these days) and you have to restart Chrome/Firefox/IE after plugging in or unplugging your laptop (if your machine's particular implementation of dual graphics works that way). On the other hand, the per-application Optimus method is also annoying, because the user is constantly having to toggle application profiles; this hits the browsers again if you want to switch between running the browser on the low-power GPU and being able to play some of the new HTML5/NaCl 3D games that need the high-end GPU to run nicely.

          A better API would ideally let applications pick the GPU they want using flags like GPU_LOWEST_POWER or GPU_FASTEST. Browsers could even pick based on page content, so that the accelerated rendering context prefers a low-power GPU while a page using WebGL or GL|ES in NaCl gets the fast GPU (assuming the hardware allows both to be active simultaneously). The protocol would also be able to notify apps that the system wants to switch their graphics context and let them respond with a yay or nay. The system could then tell the user which application, if any, blocked the switch, so the user can decide whether to keep using a particular GPU or restart the application.
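
          Sketching what such an API could look like, purely hypothetically (none of these types, flags or functions exist in Wayland, EGL or anywhere else; they just restate the idea above in C):

          #include <stdbool.h>

          enum gpu_preference {
              GPU_LOWEST_POWER,   /* e.g. integrated GPU for plain pages */
              GPU_FASTEST,        /* e.g. discrete GPU for WebGL / 3D games */
              GPU_DONT_CARE,
          };

          /* Called by the system when it wants to migrate this application to
           * another GPU. Returning false vetoes the switch; the system can then
           * tell the user which application blocked it. */
          typedef bool (*gpu_switch_request_cb)(enum gpu_preference new_gpu,
                                                void *user_data);

          struct gfx_context;   /* opaque, hypothetical */

          struct gfx_context *gfx_context_create(enum gpu_preference pref);
          void gfx_context_set_switch_callback(struct gfx_context *ctx,
                                               gpu_switch_request_cb cb,
                                               void *user_data);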

          • #6
            Originally posted by elanthis View Post
            X11/GLX is not capable of this at all (well, it is, but only in a very non-ideal manner).
            I thought the X side was good and GTK/Qt just never implemented the necessary code. It would certainly be nice if Wayland devs were thinking about this as they designed the new protocol.

            • #7
              Originally posted by elanthis View Post
              The Linux kernel community actively opposes the kinds of things that WDDM does, as Windows NT is a hybrid micro-kernel design which WDDM takes full advantage of. It allows the entire driver (kernel portion too) to be removed, installed, or upgraded on a running system. It even allows the kernel portion to crash and be restarted, which happens more often than one would like (even on Linux; the video drivers cause an awful lot of kernel oopses compared to other drivers).
              Why does the Linux kernel community oppose this? Seems like a pretty useful feature. Does it add too much complexity?

              • #8
                Does anyone know what the current situation is (or at least what it will be when it hits 1.0) with multi-GPU support, GPU switching, etc.?

                • #9
                  I would very much like to be able to switch GPUs at runtime.

                  Also, please make sure that I can still move, maximize and minimize the window of a busy / unresponsive / hanging application. I wouldn't want it to get in the way of other things and waste screen space.

                  @elanthis
                  I like your line of reasoning.


                  In many applications it would be nice if things could be restarted when the driver crashes or all the graphics stuff gets pulled out from under them.
