
KDE Plasma's KWin Working On Per-Screen Refresh Rates, Compositing From Multiple Threads


  • #11
    Originally posted by Danny3 View Post
    What I don't get is why stuff needs to be throttled to the lowest common thing ?
    Let's say we have 2 monitors:
    1. 4K 10bit 120 Hz
    2. 2K 8bit 60 Hz
    Why not render everything for the best monitor in 10bit 120 Hz and then downgrade from there the bit depth and refresh rate for the other one ?
    It's hard to understand why the best monitor has to suffer because the other one doesn't have the same capabilities when stuff could've been downgraded for the one(s) that can't display that.
    It doesn't work like that. If one monitor does 144 Hz and the other does 60 Hz, how do you convert?
    60 Hz is the lowest common denominator that is guaranteed to be supported by everyone.
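    To see why 60 Hz pairs cleanly with 120 Hz while 144 Hz does not, here is a quick back-of-the-envelope check (the function name is mine, and it assumes ideal, drift-free clocks that started in phase):

    ```python
    from math import gcd

    def aligned_vblanks_per_second(hz_a: int, hz_b: int) -> int:
        """Number of times per second the two displays' vblanks line up
        exactly, assuming ideal (drift-free) clocks that started together."""
        return gcd(hz_a, hz_b)

    # 120 Hz and 60 Hz: every 60 Hz vblank coincides with a 120 Hz one,
    # so a single shared swap point can still serve both displays.
    print(aligned_vblanks_per_second(120, 60))   # 60
    # 144 Hz and 60 Hz: only 12 shared instants per second -- a single
    # swap point forces one display to miss most of its vblanks.
    print(aligned_vblanks_per_second(144, 60))   # 12
    ```

    So with integer ratios you can cheaply "downgrade" by halving, but with 144/60 or 75/60 there is no clean shared cadence, which is part of why per-screen scheduling is needed at all.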
    Originally posted by curfew View Post
    The author explained that the problem is KWin making "legacy" assumptions that aren't valid anymore with modern OpenGL drivers. The other half of the problem is of course X11 limitations that have been inherited by the Wayland implementation as well. Multithreading would be used to run separate compositors for each display. I guess there is no reliable and "easy" way to handle use cases where one display runs at 60 Hz and the second at 75 or even 144 Hz, as there are going to be frames for which timings overlap, and the window between two frames would also vary based on the aforementioned overlapping of timings.
    Originally posted by acobar View Post

    There are times when going multi-threading actually simplifies things and I suspect this is one case of it.
    Yes, I'm sure they're not going multi-threading just because. I was just saying the reasons are not apparent going by that post.
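    As a toy illustration of why per-screen threads can simplify things: each display just repaints on its own cadence, with no shared swap point to negotiate. The display names and rates below are made up, and the "paint" is a stand-in log append:

    ```python
    import threading
    import time

    def compositor_loop(name: str, hz: int, frames: int, log: list) -> None:
        """Toy per-display compositing loop: each output repaints on its
        own cadence instead of being throttled to a shared one."""
        period = 1.0 / hz
        for i in range(frames):
            log.append((name, i))      # stand-in for "paint one frame"
            time.sleep(period)

    logs = {"DP-1@120Hz": [], "HDMI-1@60Hz": []}
    threads = [
        threading.Thread(target=compositor_loop,
                         args=("DP-1@120Hz", 120, 4, logs["DP-1@120Hz"])),
        threading.Thread(target=compositor_loop,
                         args=("HDMI-1@60Hz", 60, 2, logs["HDMI-1@60Hz"])),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print({k: len(v) for k, v in logs.items()})  # each display painted independently
    ```

    Each loop is trivially simple on its own; the single-threaded alternative has to interleave both schedules in one loop, which is where the complexity (and the lowest-common-denominator throttling) comes from.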



    • #12
      Originally posted by bug77 View Post
      I doubt it helps any. Light load+multi threading=significant overhead (usually)
      There is overhead but I doubt it's all that significant in this case.
      I don't think software rasterizing+compositing is something that should be encouraged :P
      You're right, it shouldn't. But if you're using it as a fallback (which is pretty much what it's for) then at least it won't be too terribly slow.



      • #13
        Originally posted by Danny3 View Post
        What I don't get is why stuff needs to be throttled to the lowest common thing ?
        Let's say we have 2 monitors:
        1. 4K 10bit 120 Hz
        2. 2K 8bit 60 Hz
        Why not render everything for the best monitor in 10bit 120 Hz and then downgrade from there the bit depth and refresh rate for the other one ?
        It's hard to understand why the best monitor has to suffer because the other one doesn't have the same capabilities when stuff could've been downgraded for the one(s) that can't display that.
        It's an inherited bad design of X11 that made sense in early X11 and early OpenGL. Early multi-monitor GPUs were not smart enough to do this.

        https://blog.vladzahorodnii.com/2020...ow-and-future/
        One thing that's worth pointing out is that buffers are not swapped after finishing a compositing cycle, they are swapped at the start of the next compositing cycle, in other words, at the next vblank

        This is the way it used to be: buffer swaps were force-aligned to one monitor's vblank by the old GPUs. So of course you could not render at 120 Hz and downgrade on those old GPUs.

        Originally posted by curfew View Post
        The author explained that the problem is KWin making "legacy" assumptions that aren't valid anymore with modern OpenGL drivers. The other half of the problem is of course X11 limitations that have been inherited by the Wayland implementation as well. Multithreading would be used to run separate compositors for each display. I guess there is no reliable and "easy" way to handle use cases where one display runs at 60 Hz and the second at 75 or even 144 Hz, as there are going to be frames for which timings overlap, and the window between two frames would also vary based on the aforementioned overlapping of timings.
        From the blog.
        In case the buffer swap operation doesn’t block, which is typically the case with Mesa drivers, glXSwapBuffers() or eglSwapBuffers() will be called at the end of a compositing cycle. There is a catch though. Compositing won’t be synchronized to vblanks.

        The reality with most modern drivers is that SwapBuffers can be called more than once per vsync cycle, so you can do a 120 Hz buffer push on a 60 Hz or 75 Hz monitor. There will be a little loss of animation smoothness on the 60/75 Hz monitor in this case, from the odd misaligned frame or two, but it will normally be low enough that humans don't notice it too badly. Something that makes this worse is the fact that the 120 Hz and 60 Hz monitors can both be running on out-of-alignment clocks, and at worst both have slightly drifting clocks, so the alignment between them is constantly changing. Because of that clock drift, dealing with a 120 Hz plus a 60 Hz monitor is really no worse than dealing with a 120 Hz plus a 75 Hz monitor; drift makes them equal problems.
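        The clock-drift point can be put in rough numbers. A minimal sketch, assuming a typical 50 ppm crystal tolerance on the nominally 120 Hz panel (the tolerance figure and the function are mine, not from KWin or the blog):

        ```python
        def vblank_phase_drift(hz_fast: float, hz_slow: float, seconds: float) -> float:
            """Phase offset (in slow-display frame periods) accumulated after
            `seconds`, when the "fast" display's clock is 50 ppm off nominal.
            Illustrates why 'just show every other frame' slowly falls apart."""
            actual_fast = hz_fast * (1 + 50e-6)      # 50 ppm crystal error
            extra_fast_frames = (actual_fast - hz_fast) * seconds
            # Convert the surplus fast-display frames into slow-display periods.
            return extra_fast_frames / (hz_fast / hz_slow)

        # After 10 minutes, the nominally 2:1-locked pair has slipped almost
        # two whole 60 Hz frame periods out of phase.
        print(round(vblank_phase_drift(120.0, 60.0, 600.0), 2))   # 1.8
        ```

        So even an exact 2:1 rate ratio only stays aligned on paper; in hardware the phase wanders continuously, which is the point about 120/60 being no easier than 120/75.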

        Why would you want to avoid pushing 120 Hz to a 60/75 Hz monitor where you can?
        1) Power saving: yes, pushing 120 Hz worth of buffers uses more power than pushing 60 Hz. This is part of the reason for G-Sync/FreeSync.
        2) Being able to avoid some of the animation goofs caused by frame misalignment, by driving each monitor in a targeted way.

        Really, the way forward is most likely going to be a mix: allowing particular applications to render far faster than a particular monitor, with some frames simply disappearing into the void, while other things run exactly aligned to the monitor to keep power usage down.
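        The "frames disappearing into the void" idea can be sketched with a toy model (the function and numbers are mine): frames are produced at the render rate, but the display only latches the newest one at each of its own vblanks, and everything older is discarded.

        ```python
        def frames_shown(render_hz: int, display_hz: int, seconds: int) -> int:
            """Toy model of render-fast-and-drop: count how many distinct
            rendered frames actually reach the screen, when the display
            latches only the newest finished frame at each of its vblanks."""
            shown = set()
            for v in range(display_hz * seconds):
                vblank_t = v / display_hz
                latest = int(vblank_t * render_hz)   # newest frame done by now
                shown.add(latest)
            return len(shown)

        # A 120 Hz render loop feeding a 60 Hz panel: only 60 of the 120
        # frames per second are ever shown; the other half is wasted work,
        # which is the power argument for per-display pacing instead.
        print(frames_shown(120, 60, 1))   # 60
        ```

        The wasted half is fine for a game that wants minimum latency, but pure waste for a clock widget, hence the likely mix of both modes.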

        Again, this is a case where we have been doing it wrong under X11. The quick port to Wayland brought the X11 protocol and hardware-history flaws along with it. Now it's time to get that crud out.

        Do notice in the blog that the delay from user input to when it appears on screen can be two frames.

        if you press a key on the keyboard, it may take up to two frames before the corresponding symbol shows up on the screen. Same thing with videos, the audio might be playing two frames ahead of what is on the screen.

        This horrible level of mess has been the current-day X11 desktop. This is why I am sure a little bit of animation error from converting 120 Hz to 60/75 Hz is not going to be a problem: the error of converting on the fly is going to be no worse than what people have been putting up with all along under X11. The conversion is always going to be somewhat messy; the question is whether the mess is noticeable to humans.
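        For scale, the two-frame delay the blog mentions works out as follows (a simple arithmetic sketch, not measured data):

        ```python
        def compositor_queue_latency_ms(frames_of_delay: int, hz: int) -> float:
            """Input-to-photon delay contributed by the compositor queue
            alone: a fixed number of frame periods at the display's rate."""
            return frames_of_delay * 1000.0 / hz

        # The blog's two-frame worst case at common refresh rates:
        print(round(compositor_queue_latency_ms(2, 60), 1))    # 33.3 ms on a 60 Hz panel
        print(round(compositor_queue_latency_ms(2, 144), 1))   # 13.9 ms on a 144 Hz panel
        ```

        Roughly 33 ms of compositor-added latency on a 60 Hz panel is well inside what latency-measurement tools can resolve, which is why these extra frames are starting to get noticed.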

        With Nvidia now providing keyboard-to-screen input/output latency measurement tools, these extra frames are going to be detected by end users and complained about as well. Some of these problems now have to be fixed because we now have the tools to detect them.



        • #14
          Originally posted by sp82 View Post
          For now I would be happy to see a lag-free and smooth desktop experience on one monitor; I lost hope in multi-monitor support a long time ago because of crashes and desktop settings resetting every reboot.
          I was also getting various kinds of screen tearing (even a diagonal tear!) and a buggy multimonitor setup on an Nvidia laptop (ThinkPad T510).
          Then I went for a desktop machine with a Xeon with an integrated GPU, and it completely solved the multimonitor problems in KDE/Plasma, though there was still some screen tearing, albeit of a different kind than on Nvidia.
          Then I updated my desktop machine with an RX 580 card and I've had absolutely no problems with multimonitor setups or screen tearing WHATSOEVER in KDE/Plasma, and it's been that way for 3 years through various Plasma, kde-libs and kde-apps releases.
          That specific GPU has been an utterly magical problem solver.

          Note: I've always been on latest LTS kernel.
          Last edited by Grawp; 11 December 2020, 12:41 PM.



          • #15
            Hopefully I'll be able to use Chromium on Wayland on Plasma without it getting a generic W toolbar at the top. This happens whether I use system or Chrome toolbars in the settings.



            • #16
              Originally posted by curfew View Post
              The author explained that the problem is KWin making "legacy" assumptions that aren't valid anymore with modern OpenGL drivers. The other half of the problem is of course X11 limitations that have been inherited by the Wayland implementation as well. Multithreading would be used to run separate compositors for each display. I guess there is no reliable and "easy" way to handle use cases where one display runs at 60 Hz and the second at 75 or even 144 Hz, as there are going to be frames for which timings overlap, and the window between two frames would also vary based on the aforementioned overlapping of timings.
              What kind of limitations did Wayland inherit from X11?



              • #17
                Originally posted by Danny3 View Post
                What I don't get is why stuff needs to be throttled to the lowest common thing ?
                Let's say we have 2 monitors:
                1. 4K 10bit 120 Hz
                2. 2K 8bit 60 Hz
                Why not render everything for the best monitor in 10bit 120 Hz and then downgrade from there the bit depth and refresh rate for the other one ?
                It's hard to understand why the best monitor has to suffer because the other one doesn't have the same capabilities when stuff could've been downgraded for the one(s) that can't display that.
                Wouldn't that require a lot more processing power? More resources would also be consumed to downsample and convert the signal. And all of this would have to be done while maintaining synchronisation between the monitors.
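                Rough numbers back this up. A sketch of the raw fill-rate gap between the two example displays, assuming "2K" means 2560×1440 (resolutions inferred from the quoted post; the function name is mine):

                ```python
                def pixels_per_second(width: int, height: int, hz: int) -> int:
                    """Raw pixel throughput a compositor must fill for one display."""
                    return width * height * hz

                best = pixels_per_second(3840, 2160, 120)   # 4K @ 120 Hz
                lesser = pixels_per_second(2560, 1440, 60)  # 2K @ 60 Hz
                # Rendering everything at the best display's rate and downscaling
                # means paying the 4K/120 fill cost for both outputs, plus the
                # scaling pass, instead of the much cheaper native 2K/60 cost.
                print(best / lesser)   # 4.5
                ```

                So the "render once at the best settings and downgrade" approach multiplies the work on the weaker output by roughly 4.5x in this example, before the downscale and bit-depth conversion are even counted.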



                • #18
                  Assuming they fix the multi-monitor issues with heterogeneous refresh rates, I might be convinced to switch to KDE since I have 1 144Hz monitor and 2 60Hz monitors. 144Hz is awesome for games when it works!



                  • #19
                    Originally posted by programmerjake View Post
                    Assuming they fix the multi-monitor issues with heterogeneous refresh rates, I might be convinced to switch to KDE since I have 1 144Hz monitor and 2 60Hz monitors. 144Hz is awesome for games when it works!
                    I've been using Plasma X11 with kwin-lowlatency to solve the latency and refresh rate issues. That said, these patches are definitely welcome, as using a fork of KWin that only focuses on X11 is not ideal.



                    • #20
                      Originally posted by Grawp View Post

                      I was also getting various kinds of screen tearing (even a diagonal tear!) and a buggy multimonitor setup on an Nvidia laptop (ThinkPad T510).
                      Then I went for a desktop machine with a Xeon with an integrated GPU, and it completely solved the multimonitor problems in KDE/Plasma, though there was still some screen tearing, albeit of a different kind than on Nvidia.
                      Then I updated my desktop machine with an RX 580 card and I've had absolutely no problems with multimonitor setups or screen tearing WHATSOEVER in KDE/Plasma, and it's been that way for 3 years through various Plasma, kde-libs and kde-apps releases.
                      That specific GPU has been an utterly magical problem solver.

                      Note: I've always been on latest LTS kernel.
                      Nvidia's driver is only capable of tear-free output on the first monitor; any additional monitors will tear. It's not really a bug so much as a limitation of their TwinView implementation, which has other problems as well...

                      And yeah, same here: I've been totally tear-free since about 2010, when the AMD open source drivers overhauled their tear-free options for X11. (Their OSS drivers weren't always the most performant, but since then they've definitely been the least buggy and the best looking, provided you waited to purchase a new card until after its support was good.)
                      Last edited by duby229; 11 December 2020, 05:54 PM.

