KDE KWin's Move Away From GBM Surfaces


  • #71
    Originally posted by Vistaus View Post

    QT_QPA_PLATFORM=wayland → disable if this gives you issues!
    I'm not on Wayland yet so I'll pass on the 2nd, thank you very much!


    • #72
      Originally posted by Jabberwocky View Post
      What do you think? Should devs have been more lenient on the EGLStreams implementation? Should we have asked for more information about how this could have been done differently?
      KDE KWin's lead developer was fairly lenient on EGLStreams. He could not see how to implement it, so he said the Nvidia developers would need to. The Nvidia developer tried with KWin for over a year before having to admit that EGLStreams was fundamentally flawed. Yes, the EGLStreams code ended up removed from KWin with Nvidia's blessing.

      Jabberwocky, basically even Nvidia admits they got EGLStreams wrong. There is something else to be aware of: Xnest under X11 has not worked with GPU acceleration on the Nvidia closed-source drivers for the entire time Nvidia was trying to push EGLStreams. The Nvidia closed-source drivers have long been missing functionality under X11 that you get with every other GPU option.

      The move away from GBM surfaces to the new EGL path is a move to an approach that comes out of Mesa development; it removes a few possible race conditions while still using GBM, just in a different way.

      Remember, the Mesa drivers had Xnest under X11 working with GPU acceleration for a very long time before Wayland existed. So the open-source graphics drivers already had the basic functionality for stacking while keeping acceleration.

      The EGLStreams failure was just the long-term result of Nvidia always wanting to do their own independent solution for OpenGL drivers, one that was missing features all the other solutions had. Yes, Nvidia was not keeping a scorecard of the features the open-source drivers had that their drivers did not. I would say the EGLStreams failure was a wake-up call that Nvidia could not expect the Linux world to just take any form of handout and make do. The handout had to work.


      • #73
        Originally posted by geearf View Post

        I'm not on Wayland yet so I'll pass on the 2nd, thank you very much!
        If you're on Wayland, it's the standard, so it's really not required to set that anyway.


        • #74
          Which version of KWin will include this change?


          • #75
            Originally posted by jorgepl View Post
            Is there no alternative to KMS/DRM in the kernel for explicit sync? If so, I understand that's an area where Linux falls behind Windows and macOS

            Helper to setup the plane_state fence in case it is not set yet. By using this drivers doesn’t need to worry if the user choose implicit or explicit fencing.
            KMS and DRM are adding the means to do both implicit and explicit fencing.

            The graphics drivers included with the Linux kernel are internally explicit fencing. It's the wrappers like KMS/DRM that implement the implicit fencing, so the drivers don't have to. Yes, if no implicit-sync ioctl or syscall is used, the implicit-sync functionality in the Linux kernel does nothing.

            Nvidia doing their own thing means they were not behind the wrappers the Linux kernel provides. The thing to remember is that users are not going to update all their applications any time soon, so trying to demand that everyone change to explicit sync instantly completely failed. There is also a problem with the way explicit sync has been done: in a few cases there is no performance gain, or even equal performance, in the explicit-sync direction; instead there is a performance loss. These explicit-sync problems still need to be addressed. It's the waiting problem.

            This waiting problem is a classic bad penny that keeps turning up. Someone makes a modification that massively reduces the size of the waits, implements it, claims all kinds of gains, and benchmarks it so poorly that they fail to notice they have just reinvented a spinlock. The issue with a spinlock is that you are waiting in userspace instead of going to kernel space to wait, where the CPU resources can be reallocated.

            There is a real, repeated failure to learn this one from history.

            There is another problem: a failure to notice that applications are going to get fencing wrong. Explicit sync will not prevent applications from setting up fences based on existing hardware behavior that will not remain true into the future.


            • #76
              Originally posted by mdedetrich View Post
              There is no point in discussing anything if you use the assumption that it could theoretically be implemented incorrectly, because then you can argue whatever you want. On top of this, you also have it the wrong way around: it's far easier to get things wrong with implicit sync than with explicit, and because of this implicit sync is far more brittle to changes in both the OS and the drivers, since the entire premise of implicit sync rests on very loose assumptions that can change over time (and in the worst case people rely on that loose behaviour, which then prevents you from changing it even when it's arguably broken).
              This is a failure to learn from history. We have decades of implicit sync, and we can see the problem. What stops an application from creating all its explicit-sync fences based on a very loose set of assumptions that in the future stop being true? The answer is nothing.

              Yes, application developers will rely on loose behavior; it is their nature. They test something, it appears to work, they ship it; they don't care whether it works correctly. Applications will do fencing wrong, be it implicit or explicit, and then demand the OS keep it the same.

              Originally posted by mdedetrich View Post
              It's more complex than that; there are also other parts of the stack, like Wayland/XWayland, that need to add support. For example, the Wayland protocol technically has an explicit-sync interface, but no DE/compositor implements it because the rest of the graphics stack is primarily implicit sync.

              This is slowly changing, though; it's already been broadly accepted that the move to explicit sync needs to happen, it will just take time.
              No, this is a failure to accept reality, because reality is horrible.

              While compositor developers have example cases where explicit sync ends up eating more CPU time than implicit sync, people will keep on doing implicit sync in places. There is also the power-efficiency problem: sometimes doing less work matters more than raw speed. The wait problem with explicit sync needs to be fixed so that the kernel waits instead of userspace.

              Applications needing implicit sync are going to be around for decades; they are not going away any time soon. Application developers still, at times, write new OpenGL applications that use implicit sync, because it's good enough for their use case.

              Yes, then you have the failure to learn from our decades of implicit sync. The implicit-sync issue of depending on very loose assumptions that can change over time: explicit sync has almost exactly the same problem. We just have not had decades of explicit-sync usage yet to make this horrible reality clear to us.

              mdedetrich, what is special about an explicit-sync fence that prevents an application developer from presuming it protects something it does not? We have seen exactly this with implicit sync. Yes, at some point, sync behavior with explicit sync will have to be tweaked on a per-application basis, just as has to happen with old implicit-sync applications so they perform better. It is not a question of if this problem will happen, but when.


              • #77
                Originally posted by Berniyh View Post
                Introducing Vulkan rendering does not mean that OpenGL will be dropped. Why does it have to be one or the other?
                afaik, Vulkan support is planned with Plasma 6, but I doubt it's high priority. OpenGL works, after all.
                patrick1946 said move, not use Vulkan as an additional renderer. AFAIK, moving implies dropping the older one.

                Originally posted by Berniyh View Post
                Also, isn't it always that users with older GPUs complain that Plasma/kwin is too resource-heavy anyway.
                So supporting those is just so that they can start up the thing to complain? :P

                Depends on how old or powerful they are; a Radeon HD 6870 doesn't support Vulkan but can run Plasma with no problems.


                • #78
                  Originally posted by shmerl View Post

                  Supporting ancient GPUs is not worth keeping everything else back. Having some legacy branch for them is good enough for those who need it for some reason.
                  That argument is for 20-year-old hardware that was recently dropped and can't run anything modern. 10-year-old hardware isn't that old; even Radeon R600 GPUs were moved to the NIR backend in September 2022, and those are GPUs from 2007.


                  • #79
                    Originally posted by ranixon View Post

                    That argument is for 20-year-old hardware that was recently dropped and can't run anything modern. 10-year-old hardware isn't that old; even Radeon R600 GPUs were moved to the NIR backend in September 2022, and those are GPUs from 2007.
                    I'd say anything that can't run Vulkan today is old enough that it shouldn't hold back general progress; treat it as a legacy case instead. And as above, it's not like KDE plans to completely drop it right away. They can keep that kind of path around as a legacy one, at least as long as it doesn't become a big burden.
                    Last edited by shmerl; 15 March 2023, 09:38 PM.


                    • #80
                      Originally posted by Berniyh View Post
                      If you're on Wayland, it's the standard, so it's really not required to set that anyway.
                      Oh I see thank you!