KDE's KWin Adds DMA-Fence Deadline Support

  • #11
    Yet another reason why the KDE organization deserves as many donations as possible, and why its fundraiser should be completed successfully:

    In case you weren’t aware, KDE e.V. is doing a membership drive right now, fundraising to strengthen the organization’s financial sustainability. The money goes towards employment for K…


    The KDE organization really moves things forward on the Linux desktop and really cares about quality, the gaming experience, and the video playback experience.

    Too bad that the organization that donated 1 million euros to the GNOME organization didn't donate the same amount to KDE too; even half, a quarter, or a tenth would've been amazing!

    I bet that with enough money KDE could even add a Vulkan back-end and raise performance and efficiency to yet another level!



    • #12
      Why are the other vendors' GPUs immune to this bug?
      Last edited by MorrisS.; 08 December 2023, 12:35 PM.



      • #13
        Wayland in a word.



        • #14
          Originally posted by chromer View Post
          If Intel is culprit, why don't they fix it upstream?
          The problem seems to be that integrated graphics don't always have enough clock headroom to meet frame times when they're basically running idle. This is the proper fix: telling the driver "hey, we need to get this on screen in time".
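
          To make that concrete, here is a toy model (all numbers and function names are made up for illustration, not the actual i915 governor): with a deadline hint, the driver can pick the lowest clock that still gets the frame on screen in time, instead of guessing from utilization, which looks near-zero on an idle desktop.

          ```python
          # Toy model: choosing a GPU clock with vs. without a deadline hint.
          # All numbers are illustrative.

          CLOCKS_MHZ = [300, 600, 900, 1200]   # available clock steps
          WORK_CYCLES = 6_000_000              # cycles needed to render the frame

          def render_time_ms(clock_mhz: int) -> float:
              """Time to finish the frame at a given clock (cycles / frequency)."""
              return WORK_CYCLES / (clock_mhz * 1000)

          def pick_clock_with_deadline(deadline_ms: float) -> int:
              """With a deadline hint, take the lowest clock that still
              finishes the frame before the deadline."""
              for clock in CLOCKS_MHZ:
                  if render_time_ms(clock) <= deadline_ms:
                      return clock
              return CLOCKS_MHZ[-1]            # can't make it; go flat out

          def pick_clock_by_utilization(utilization: float) -> int:
              """Without a deadline, the driver guesses from recent utilization;
              a nearly idle desktop looks like low-power work, so the guess
              is a low clock that may miss the frame."""
              return CLOCKS_MHZ[0] if utilization < 0.5 else CLOCKS_MHZ[-1]

          # A 16.6 ms frame budget on an almost idle desktop:
          print(pick_clock_with_deadline(16.6))   # → 600: meets the deadline
          print(pick_clock_by_utilization(0.05))  # → 300: 20 ms frame, misses vblank
          ```

          The same idea, under a power budget: the deadline lets the driver clock *down* as far as possible without dropping the frame.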



          • #15
            Accompanied by a screenshot of applications that shouldn't require a screen update (GPU work) at all unless there is a user event...



            • #16
              Originally posted by MorrisS. View Post
              Why are the other vendors' GPUs immune to this bug?
              Either they guess correctly, or they actually aren't.

              The thing is, something gives work to the GPU. How is the GPU supposed to know whether that work calls for a low-power mode, a higher-power mode, or needs to be prioritized above everything else? Most of the time it doesn't, so GPUs deploy some kind of heuristic scheduler (e.g. stay in the low-power mode while utilization is below a certain level, clock up when it goes above, or simply race to max clocks, finish the work, and go idle afterwards). In general, though, GPUs are guessing, and they guess badly, because they can't look into the command queue and see "hey, these 2-3 tasks need to be completed before some task in the future" (actually they can in Vulkan; Vulkan relies on fences a lot).

              Linux right now is plagued by a lot of bugs like that. For example, NVIDIA using NVDEC on Linux cannot tell how fast it needs to go, so it jumps to a higher power mode, making VA-API over NVDEC rather inefficient. Most of the time, GPU makers picked the option "let's be power inefficient" rather than "let's make the user experience suck".

              Part of the problem is that implicit sync and the old way of dealing with graphics (the OpenGL way, with one single command queue that ends in a flush to empty it before building a new command buffer) just... doesn't work.
              Last edited by piotrj3; 07 December 2023, 07:45 PM.
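
              The guessing described above can be sketched as a toy governor (hypothetical thresholds and clocks): it only sees utilization it has already observed, not the queue ahead, so it is always one step behind the workload.

              ```python
              # Toy utilization governor: stay at a low clock below a threshold,
              # clock up above it. It decides from the *previous* step's
              # utilization, so it reacts one step late. Numbers are illustrative.

              LOW_MHZ, HIGH_MHZ = 300, 1200
              THRESHOLD = 0.5

              def governor(utilization_history):
                  """Return the clock chosen at each step."""
                  clocks = []
                  clock = LOW_MHZ
                  for u in utilization_history:
                      clocks.append(clock)   # decision was made before seeing u
                      clock = HIGH_MHZ if u >= THRESHOLD else LOW_MHZ
                  return clocks

              # A burst of work after an idle period: the governor is still at
              # the low clock when the burst arrives, and still at the high
              # clock one step after the burst is gone.
              print(governor([0.1, 0.1, 0.9, 0.9, 0.1]))
              # → [300, 300, 300, 1200, 1200]
              ```

              A fence deadline sidesteps the guess entirely: instead of inferring urgency from past utilization, the producer tells the driver when the result is needed.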



              • #17
                Originally posted by MorrisS. View Post
                Why are the other vendors' GPUs immune to this bug?
                I'm guessing they aren't. It just happens that Intel has the slowest GPUs (IGPs), so they're more visibly affected.



                • #18
                  Here is the related commit: https://github.com/KDE/kwin/commit/4...f8bd97a0590f43



                  • #19
                    Is GNOME's triple buffering a different solution for the same problem, or are these two changes unrelated?



                    • #20
                      Originally posted by Leinad View Post
                      Is GNOME's triple buffering a different solution for the same problem, or are these two changes unrelated?
                      I think triple buffering addresses a different aspect of the same problem. Triple buffering is about rendering things in advance when you have the GPU power to do so, whereas DMA-fence deadline support is meant to let you slow down the GPU, but not so much that it can't still render what it needs to. I can't be more specific than that; it's been like two decades since I poked my nose into GFX APIs.
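
                      A rough way to see the difference (a toy model, not how GNOME or KWin actually implement anything): triple buffering adds a spare buffer so one slow frame doesn't stall the pipeline, while a deadline changes how fast the current frame is rendered in the first place.

                      ```python
                      # Toy frame-pacing model with a 16.6 ms vblank interval.
                      # With N buffers we can run up to N-1 frames ahead, so the
                      # deadline for frame i is (i + N - 1) vblanks. A frame that
                      # overruns one vblank makes double buffering miss flips;
                      # the spare buffer of triple buffering absorbs it.
                      # Numbers are illustrative.

                      VBLANK_MS = 16.6

                      def flips_missed(render_times_ms, buffers):
                          missed = 0
                          finish = 0.0
                          for i, t in enumerate(render_times_ms):
                              # can't start frame i before the previous frame is
                              # done, nor before its buffer slot comes around
                              finish = max(finish, i * VBLANK_MS) + t
                              deadline = (i + buffers - 1) * VBLANK_MS
                              if finish > deadline:
                                  missed += 1
                          return missed

                      slow_spike = [10, 25, 10, 10]   # one frame overruns the vblank
                      print(flips_missed(slow_spike, buffers=2))   # double: misses flips
                      print(flips_missed(slow_spike, buffers=3))   # triple: absorbs the spike
                      ```

                      In this toy, triple buffering tolerates a frame that was already too slow, whereas a deadline (as in the clock-selection sketch earlier in the thread) tries to make sure the frame isn't too slow to begin with.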

