Proposed GNOME Patches Would Switch To Triple Buffering When The GPU Is Running Behind

  • Proposed GNOME Patches Would Switch To Triple Buffering When The GPU Is Running Behind

    Phoronix: Proposed GNOME Patches Would Switch To Triple Buffering When The GPU Is Running Behind

    The latest GNOME performance work being explored is effectively how to make the Intel graphics clock speed ramp up more quickly when necessary. Canonical developer Daniel van Vugt is working on a set of patches for enabling triple buffering with Mutter when the GPU starts falling behind; that additional rendering work should in turn ramp Intel GPUs up to their optimal frequency in order to smooth out the performance...
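
    As a rough illustration of the policy being described (this is not Mutter's actual code; the names and structure below are hypothetical): stay with double buffering while frames are presented on time, switch to triple buffering when the GPU falls behind, and drop frames if even three queued frames are not enough. A minimal sketch in C:

    ```c
    /* Hypothetical sketch of the described policy; no Mutter APIs are used. */
    #include <stdbool.h>

    typedef struct {
        int max_frames_in_flight;   /* 2 = double buffering, 3 = triple buffering */
    } BufferingPolicy;

    /* Called once per frame. 'last_frame_late' means the previous frame missed
     * its presentation deadline. Returns true if this frame should be dropped. */
    bool update_buffering(BufferingPolicy *p, bool last_frame_late, int frames_in_flight)
    {
        if (!last_frame_late) {
            p->max_frames_in_flight = 2;   /* GPU keeps up: plain double buffering */
            return false;
        }
        if (p->max_frames_in_flight < 3) {
            p->max_frames_in_flight = 3;   /* falling behind: queue an extra frame,
                                              which also gives the GPU a reason to clock up */
            return false;
        }
        /* Already triple buffered and still late: skip this frame. */
        return frames_in_flight >= p->max_frames_in_flight;
    }
    ```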


  • #2
    I really don't get how you can tax a modern GPU, even if it's an IGP, with composing 2D images (basically no overdraw). 4K at 60 Hz would be about 480 MPixel/s; that's early-2000s levels of performance.
    I'd understand if some windows aren't drawn that quickly, but that's not what this addresses, is it?
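
    (Working that out exactly: 3840 × 2160 × 60 Hz ≈ 498 MPixel/s, and even at 4 bytes per pixel that is only about 2 GB/s of framebuffer writes, a small fraction of what a modern IGP's memory interface can do.)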



    • #3
      Personally, I don't see why composing a 2D desktop, even via an accelerated 3D scene, should strain even the lowest-end GPU running at power-saving clocks. That includes both composing the desktop from the 2D window textures (with borders, shadows, effects and all that) and drawing each window itself as a scene composed of individual elements (a separate task that only needs to be done when the contents change).



      • #4
        Something is wrong with the Intel drivers or with GNOME. Base GPU clocks should normally suffice for running a desktop environment, as they do on Windows and macOS. If you check the Intel GPU clocks under macOS, the GPU runs at its base clock 99.999999% of the time on the desktop, and even most major desktop effects won't change that. No desktop environment should be over-engineered to the point that its measly, barely useful desktop effects tax the GPU. That's why some distros push Xfce more than any other, I suppose.

        Aside from this (totally my personal opinion), I believe that neither GNOME nor KDE is being developed with the best of practices. GNOME has become the Crysis of desktops, whereas KDE has never been finished, and I don't believe I'll ever see a proper stable release that you can upgrade to without breaking your whole OS installation. Both DEs are over-engineered into piles of junk that can't be cleaned up enough.



        • #5
          Giving the GPU more work to do just to get it to ramp up the clock frequency seems to me like the wrong solution to the problem.

          Either the Intel driver needs to be tuned to ramp up the clock earlier, or there should be some mechanism for GNOME to signal to the driver that it needs more speed.

          Kirk: Scotty, I need more power!
          Scotty: I'm giving it all she's got, captain!
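
          For what it's worth, the i915 driver already exposes its frequency limits through sysfs, so a compositor or a user can raise the floor today. A minimal sketch, assuming the Intel GPU is card0, root privileges, and an illustrative 900 MHz floor:

          ```c
          /* Sketch only: raise the i915 minimum GPU frequency via sysfs.
           * Assumes the Intel GPU is /sys/class/drm/card0 and we may write there (root).
           * 900 MHz is just an example value, not a recommendation. */
          #include <stdio.h>

          int main(void)
          {
              FILE *f = fopen("/sys/class/drm/card0/gt_min_freq_mhz", "w");
              if (!f) {
                  perror("gt_min_freq_mhz");
                  return 1;
              }
              fprintf(f, "900\n");   /* keep the GPU clock at or above 900 MHz */
              return fclose(f) == 0 ? 0 : 1;
          }
          ```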



          • #6
            Originally posted by Rob72 View Post
            Giving the GPU more work to do just to get it to ramp up the clock frequency seems to me like the wrong solution to the problem.

            Either the Intel driver needs to be tuned to ramp up the clock earlier, or there should be some mechanism for GNOME to signal to the driver that it needs more speed.

            Kirk: Scotty, I need more power!
            Scotty: I'm giving it all she's got, captain!
            I agree. I've been wondering whether any software does something like a useless infinite-loop calculation in a background thread just to increase the CPU speed, or some nonsense like that. This feels like that kind of bad idea.

            Also, why not just use triple buffering all the time anyway? Android's been doing that since Android 4.2/4.3 and it's worked pretty well so far.




            • #7
              Originally posted by sandy8925 View Post
              Also, why not just use triple buffering all the time anyway? Android's been doing that since Android 4.2/4.3 and it's worked pretty well so far.
              Triple buffering should provide better and more consistent performance and latency overall without increasing the computational load. It just requires a bit more memory for the extra buffer.
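
              For scale: at 3840 × 2160 with a 32-bit pixel format, the third buffer costs about 3840 × 2160 × 4 bytes ≈ 32 MiB per output, which is negligible on a desktop machine.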



              • #8
                Hmm... Quick research on Wikipedia...

                The original Mac in 1984 had 128 KB of RAM and an 8 MHz 16-bit processor.

                ...AND it drew titlebars with minimize icons on its windows.
                Last edited by OneTimeShot; 28 July 2020, 08:29 AM.



                • #9
                  Years ago, back when I gamed, I turned vsync and triple buffering on while turning everything else off, since it solved similar stuttering issues. This was on Windows, with both NVIDIA and AMD graphics and with various game engines. I always suspected the cause was power management switching frequencies up and down and prioritizing throughput over short moments of latency, so I only did it in twitch-reaction games like shooters and fighters rather than RPGs...

                  Good to see it wasn't all in my head.



                  • #10
                    Originally posted by Rob72 View Post
                    Giving the GPU more work to do just to get it to ramp up the clock frequency seems to me like the wrong solution to the problem.
                    It definitely sounds that way at first, but when you read the merge request it actually makes sense. I don't want to repeat it all here, go read the description, but the gist of it is: if the GPU can't keep up while rendering two frames ahead, make it render three frames ahead. If it can't keep up even then, start dropping frames.
                    The interesting question is the design decision whereby a late presentation event offsets all future presentation events... (if I understand the comments correctly).

                    What it will mean for power consumption, and whether there are unused optimization opportunities, are different questions.

                    Originally posted by c117152 View Post
                    Good to see it wasn't all in my head.
                    c117152 Games push frames out as fast as possible; vsync limited the frame rate, giving the GPU potentially more time to render the next frame.
                    Every driver has a way to disable power management. Did you try that?
