Proposed GNOME Patches Would Switch To Triple Buffering When The GPU Is Running Behind

  • #11
    Originally posted by Rob72 View Post
    Giving the GPU more work to do just to get it to ramp up the clock frequency seems to me the wrong solution to the problem.

    Either the Intel driver needs to be tuned to ramp up the clock earlier, or there should be some mechanism for GNOME to signal to the driver that it needs more speed.

    Kirk: Scotty, I need more power!
    Scotty: I'm giving it all she's got, captain!
    It is possible to increase the frequency manually: https://manpages.debian.org/stretch/...ency.1.en.html
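    (That tool comes from intel-gpu-tools. As an illustration only: the i915 driver also exposes its clock limits through sysfs, and a small script along these lines could pin the minimum clock higher. The exact paths can vary between kernels, so treat this as a sketch, not a recipe.)

```python
# Sketch (assumption: the classic i915 sysfs knobs under /sys/class/drm/card0):
# raise the GPU's minimum clock so it cannot idle down as far. Needs root.
from pathlib import Path

GT = Path("/sys/class/drm/card0")

def read_mhz(name: str) -> int:
    return int((GT / name).read_text())

def set_min_freq(target_mhz: int) -> None:
    hw_min = read_mhz("gt_RPn_freq_mhz")   # lowest frequency the hardware supports
    hw_max = read_mhz("gt_RP0_freq_mhz")   # highest frequency the hardware supports
    target = max(hw_min, min(target_mhz, hw_max))
    (GT / "gt_min_freq_mhz").write_text(f"{target}\n")

if __name__ == "__main__":
    print("min/max before:", read_mhz("gt_min_freq_mhz"), read_mhz("gt_max_freq_mhz"))
    set_min_freq(600)  # example value only; pick something sane for your chip
    print("min/max after: ", read_mhz("gt_min_freq_mhz"), read_mhz("gt_max_freq_mhz"))
```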



    • #12
      Originally posted by Serafean View Post

      ... If the GPU can't keep up rendering two frames ahead, make it render 3 frames ahead. If it can't keep up even then, start dropping frames.
      FFS, it is a window manager, not a first-person shooter...



      • #13
        Originally posted by caligula View Post
        Triple buffering should provide better / more consistent performance / latencies overall without increasing the computational load. It just requires a bit more memory for the extra buffer.
        Triple buffering is basically rendering all the time and showing only the (theoretically) latest drawn frame. It increases the load on the system as much as turning vsync off does. While it will provide more fps when the GPU is slow, the result will be jittery and the latency has a good chance of being higher. Think about the timings; it should become clearer after a while.

        https://www.youtube.com/watch?v=seyAzw9zEoY Tech Focus - V-Sync: What Is It - And Should You Use It?
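        To make the "render all the time, show only the latest completed frame" description above concrete, here is a rough mailbox-style sketch of how the three buffers rotate (my own illustration, not GNOME's or any driver's actual code):

```python
# Rough sketch of mailbox-style triple buffering: the renderer never waits for
# vblank, and at each refresh the display flips to the newest completed frame,
# silently dropping any frame that was overtaken in the meantime.
class TripleBuffer:
    def __init__(self):
        self.front = 0         # buffer currently being scanned out
        self.back = 1          # buffer the renderer is drawing into
        self.spare = 2         # holds the latest completed frame, if any
        self.has_ready = False

    def render_complete(self):
        # The just-finished back buffer becomes the latest completed frame.
        # If one was already waiting, it is dropped (this is the extra GPU work).
        self.back, self.spare = self.spare, self.back
        self.has_ready = True
        # The renderer immediately starts the next frame in the new back buffer,
        # which is why the GPU ends up busy "all the time".

    def vblank(self):
        # At each display refresh, show the newest completed frame if there is one.
        if self.has_ready:
            self.front, self.spare = self.spare, self.front
            self.has_ready = False
        return self.front
```

        Compare with plain double buffering, where render_complete() would have to block until the vblank hands back a free buffer, which is exactly what leaves the GPU (and its clocks) idle.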




        • #14
          Originally posted by gens View Post

          Triple buffering is basically rendering all the time and showing only the (theoretically) latest drawn frame. It increases the load on the system as much as turning vsync off does. While it will provide more fps when the GPU is slow, the result will be jittery and the latency has a good chance of being higher. Think about the timings; it should become clearer after a while.

          https://www.youtube.com/watch?v=seyAzw9zEoY Tech Focus - V-Sync: What Is It - And Should You Use It?

          https://twitter.com/ID_AA_Carmack/st...11153509249025
          Your links are talking about game performance. On the desktop, vsync is typically turned on, and the jitter comes from the rendering pipeline failing to deliver the next frame on time. You don't have the same issues as in games, since you know beforehand how to render the next few frames; on the desktop most applications produce deterministic output. In a game, the jitter comes from the delay between the game state and what is being rendered on screen, which is why, for example, turning vsync off might be useful there.



          • #15
            Originally posted by discordian View Post
            I really don't get how you can tax a modern GPU, even if it's an IGP, with composing 2D images (basically no overdraw). 4K * 60 Hz would be 480 MPixel/s; that's early-2000s levels of performance.
            I'd understand if some windows aren't drawn that quickly, but that's not what this addresses?
            A big blur, implemented somewhat naively, could probably saturate the memory bandwidth of a modern iGPU, even without other programs using it.

            The problem they are talking about is that the GPU clocks drop, which is a... funny problem. Their solution is not good, as expected.
            I don't know how they draw things (shadows, blurs, and such), but I suspect they don't use all the tricks of the rendering trade and could reduce their rendering time.
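            For a rough sense of scale (back-of-the-envelope numbers with assumed parameters, not measurements), plain compositing really is cheap, while a naive full-screen blur is not:

```python
# Back-of-the-envelope: memory traffic for plain 4K compositing vs. a naive
# full-screen blur. All parameters here are assumptions for illustration.
width, height, hz = 3840, 2160, 60
bytes_per_pixel = 4                              # ARGB8888

pixels_per_second = width * height * hz
print(f"4K @ 60 Hz: {pixels_per_second / 1e6:.0f} MPixel/s")      # ~498

# Simple composite: read one source texel + write one framebuffer pixel.
composite = pixels_per_second * bytes_per_pixel * 2
print(f"simple composite: ~{composite / 1e9:.1f} GB/s")           # ~4 GB/s

# Naive separable blur, 63-tap kernel, two passes: each pass reads `taps`
# texels and writes one pixel per output pixel (ignoring texture caches,
# which cut real DRAM traffic considerably).
taps = 63
blur = pixels_per_second * bytes_per_pixel * (taps + 1) * 2
print(f"naive full-screen blur: ~{blur / 1e9:.0f} GB/s")          # ~255 GB/s
```

            A dual-channel DDR4 iGPU has somewhere around 30-50 GB/s to share with the CPU, so even with caching and a smaller blur radius it is easy to see how effects, rather than plain composition, end up being the expensive part.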



            • #16
              Tongue in cheek... why don't we use a game engine to render the desktop?



              • #17
                Originally posted by CochainComplex View Post
                Tongue in cheek... why don't we use a game engine to render the desktop?
                The desktop is usually faster to render, and people want a pixel-perfect, tear-free, smooth experience with no dropped frames. In advanced game engines, the goal, OTOH, is to provide the best approximation of the game state. Modern games can also take advantage of variable refresh rates, while on the desktop one might want to stick with a constant frame rate for compatibility reasons.



                • #18
                  Originally posted by caligula View Post
                  Your links are talking about game performance. On the desktop, vsync is typically turned on, and the jitter comes from the rendering pipeline failing to deliver the next frame on time. You don't have the same issues as in games, since you know beforehand how to render the next few frames; on the desktop most applications produce deterministic output. In a game, the jitter comes from the delay between the game state and what is being rendered on screen, which is why, for example, turning vsync off might be useful there.
                  How do I explain this to you...
                  In some games the jitter comes from the render->simulate->render loop, but not in all of them, and not in most modern games.
                  The basics: https://gafferongames.com/post/fix_your_timestep/
                  More advanced: https://www.youtube.com/watch?v=_zpS1p0_L_o
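                  For anyone who does not want to click through, the core of the fixed-timestep pattern from the first link looks roughly like this (a generic sketch, not code from the article):

```python
# Generic fixed-timestep sketch: simulate in constant steps, render whenever a
# frame is due, and interpolate between the last two simulation states so the
# picture stays smooth even when render time and simulation rate do not line up.
import time

DT = 1.0 / 120.0                      # fixed simulation step, in seconds

def run(simulate, render, interpolate, max_frame_time=0.25):
    previous = time.perf_counter()
    accumulator = 0.0
    prev_state = state = None         # opaque game states, produced by simulate()

    while True:
        now = time.perf_counter()
        frame_time = min(now - previous, max_frame_time)   # clamp long stalls
        previous = now
        accumulator += frame_time

        # Consume whole fixed steps; any leftover time stays in the accumulator.
        while accumulator >= DT:
            prev_state, state = state, simulate(state, DT)
            accumulator -= DT

        # Blend between the last two states by how far we are into the next step.
        alpha = accumulator / DT
        render(interpolate(prev_state, state, alpha))
```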

                  There are ways to know the hardware timings and adapt for lower latency.
                  A game might do it, and a WM might do it. A game can look at its past frames and predict how long rendering will take; a WM can't, because something else might be burdening the GPU (a game is usually fullscreen and the only thing rendering).
                  My suggestion to everybody is to do your best and assume everything conservatively.

                  Think of time.
                  A 60 Hz display refreshes every 16.66 ms.
                  The worst-case rendering that could "benefit" from triple buffering is 45 fps, that is 22.22 ms per frame.
                  Take a pencil and notepad and do some simulating, like "first frame is dropped, second frame is x ms behind, third frame is x ms behind, fourth frame is dropped", and so on. Do it for, say, 20 ms, 22.22 ms and 25 ms.
                  Then you might realize why it's jittery.
                  Then think about how the commands from the CPU get to the GPU, and then... it's a big topic and there are a lot of resources for you to read. It's also a bit fun, so I do recommend it.
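                  If the pencil gets boring, here is the same exercise in a few lines (my own sketch: a fixed ~16.67 ms refresh, a constant render time per frame, and the simplifying assumption that the renderer never has to wait for a free buffer):

```python
# Sketch of the pencil-and-paper exercise: a 60 Hz display (vblank every ~16.67 ms)
# fed by a renderer that needs a constant render_ms per frame and starts the next
# frame immediately (triple buffering, never waiting for a free buffer).
REFRESH_MS = 1000.0 / 60.0

def simulate(render_ms, frames=10):
    finish = 0.0
    last_flip = None
    for i in range(1, frames + 1):
        finish += render_ms                              # frame i is done here
        flip = -(-finish // REFRESH_MS) * REFRESH_MS     # first vblank after that
        held = None if last_flip is None else flip - last_flip
        note = "" if held is None else f", previous frame stayed up {held:5.2f} ms"
        print(f"frame {i:2}: done {finish:7.2f} ms, on screen at {flip:7.2f} ms{note}")
        last_flip = flip

for ms in (20.0, 22.22, 25.0):
    print(f"\n--- render time {ms} ms ---")
    simulate(ms)
```

                  At 22.22 ms the on-screen times come out as 33.3, 50.0, 66.7, 100.0, 116.7... ms: most frames are held for one refresh, some for two, which is exactly the judder being described.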



                  • #19
                    I hope this doesn't end up being the solution; triple buffering adds so much input lag in my experience.



                    • #20
                      Originally posted by gens View Post
                      How do i explain this to you...
                      In some games..
                      How do I explain this to you: this is NOT about games; desktop != games. You don't need to simulate anything. For example, when playing a video, even in real-time video conferencing, somewhat larger latency is acceptable, and playing back existing video files tolerates as much latency as you want. As a result, the rendering does not need to predict anything.

