Triple Buffering Likely Not Landing Until GNOME 42


  • #31
    Originally posted by royce View Post

    It increases latency by an extra frame - at 60Hz this is a whole 16.667ms.
    In this particular case?
    IIRC that's only true if every rendered frame has to be displayed; if the consumer always grabs the latest buffer, there should be no added latency.
    I don't know this implementation, but it should be possible to exchange the two back buffers with a simple pointer swap, which would avoid any overhead from copying buffers.

    It goes without saying, but correct me if I'm wrong.
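
    For what it's worth, here's a minimal sketch of the pointer-swap idea (my own illustration, not mutter's actual code; buffer_t and the function name are made up):

    Code:
        /* Two back buffers: one being rendered into, one holding the
         * last complete frame.  Publishing a frame exchanges two
         * pointers -- no pixel data is ever copied. */
        typedef struct {
            void *pixels;   /* backing pixel storage */
        } buffer_t;

        static buffer_t buf_a, buf_b;
        static buffer_t *pending = &buf_a;  /* being rendered into */
        static buffer_t *latest  = &buf_b;  /* last complete frame */

        /* Called when rendering into `pending` finishes.  O(1). */
        static void publish_frame(void)
        {
            buffer_t *tmp = latest;
            latest  = pending;  /* new frame becomes displayable     */
            pending = tmp;      /* old buffer is reused for the next */
        }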

  • #32
    GNOME, the DE that can't be trusted.

  • #33
    As far as I know, KDE also utilizes triple buffering.

    I remember running an experiment a while ago on an nVidia GPU, comparing the behavior of GNOME to KDE. With the nVidia proprietary driver you can watch the GPU's performance level live. So I ran desktop animations on both GNOME and KDE, and noticed that on KDE the performance level rises when an animation starts, whereas on GNOME it stays at the lowest level. At the time I assumed GNOME's bottleneck was the CPU, so the GPU never felt it needed to ramp up its clock.

    Another thing I noticed with GNOME (on Xorg) is that once you connect an external monitor, the frame rate deteriorates noticeably, again without the GPU ramping up. To me this sounds like a vsync timing issue, possibly caused by the CPU being too busy with other things to keep up with vsync. Triple buffering should help in this situation, but dedicating a thread to animations and writing that thread in optimized C might be enough. This is just a blind guess, though, as I haven't looked at the code to make an educated one.
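
    For reference, the performance level can be watched live with something like this (assuming the proprietary driver's nvidia-smi tool; just a thin wrapper around it):

    Code:
        /* Poll the GPU's power state (P0 = max performance ... P8/P12
         * = idle) and utilization once a second while animating. */
        #include <stdlib.h>

        int main(void)
        {
            return system("nvidia-smi --query-gpu=pstate,utilization.gpu"
                          " --format=csv -l 1");
        }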

  • #34
    Originally posted by Alexmitter View Post

    A Mali 400 MP2 is enough; this is rather a workaround for buggy Intel GPU energy management.
    We wouldn't be seeing the same issue when running GNOME on an nVidia GPU if it were an Intel GPU bug.

  • #35
    Originally posted by JackLilhammers View Post

    Forgive my ignorance, what's inherently wrong with triple buffering?
    In this case, you are getting around the GPU not finishing the work on time by... giving it 50% more work to complete in the same time.

    It is a hack designed to force the GPU into a higher power state. It is in a lower power state because it thinks it can complete the work on time at that performance level, but gets it wrong. Kind of like I used to with university assignments.

    The ideal solution would be for the GPU to pick the correct power state for the work it has been allocated. Instead it is tricked into it by being given an increased workload. Kind of like "omg omg I NEED to hand this assignment in tomorrow", only to realize when you go to hand it in that you had put an earlier date in your calendar.

    It works, but not for the right reasons.

  • #36
    Originally posted by You- View Post

    In this case, you are getting around the GPU not finishing the work on time by... giving it 50% more work to complete in the same time.

    It is a hack designed to force the GPU into a higher power state. It is in a lower power state because it thinks it can complete the work on time at that performance level, but gets it wrong. Kind of like I used to with university assignments.

    The ideal solution would be for the GPU to pick the correct power state for the work it has been allocated. Instead it is tricked into it by being given an increased workload. Kind of like "omg omg I NEED to hand this assignment in tomorrow", only to realize when you go to hand it in that you had put an earlier date in your calendar.

    It works, but not for the right reasons.
    Ok, so it's not the triple buffering per se, but its side effect on the GPU.
    Now I'm guessing: does the additional buffer have this effect because it keeps the GPU busy, preventing it from dropping to a lower power state, since with the extra buffer it's always rendering a frame?
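
    If that guess is right, the loop would look roughly like this (a hypothetical sketch; acquire_free_buffer() and the other helpers are invented names, not mutter's API):

    Code:
        typedef struct buffer buffer_t;

        buffer_t *acquire_free_buffer(void); /* NULL if all are in use */
        void render_frame(buffer_t *b);      /* one frame of GPU work  */
        void queue_for_display(buffer_t *b);
        void wait_for_vsync(void);

        void render_loop(void)
        {
            for (;;) {
                buffer_t *target = acquire_free_buffer();
                if (!target) {
                    /* Double buffering lands here every frame: the GPU
                     * idles until vsync frees the buffer, and its power
                     * governor may clock it down. */
                    wait_for_vsync();
                    continue;
                }
                /* With a third buffer there is almost always a free
                 * target, so the next frame starts immediately and the
                 * GPU sees sustained load. */
                render_frame(target);
                queue_for_display(target);
            }
        }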

  • #37
    Originally posted by JackLilhammers View Post

    Forgive my ignorance, what's inherently wrong with triple buffering?
    latency

  • #38
    Originally posted by pal666 View Post

    latency
    That doesn't explain much.
    From what I gather, with a second back buffer the GPU draws as fast as it can and the monitor always shows the last complete frame.
    The worst case happens if a new repaint starts right after one of the buffers has been written.
    In that situation you'd have to wait for the next refresh to display the frame that was being written before, am I right?
    That would introduce one frame of latency, or 16.7 ms at worst on a 60 Hz monitor. On the desktop that shouldn't be very noticeable, or is it?
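
    The arithmetic at least is easy to check, and the worst case shrinks as the refresh rate goes up:

    Code:
        #include <stdio.h>

        int main(void)
        {
            /* Worst-case added latency = one full refresh interval. */
            const double rates_hz[] = { 60.0, 120.0, 144.0 };
            for (int i = 0; i < 3; i++)
                printf("%5.0f Hz -> %6.3f ms\n",
                       rates_hz[i], 1000.0 / rates_hz[i]);
            return 0;   /* 60 Hz -> 16.667 ms */
        }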

  • #39
    Originally posted by You- View Post

    In this case, you are getting around the GPU not finishing the work on time by... giving it 50% more work to complete in the same time.

    It is a hack designed to force the GPU into a higher power state. It is in a lower power state because it thinks it can complete the work on time at that performance level, but gets it wrong. Kind of like I used to with university assignments.

    The ideal solution would be for the GPU to pick the correct power state for the work it has been allocated. Instead it is tricked into it by being given an increased workload. Kind of like "omg omg I NEED to hand this assignment in tomorrow", only to realize when you go to hand it in that you had put an earlier date in your calendar.

    It works, but not for the right reasons.
    What an exquisite description... "kind of like I used to with my university assignments" is too funny, and so true. It could also describe some of the things my (much) better half asks me to do around the house and in the yard. Kind of like the old phrase "I'll do it when I get around to it."

    Off topic, but where I grew up you could actually buy little wooden coins (about the size of a two-euro coin) that said "Round Tuit" on one side and "No Excuses" on the other. My wife pulled one of those on me once. I took the hint, stopped gaming, and started on a yard chore she wanted done. It wasn't as fun as gaming, but it involved a chainsaw, which is fun in itself.
    GOD is REAL unless declared as an INTEGER.

  • #40
    Originally posted by pal666 View Post

    latency
    Correctly implemented triple buffering does not increase latency. Only older versions of DirectX had a stupid implementation that required every rendered frame to be displayed, which increased latency when the frame rate was over 60. This was never an issue with OpenGL.
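
    A sketch of that "always show the newest complete frame" policy (mailbox-style presentation; my illustration, not any particular driver's code, and ignoring the locking a real implementation needs):

    Code:
        typedef struct {
            int frame_id;
        } buffer_t;

        static buffer_t bufs[3];  /* the three buffers        */
        static buffer_t *latest;  /* newest fully rendered frame */

        /* Renderer: publish a finished buffer.  If an earlier frame
         * was never scanned out, it is simply dropped and its buffer
         * reused, so nothing ever queues up behind vsync. */
        void publish(buffer_t *b)
        {
            latest = b;
        }

        /* Display, at each vsync: scan out whatever is newest right
         * now.  Added latency is bounded by one refresh interval no
         * matter how fast the renderer runs. */
        buffer_t *on_vsync(void)
        {
            return latest;
        }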
