Wayland Protocols 1.30 Introduces New Protocol To Allow Screen Tearing


  • Originally posted by Quackdoc View Post

    It does add latency, but only ever 1 frame of latency in the worst case. The latency will always be worse than flipping the currently drawn frame to the screen immediately. But like I said, at worst this means 1 frame of latency.
    Actually... yes, guaranteed latency of max 1 frame.

    Drawing with no buffering means the game will draw 1 frame, update the loop, draw the next frame, and so on, all the while producing an unstable flickering mess. I have done that in my early attempts at game development, actually. So by definition latency there can be as fast as you can get it.

    With double buffering you are guaranteed a worst case latency of two refresh intervals, or ~33.33 ms at 60 Hz - provided the game loop runs that fast.

    With triple buffering you are guaranteed a worst case latency of REFRESH_PERIOD * (SCREEN_FREQUENCY / YOUR_FPS) ms. That is, with a refresh period of ~16.67 ms (60 Hz) and 300 FPS, you will get 16.67 * (60 / 300) = 16.67 * 0.2 = ~3.3 ms before the screen starts drawing. Add whatever it takes to scan out a full screen (10 ms?) and...

    Here is a quickly drawn scheduling trace of how things can turn out. Let us assume the screen is 60 Hz, the screen takes 10 ms to draw (SLOOOOOOOW), and the game loop draws frames in 6-11 ms (average 8 ms), giving 91 / 102 / 125 / 167 FPS (0.1% low / 1% low / average / max):

    Code:
    1-------2----------------1---------------2----------------1----------------2------ ... Screen buffer
    --------|----------------|---------------|----------------|----------------|------ ... Screen updates

    2--------3----------1---------3-----2-------3------1--------3--------2---------3-- ... Ready buffer
    |--------|----------|---------|-----|-------|------|--------|--------|---------|-- ... Game updates
    3--------1----------3---------2-----3-------1------3--------2--------3---------1-- ... Active buffer

    As can be seen here, the biggest latency you get in this example is 8 ms before drawing starts, with triple buffering, but latency gets as low as 4 ms. This gives a real latency of around 14-18 ms with triple buffering, since the screen scans out in 10 ms. Note that the game updates have irregular periods. In theory the biggest latency this setup can have is 21 ms.
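
    To make the arithmetic above easy to poke at, here is a minimal C sketch of the same back-of-the-envelope numbers; the 60 Hz, 10 ms scanout and 300 FPS figures are just the assumptions from this post, not measurements:

    Code:
    /* Back-of-the-envelope latency numbers from the post above (all in ms). */
    #include <stdio.h>

    int main(void)
    {
        const double screen_hz  = 60.0;
        const double refresh_ms = 1000.0 / screen_hz;  /* ~16.67 ms per refresh   */
        const double scanout_ms = 10.0;                /* time to draw the screen */
        const double fps        = 300.0;               /* game frame rate         */

        /* Double buffering: worst case is two refresh intervals. */
        double double_buf_worst = 2.0 * refresh_ms;

        /* Triple buffering: worst case ~ REFRESH_PERIOD * (SCREEN_FREQUENCY / FPS),
         * i.e. roughly one game frame, before scanout starts. */
        double triple_buf_wait = refresh_ms * (screen_hz / fps);

        printf("double buffering worst case: %.2f ms\n", double_buf_worst);
        printf("triple buffering wait:       %.2f ms (+ %.0f ms scanout = %.2f ms)\n",
               triple_buf_wait, scanout_ms, triple_buf_wait + scanout_ms);
        return 0;
    }

    With the numbers from this post it prints ~33.3 ms for double buffering and ~3.3 ms plus scanout for triple buffering.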
    Last edited by wertigon; 24 November 2022, 03:56 AM.


    • Thanks for all the insight. So yes, triple buffering adds a "controllable" amount of latency (less than vsync!), which is why I think it is the best compromise (it's like AMD Enhanced Sync on Windows if I'm not mistaken).

      And do all Wayland compositors use triple buffering? I was under the impression that vsync is the default, and for triple buffering you have to do extra work (like Canonical with GNOME).


      • Originally posted by mdedetrich View Post
        Good to know, I read in the early days when it came out that it did add some negligible input lag
        With tearing, one part of the screen gets updated sooner than the rest. I guess latency is lower in that respect. But what is it worth, given the obvious drawbacks?
        Then some devices (mostly TVs?) might screw up VRR latency with their internal processing. But as the test above shows, this shouldn't be a limitation of VRR itself.

        Originally posted by mdedetrich View Post
        but I didn't look further into it. This makes VSync comparatively look even worse then
        At least vsync is a good hardware fps limiter fallback for VRR (also shown by that video above).
        Funny how I enable vsync in any game with VRR due to this, whereas I avoided it at all costs when I had no VRR.

        Someone really needs to have the reaction time of a sloth if they can't notice vsync's atrocious lag and irregular frame presentation when fps drops. I really hope they don't drive cars.


        • Originally posted by wertigon View Post

          Actually... yes, guaranteed latency of max 1 frame.

          [...]
          This is more complicated.

          There are 2 cases: when the frame rate is faster than the refresh rate, and when the frame rate is lower than the refresh rate.

          First, the original Vsync implementation with triple buffering was actually a queue of 3 frames, and the 3rd frame was only produced when a slot in the queue opened up. So the full latency was the time to produce the current frame + 2 * the refresh period. That is why Vsync got such a bad reputation.

          What changed is that stuff like Nvidia Fast Sync or AMD Enhanced Sync (something that was later also implemented in some engines) is a different idea:

          Where V-Sync works to prevent tearing by waiting to flip the front and back buffers until your monitor is ready for the next frame (a process that can introduce a lot of input lag), Fast Sync works by constantly rendering frames and displaying the most recent frame in time with the monitor’s refresh rate. In other words: V-Sync slows down your computer’s internal framerate to match your monitor’s refresh rate, while Fast Sync accelerates the internal framerate so there’s always a frame available when the monitor’s ready for the next one.
          But this behaviour is only good for games where you don't care about power efficiency. Fast Sync/Enhanced Sync dropping the frames that exceed your refresh rate means you essentially do work for frames that aren't even displayed. And this is the case where "at worst" 1 additional frame of latency applies. If you like your desktop consuming 3 times as much power, it is a horrible idea (and in fact this is Nvidia's recommendation for the lowest latency possible with Fast Sync: produce 3 times the frames).
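
          For what it's worth, this distinction maps directly onto swapchain present modes in modern APIs. A minimal Vulkan sketch of choosing between the two behaviours (my own illustration of the idea, not how Nvidia/AMD actually implement it):

          Code:
          /* FIFO    ~ classic queued vsync: frames wait in a queue (more latency).
           * MAILBOX ~ Fast Sync / Enhanced Sync style: the newest frame replaces the
           *           pending one, so no tearing, but dropped frames still cost GPU power.
           * Error handling omitted; `phys` and `surface` are assumed to already exist. */
          #include <vulkan/vulkan.h>

          VkPresentModeKHR pick_present_mode(VkPhysicalDevice phys, VkSurfaceKHR surface,
                                             int prefer_low_latency)
          {
              uint32_t count = 0;
              vkGetPhysicalDeviceSurfacePresentModesKHR(phys, surface, &count, NULL);

              VkPresentModeKHR modes[16];
              if (count > 16)
                  count = 16;
              vkGetPhysicalDeviceSurfacePresentModesKHR(phys, surface, &count, modes);

              if (prefer_low_latency) {
                  for (uint32_t i = 0; i < count; i++)
                      if (modes[i] == VK_PRESENT_MODE_MAILBOX_KHR)
                          return modes[i];        /* Fast Sync-like behaviour */
              }
              return VK_PRESENT_MODE_FIFO_KHR;    /* classic vsync, always available */
          }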

          Where the real pain is, is when you use Vsync with a frame rate below the refresh rate of your monitor. At that point you either have a triple buffer with the full latency of queueing 3 buffers, or 2 buffers that constantly swap between the full refresh rate and half the refresh rate, giving the full pain of stutters and inconsistent fluidity. Vsync is not a problem when you exceed the refresh rate (and not a problem at all if you exceed it by a ton). Vsync is a problem when your frame rate is sometimes below the refresh rate. At that point latency issues and inconsistency kick in.
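
          To put numbers on that double-buffer case, a tiny C simulation under assumed figures (strict double-buffered vsync at 60 Hz, 20 ms per frame); presentation snaps to 30 fps even though the GPU could manage 50:

          Code:
          /* Strict double-buffered vsync at 60 Hz with 20 ms frames: every frame
           * misses a vblank and waits for the next one, so the display interval
           * snaps to 33.3 ms (30 fps). */
          #include <stdio.h>
          #include <math.h>

          int main(void)
          {
              const double refresh_ms = 1000.0 / 60.0;  /* ~16.67 ms */
              const double frame_ms   = 20.0;           /* render time per frame */
              double t = 0.0, last_present = 0.0;

              for (int i = 0; i < 5; i++) {
                  t += frame_ms;                                      /* frame finishes */
                  double present = ceil(t / refresh_ms) * refresh_ms; /* next vblank    */
                  printf("frame %d: ready %5.1f ms, shown %5.1f ms (interval %.1f ms)\n",
                         i, t, present, present - last_present);
                  last_present = present;
                  t = present;  /* can't start the next frame until the buffers flip */
              }
              return 0;
          }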

          Fast Sync/Enhanced Sync is not viable for your laptop or everyday use, or you will see battery life disappearing a lot faster than it should. And with classic Vsync, yes, you have input lag.
          Last edited by piotrj3; 24 November 2022, 05:27 PM.


          • Originally posted by Weasel View Post
            Yeah, except they can be literally blind. Some people think 30 fps is fine... until they game at 60 fps for a while.
            You think that none of the people who game at 30fps have experienced 60fps? 60fps isn't uncommon and was the norm on older systems before 30 or less became the norm for early 3D games.

            I've played Breath of the Wild at 30fps and 60fps. While I prefer 60fps, I've had no issues playing it at 30fps. The original FF7 had its battles play out at 15 fps while the menus updated at 30 or 60, and it doesn't affect your ability to play because you aren't directly controlling the characters. While running it at 30 or 60fps would be an improvement to smoothness, it's not going to make any large difference to someone's ability to play a game like that. I would be happy for it to run at 24 fps with motion blur.

            For an FPS or anything first person like VR, higher frame rates have more obvious advantages since there are larger changes in the position of things from frame to frame. Higher frame rates in first person games reduce the chance of nausea while improving reaction time.

            Originally posted by Weasel View Post
            Heck, play on a 240hz monitor (with 240 fps, obviously) for a couple days and you won't be able to go back to 60hz because it will feel sluggish and stuttery. And this isn't even counting vsync, I'm even talking 60hz with vsync turned off. If you add vsync on top, oh boy oh boy!
            I'd easily be able to go back to a 60hz display, trust me lol. Hell, my phone has a 90hz display and it's great, but it's not a make-or-break feature.


            • Originally posted by CochainComplex View Post
              The ~24-25 fps mark in movies was also an economical decision. It was enough to give your brain an almost fluent picture stream, but not to an optimal extent. But adding more frames also means more roll material (film stock). Earlier movies had far fewer frames, making them appear "stuttery", but it is basically still perceived as motion.
              Even if there were no economical reason to shoot 24fps over 30 or higher, people would still shoot 24. Back in the early 2000s there were no video cameras that shot 24 fps below something like $100,000. If you were a low budget filmmaker and couldn't shoot film, you needed to shoot on video cameras that might have only shot 60i, and people would go through the effort of converting that footage to 24p, even if they weren't intending to transfer to film, because the motion was preferable. I remember wanting to buy a PAL camera just so I could convert to 24p more easily while getting some additional resolution.

              Higher frame rates tend to make movies look cheaper and more "video like" so you're adding cost, reducing light, and it looks worse. It's also described as "the soap opera effect". I imagine a lot of it has to do with the lower frequency of images being more relaxing and dream-like to our brains while high frequencies keep us more alert. I can imagine a future where films become mixed-frame rate but in most scenarios, 24fps and maybe even slightly lower will still be king.


              • Originally posted by piotrj3 View Post

                This is more complicated.

                There are 2 cases: when the frame rate is faster than the refresh rate, and when the frame rate is lower than the refresh rate.

                [...]

                this behaviour is only good for games where you don't care about power efficiency.

                Of course. I was merely pointing out how triple buffering helps reduce latency - when this is desirable. As you point out, making a game go BRRRRRRR does increase the power consumption by quite a bit. Shocking, yes, I know. Low latency means high power consumption, and you can choose one or the other.

                If the game / app is slower than the screen refresh rate, then triple buffering will be just as effective as double buffering; essentially you will never utilize the third buffer. So the only cost in that case is the ~32 MB of extra RAM (for 4K) the third buffer occupies on the video card. This increases to roughly 512 MB for 16K; 32K and beyond is far past diminishing returns. Remember that all the triple buffering happens on the GPU.
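
                As a sanity check on those numbers, a minimal C sketch of the arithmetic, assuming a plain 4-bytes-per-pixel buffer and ignoring any driver padding or tiling overhead:

                Code:
                /* Size of one extra framebuffer at 4 bytes per pixel (8-bit RGBA/XRGB). */
                #include <stdio.h>

                static double buffer_mib(unsigned w, unsigned h, unsigned bpp)
                {
                    return (double)w * h * bpp / (1024.0 * 1024.0);
                }

                int main(void)
                {
                    printf("4K  (3840 x 2160):  %.1f MiB\n", buffer_mib(3840, 2160, 4));   /* ~31.6 MiB */
                    printf("16K (15360 x 8640): %.1f MiB\n", buffer_mib(15360, 8640, 4));  /* ~506 MiB  */
                    return 0;
                }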


                • Regarding power consumption: triple buffering + frame limiter (e.g. libstrangle or gamescope) is a fantastic way to go then.

                  I myself have a 60hz screen, so I usually limit to twice that (120fps) and force triple buffering in xorg.conf to have no tearing, butter-smooth gameplay and not too much power draw (and thus no coil whine, etc.). It's like Radeon Enhanced Sync + Radeon Chill on Windows.
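
                  For reference, a rough sketch of what that setup can look like. The xorg.conf option is driver-dependent (the NVIDIA and legacy Intel DDX drivers expose a TripleBuffer option; other drivers differ), and the frame limiter line assumes libstrangle's usual strangle <fps> <command> wrapper:

                  Code:
                  # /etc/X11/xorg.conf.d/20-gpu.conf -- only for drivers that expose a
                  # TripleBuffer option (e.g. NVIDIA, legacy Intel DDX); adjust for yours
                  Section "Device"
                      Identifier "GPU0"
                      Option     "TripleBuffer" "True"
                  EndSection

                  # Cap the game at 120 fps with libstrangle
                  strangle 120 ./some-game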


                  • Originally posted by piotrj3 View Post

                    This is more complicated.

                    There are 2 cases: when the frame rate is faster than the refresh rate, and when the frame rate is lower than the refresh rate.

                    First, the original Vsync implementation with triple buffering was actually a queue of 3 frames, and the 3rd frame was only produced when a slot in the queue opened up. So the full latency was the time to produce the current frame + 2 * the refresh period. That is why Vsync got such a bad reputation.
                    Yeah, I have no idea why people are claiming that the cost of VSync is just a single frame of latency; it's extremely clear it's not. There has been so much testing on this, and VSync does add a non-trivial amount of latency. Even the video posted earlier shows just how bad the latency for VSync is.


                    • Originally posted by mdedetrich View Post

                      Yeah, I have no idea why people are claiming that the cost of VSync is just a single frame of latency; it's extremely clear it's not. There has been so much testing on this, and VSync does add a non-trivial amount of latency. Even the video posted earlier shows just how bad the latency for VSync is.
                      Vsync is a broad term; the vsync that Wayland forces is triple-buffered vsync specifically, which should only ever add a single frame of latency. Not sure what the video is specifically talking about, but either they are talking about the typical vsync most games implement, the tested applications were implemented poorly, or the testing methodology is wrong.
