wlroots Merges Wayland Tearing Control Support


  • #11
    Originally posted by Weasel View Post
    In fact, it will increase latency, unless you're very fps bound and can't keep up with the monitor refresh rate, since it will draw a frame ahead and then wait.
    Triple buffering can reduce latency in particular cases, and dynamic triple buffering does make sense with tear-free applications. An application may have what it believes is the next frame ready while vsync is still a while off, but it cannot be sure there is enough time to render another frame before then. Dynamic triple buffering lets the application attempt to render a newer frame even though it already has a frame queued for output; if the newer frame is ready before scanout, it gets displayed instead.

    With double buffering you have the frame being displayed and a single frame to work on, and once that next frame is finished you are stuck waiting for the swap.

    Triple buffering has some variations:
    1) frame being displayed,
    2) next frame waiting to be displayed,
    3) frame being worked on.

    In some implementations, once the frame being worked on is complete it can replace the next frame waiting to be displayed, and rendering starts again, as sketched below.
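
    To make that variant concrete, here is a minimal C sketch of the bookkeeping; the buffer indices (0..2) and the two hooks are hypothetical stand-ins, not any real swapchain API:

    /* Three buffers (0..2) rotate between three roles; assume the
     * initial state is displayed = 0, pending = -1, drawing = 1. */
    struct swapchain {
        int displayed; /* buffer currently being scanned out */
        int pending;   /* completed frame waiting for vblank, or -1 */
        int drawing;   /* buffer the renderer is working on */
    };

    /* Assumed to be called when the renderer finishes a frame. */
    void frame_done(struct swapchain *sc)
    {
        /* The new frame replaces any pending one, and the freed buffer
         * is drawn into immediately instead of the renderer stalling
         * until the next vblank. */
        int old_pending = sc->pending;
        sc->pending = sc->drawing;
        sc->drawing = (old_pending >= 0) ? old_pending
                                         : 3 - sc->displayed - sc->pending;
    }

    /* Assumed to be called at vblank: latch the newest completed frame. */
    void vblank(struct swapchain *sc)
    {
        if (sc->pending >= 0) {
            sc->displayed = sc->pending;
            sc->pending = -1;
        }
    }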

    Originally posted by Weasel View Post
    hey don't do that, and let's be honest, people who praise triple buffering always do so with Vsync which caps your freaking fps.
    This is not the case; there is a particular case that is simple to miss. Those wanting triple buffering for Intel graphics in fact need it even with VRR when the Intel hardware is in power-saving mode. That is all due to how an Intel GPU in powersave calculates GPU usage when setting clocks: it does not count stalls waiting on buffer swaps as usage, so if you double buffer and wait for a frame swap before starting to render again, you find yourself running out of GPU cycles and not completing frames.

    Yes, Intel powersave kicks your ass with or without vsync enabled when you don't use triple buffering. And when I say kicks your ass: without triple buffering in powersave you can end up with the same frame rendered to screen three times with vsync, or a frame rendered twice with VRR. Intel hardware quirks can be very horrible.

    Intel GPU users make up 70-80 percent of the PCs in existence. Gamers have always said to buy some other GPU because the Intel GPU is useless, and never properly researched why. This Intel behavior, where powersave means the Intel GPU really needs triple buffering to work correctly, is not new; you first find it with the Intel GPUs that shipped as cards back in the 1990s, so it is an over-two-decades-old problem that someone only correctly documented recently.

    Yes, the increased GPU utilization of triple buffering on an Intel GPU does slightly raise clocks in powersave, but if you look at the number of clock cycles that gains, it is still not enough if you also have to sit waiting around for a frame swap as you do when double buffering. Another thing people have incorrectly said is that the increased clocks are what fixes the problem, when what actually fixes it is that dynamic triple buffering never sits around waiting while there are GPU cycles to use.

    Yes, the Intel GPU powersave logic of "if I am idle for any reason, I can lower the clock speed" is a nasty one. The counter to this logic is triple buffering of some form, where you avoid giving the GPU a break from processing so it never gets to be idle unless there is really nothing left to do.


    • #12
      Originally posted by binarybanana View Post
      Such a bad faith name. No one actually wants to "enable tearing". What people want is to lower input latency which can be achieved by disabling vsync. The tearing is just a side-effect.
      Something fun: hardware vsync is never actually disabled on modern hardware. Wayland tearing control allows a per-window choice on tearing or not.

      Tearing is a side effect of allowing a buffer change while scanout is happening, and this can be allowed even with vsync on.

      There was an extension proposed back in the day by an Intel graphics developer for X11, which never made it mainline, that would have offered the same feature of vsync on with tearing allowed, so that some windows on screen could tear while others stayed tearing-free.

      The hardware level and the software level have not really been in feature alignment on vsync for decades now. People have presumed that what is a software limitation of vsync is how vsync has to function.

      The concept of updating buffers mid-scanout, and so generating tearing with vsync enabled, is something all modern GPUs do in fact support; the Windows and X11 software implementations of vsync just don't allow it.

      The hard reality here is that latency is not linked to vsync at the hardware level either. "Disable vsync to get better performance" has really meant disabling both vsync and the locks that prevent you from changing buffers while they are being written out.

      GPU locks can be per buffer. With the current "vsync off" on Windows and X11 you are turning off the late-update lock on the output buffer globally, and that is not something GPU hardware mandates you do. Instead you can turn off the late-update lock on just an application's GPU buffers and get the same latency reduction by allowing tearing; of course, you need software controls that allow this. That is the problem: under Windows and X11 there have been no controls to turn off the buffer locks in a correct per-application, per-GPU-buffer way, even though the hardware supports doing it.
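
      At the Linux kernel level you can see that whether a flip may tear really is a per-flip flag rather than a global output mode. A minimal sketch against the libdrm API, assuming fd, crtc_id and the framebuffer ids are already set up elsewhere:

      #include <stdint.h>
      #include <xf86drm.h>
      #include <xf86drmMode.h>

      /* Queue a flip that waits for vblank: this buffer cannot tear. */
      int flip_synced(int fd, uint32_t crtc_id, uint32_t fb_id, void *data)
      {
          return drmModePageFlip(fd, crtc_id, fb_id,
                                 DRM_MODE_PAGE_FLIP_EVENT, data);
      }

      /* Queue an immediate flip: this buffer may change mid-scanout and
       * tear, while other buffers on the same output stay synced. */
      int flip_async(int fd, uint32_t crtc_id, uint32_t fb_id, void *data)
      {
          return drmModePageFlip(fd, crtc_id, fb_id,
                                 DRM_MODE_PAGE_FLIP_EVENT |
                                 DRM_MODE_PAGE_FLIP_ASYNC, data);
      }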


      • #13
        Originally posted by Hans Bull View Post
        Too bad that KWinFT did not gain more traction.
        romangg may reply about it.


        • #14
          Gamescope (running on a tty) has a massive advantage with this feature, because not only are you saving vsync latency, you are also bypassing the desktop compositor. Nvidia published a cool graph about this when they released Reflex.

          Obviously all of this is meaningless unless you have your FPS limited to the refresh rate of your monitor. For example, for a 60 Hz monitor you want a frametime value of 16.7 ms, because 16.7*60 = 1002 ms (about 1 second).

          You could go further and consider your monitor latency too. Assuming in the previous example your monitor latency is 4.5 ms, you could subtract that from your frame budget: 16.7 - 4.5 = 12.2 ms. To achieve that frametime we can limit the FPS to 82, since 12.2*82 = 1000.4 ms, which is the closest value to 1000.

          Gaming keyboards and mice have extremely low latency, so we don't actually need to consider them.
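
          A tiny C helper showing that arithmetic, with the 60 Hz refresh and the 4.5 ms monitor latency as assumed example inputs:

          #include <stdio.h>

          int main(void)
          {
              double refresh_hz = 60.0;            /* example monitor */
              double monitor_latency_ms = 4.5;     /* assumed measurement */
              double budget_ms = 1000.0 / refresh_hz;            /* ~16.7 ms */
              double target_ms = budget_ms - monitor_latency_ms; /* ~12.2 ms */
              double fps_cap   = 1000.0 / target_ms;             /* ~82 FPS  */
              printf("budget %.1f ms, target %.1f ms, cap %.0f FPS\n",
                     budget_ms, target_ms, fps_cap);
              return 0;
          }
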
          Last edited by Zeioth; 30 September 2023, 08:23 PM.


          • #15
            you are also bypassing the desktop compositor
            Note that this is true for any compositor that supports direct scanout of the specific surfaces.


            • #16
              Originally posted by Quackdoc View Post

              how is it bad faith? you are literally enabling tearing, on a mailbox/triple buffer setup, your latency is 1 frame of whatever the monitors cycle is. the only saved latency is during the tearing. hence enable tearing
              Because the point is not to "enable tearing" (no one likes or wants tearing), but to disable vsync for lower latency.

              oiaohm
              I guess if you consider this from the angle of changing the front buffer to update a window after a flip (while it's being scanned out) it kinda sorta makes sense logically, but it still smells gaslighty to me. No one wants tearing per se. People want rendering and presentation without delays. Maybe the naming is just unfortunate, but considering Wayland dev's opinions on this in the past it seems more like some passive aggressive insult.
              Last edited by binarybanana; 02 October 2023, 04:56 AM.


              • #17
                Originally posted by binarybanana View Post

                Because the point is not to "enable tearing" (no one likes or wants tearing), but to disable vsync for lower latency.
                the point IS to enable tearing. You ONLY get the extra latency reduction during the tear (assuming the compositor is handling it properly); if the screen isn't tearing, i.e. with VRR solutions, there is no additional latency benefit. The tearing in this case is decidedly the feature, not a detriment.
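
                To put an illustrative number on that (assuming a 60 Hz panel): a full scanout takes 1000/60 ≈ 16.7 ms, so with a mailbox/triple-buffer setup a completed frame can wait up to 16.7 ms for the next vblank. If an async flip instead lands while scanout is, say, halfway down the screen, everything below the tear line is up to ~8.3 ms fresher; that saving exists only where the tear happens.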


                • #18
                  Originally posted by binarybanana View Post
                  Because the point is not to "enable tearing" (no one likes or wants tearing), but to disable vsync for lower latency.
                  There is a question here I will get to.

                  Originally posted by binarybanana View Post
                  I guess if you consider this from the angle of changing the front buffer to update a window after a flip (while it's being scanned out) it kinda sorta makes sense logically, but it still smells gaslighty to me. No one wants tearing per se. People want rendering and presentation without delays. Maybe the naming is just unfortunate, but considering Wayland dev's opinions on this in the past it seems more like some passive aggressive insult.
                  And you repeat it again: people want presentation without delays.

                  The Wayland developers' "every frame is perfect" idea came from two beliefs: that in hardware you really cannot turn off vsync, and that GPUs would get so fast at rendering a frame that the delay would not be a problem. Yes, GPUs got more powerful, but we now want more complex output, which made the second part of their plan not work.

                  Let's look at a game screen: you have the action and the HUD, the HUD displaying information like your health and so on. Disable vsync the older way for lower latency and both the HUD and the action can tear. Under these Wayland tearing controls, the HUD can stay vsynced with no tearing while the action tears as required.

                  Something people have not noticed is how, in particular games, you can take a screen capture with a tear line and see the mouse pointer sitting across that tear line absolutely perfect, even though the mouse was moving at the time. The mouse pointer was sitting on a buffer that was vsynced even though the game had vsync disabled, in lots of these cases.

                  GPUs have had the feature for quite some time to overlay buffers with tearing and buffers without tearing on the same screen output, in hardware. What has been lacking is a good interface to use this feature from the software side.

                  This has been the problem: the Wayland developers were right and wrong at the same time.

                  The early Wayland developers, some of them being GPU engineers, were absolutely right that you should not need to turn off vsync globally. But they were not right that GPUs would get fast enough that tearing would become unnecessary.

                  Game developers and everyone else were also missing that GPU hardware does have the means to do vsynced and non-vsynced at the same time, by setting flags on GPU buffers, and that DirectX, OpenGL and Vulkan have not been exposing this functionality.

                  Stupid as it sounds, a vsynced buffer alongside async buffers has for over 20 years only been used for mouse cursors. This is a level of "doh", and that level of "doh" is what makes arguing to turn vsync off seem so stupid to those who designed the GPU hardware, while they in turn failed to express what the problem was.

                  There is a very strict reason why the Wayland tearing control protocol is written this way:
                  vsync (0), tearing-free presentation: The content of this surface is meant to be synchronized to the vertical blanking period. This should not result in visible tearing and may result in a delay before a surface commit is presented.
                  async (1), asynchronous presentation: The content of this surface is meant to be presented with minimal latency and tearing is acceptable.

                  Note it says "this surface", not "this output". With a modern GPU (as in a GPU built from 2000 onward, so not that modern) you are allowed to stack output buffers/surfaces tagged vsync with other buffers/surfaces tagged async, and have them all merge at scanout behaving exactly how you would want. This is not the "vsync on" or "vsync off" (async) people know; it is something in the middle that does not cost any more GPU time or latency than just merging buffers.
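
                  As a minimal sketch of what that per-surface choice looks like for a client, assuming the wayland-scanner-generated header for tearing-control-v1 and a wp_tearing_control_manager_v1 already bound from the registry (boilerplate omitted):

                  #include <wayland-client.h>
                  /* generated by wayland-scanner from tearing-control-v1.xml */
                  #include "tearing-control-v1-client-protocol.h"

                  /* Assumed to be bound from the registry elsewhere. */
                  extern struct wp_tearing_control_manager_v1 *tearing_manager;

                  static void set_surface_tearing(struct wl_surface *surface,
                                                  int allow_tearing)
                  {
                      struct wp_tearing_control_v1 *tc =
                          wp_tearing_control_manager_v1_get_tearing_control(
                              tearing_manager, surface);
                      /* Per surface, not per output: a HUD surface can keep
                       * the vsync hint while the game view asks for async. */
                      wp_tearing_control_v1_set_presentation_hint(tc,
                          allow_tearing
                              ? WP_TEARING_CONTROL_V1_PRESENTATION_HINT_ASYNC
                              : WP_TEARING_CONTROL_V1_PRESENTATION_HINT_VSYNC);
                      wl_surface_commit(surface); /* hint is double-buffered state */
                  }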

                  In GPU hardware there is no such thing as "vsync off". There is only a buffer tagged async. Welcome to another problem: people ask for "vsync off", and those dealing with the low-level hardware go "what, the thing we fake up using async?"

                  The idea that you have to be fullscreen to allow tearing is another pile of bogus; that has never been a hardware limitation.

                  Sorry for this being a bit long, but this is just a case where there has been a mess of misunderstandings on both sides.

                  Wayland's "every frame as perfect as possible" would still be a valid goal, and that means a master on/off switch for vsync still makes no sense. Instead we should be taking full advantage of what GPU hardware has been able to do for 20+ years now: async buffers mixed with vsynced buffers is not new tech, it has just been unusable tech for lack of an API/ABI that game and other software developers could use to take advantage of it.


                  • #19
                    Wouldn't it be best if the option was called:
                    "enable tearing to decrease latency"
                    or, even more clearly: "vsync off (enable tearing to decrease latency)"?

                    It makes the user more informed; the user gets information about all the consequences of their selection.
