LatencyFlex v0.1 Released As Drop-In Replacement To NVIDIA Reflex


  • LatencyFlex v0.1 Released As Drop-In Replacement To NVIDIA Reflex

    Phoronix: LatencyFlex v0.1 Released As Drop-In Replacement To NVIDIA Reflex

    Back in January I wrote about LatencyFlex as an open-source, vendor-agnostic alternative to NVIDIA Reflex. This drop-in replacement to NVIDIA's proprietary solution focused on reducing rendering latency is out with its very first release...


  • #2
    Oh, that's amazing. This could give Linux a huge edge over Windows in Unreal/Unity games.

    I wonder though... given this paragraph:

    A conscious design decision in LatencyFleX is that it treats the entire pipeline as a black-box, only assuming that queuing at the bottleneck would be reflected in the delay. It’s a very powerful model that adapts to every kind of bottleneck, not limited to GPUs.
    I wonder if it can ever be made game agnostic? Seems like it doesn't need much information.
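    The black-box idea in the quoted paragraph can be sketched in a few lines: if measured input-to-display latency exceeds one frame time, frames must be queuing at some bottleneck, so the simulation step should be delayed just enough to drain the queue. This is only an illustration of the principle, not the actual LatencyFleX algorithm; all names here are made up.

    ```python
    class BlackBoxPacer:
        """Sketch of black-box latency pacing: queuing anywhere in the
        pipeline (CPU, GPU, compositor) shows up as extra end-to-end
        latency, so throttle frame submission just enough to keep the
        queue empty. Illustrative only, not LatencyFleX's real code."""

        def __init__(self, smoothing=0.1):
            self.smoothing = smoothing
            self.delay = 0.0  # seconds to sleep before the next sim step

        def update(self, frame_time, measured_latency):
            # Latency beyond one frame time implies queued frames.
            queue_delay = max(0.0, measured_latency - frame_time)
            # Exponential smoothing so one spike doesn't cause a stall.
            self.delay += self.smoothing * (queue_delay - self.delay)
            return self.delay
    ```

    The appeal of this model is exactly what the paragraph says: it needs no knowledge of where the bottleneck is, only a latency measurement, which is why it adapts to GPU-bound, CPU-bound, and compositor-bound cases alike.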



    • #3
      Originally posted by brucethemoose View Post
      Oh, that's amazing. This could give linux a huge edge over Windows in unreal/unity games.
      Windows users have had multiple superior options to choose from for years, most notably RTSS Scanline Sync[0] and arguably even more so SpecialK[1]. Both give you a stable zero frames of latency (input shows up in the very next frame) and perfectly flat frame times, and both do so without tearing or introducing microstutter if your GPU is fast enough. If it isn't, you still get zero frames of latency, just without even frame pacing, plus slightly higher latency from waiting for vblank if you want vsync; but input is still processed and shown on screen when the very next frame is drawn, even at 40 FPS on a 60 Hz monitor.
      SpecialK also takes input timing into account when scheduling frames and gives you a lot of options to fine-tune how much you prioritize latency over frame-time stability when your GPU can't keep up due to random load spikes, or to tune latency around game-specific quirks. It even makes naturally stuttery games run smoother than without it at the same time. It's like magic.
      I don't want to bash the author's work or anything, he's working with what is available to him, and sadly on Linux there are some serious limitations in the graphics stack that make it hard or impossible to truly match what these native tools can do, mostly the lack of a real mailbox mode (can't submit and overwrite an already submitted frame, effectively a barrier/queue) and I've heard frame time information is also unreliable.
      Technically this is one of the things Wayland could solve much better than X11, since the compositor controls rendering by requesting new frames as needed. A smart compositor could take into account how long a client takes to deliver a frame and also measure how long input processing takes; if compositors actually tried, you could have zero frames of latency across the whole desktop, for every app, for free. Instead, all you get is a fixed delay if you're lucky, and AFAIK programs tend to render ahead anyway because the protocol wasn't designed for this. Missed opportunity. /WL rant off

      [0]: https://forums.blurbusters.com/viewtopic.php?t=4916
      [1]: https://wiki.special-k.info/en/SwapChain (enable Fast Sync too, if you want the lowest possible latency)
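      The just-in-time scheduling idea behind these tools can be sketched as: instead of rendering as early as possible and letting the frame sit in a queue, wake up only as far before the next vblank as your render time requires, sample input then, and render. This is a toy illustration of the concept on an idealized clock, not RTSS's or SpecialK's actual code; `render_budget` and `vblank_offset` are made-up parameter names.

      ```python
      def next_frame_start(now, refresh_hz, render_budget, vblank_offset=0.0):
          """Just-in-time frame pacing sketch: return when to start
          rendering so the frame finishes just before the next vblank,
          minimizing the age of the sampled input at scanout.
          Assumes an idealized clock where vblanks occur at multiples
          of the refresh period (illustrative only)."""
          period = 1.0 / refresh_hz
          # Time of the next vblank after `now`.
          next_vblank = (int(now / period) + 1) * period + vblank_offset
          # Start only `render_budget` seconds before scanout;
          # never schedule into the past.
          return max(now, next_vblank - render_budget)
      ```

      The trade-off the post describes falls out of `render_budget`: a tight budget minimizes latency but a load spike makes you miss the vblank, while a generous budget absorbs spikes at the cost of older input.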



      • #4
        Originally posted by brucethemoose View Post
        I wonder if it can ever be made game agnostic?
        I asked the LatencyFlex author on IRC:
        Code:
        Jan 09 11:39:31 <MrCooper> ishitatsuyuki: I mean, could it be enabled transparently for any application which uses Vulkan?
        Jan 09 11:41:12 <ishitatsuyuki> MrCooper: Unfortunately no. Most application does rendering on another thread from where it polls input and does simualtions, and an opt-in SDK is necessary for applications to properly throttle the main/input thread
        Jan 09 11:41:47 <MrCooper> bummer
        Jan 09 11:43:55 <ishitatsuyuki> Which is why driver-level frame limiters doesn't reduce latency (or add latency compared to an in-game frame limiter)
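        The point in that last IRC line can be shown with a toy timeline: what matters is whether the limiter's sleep happens before input sampling (in-game limiter) or after the frame is already simulated (driver-level limiter blocking in present). Numbers and names here are purely illustrative.

        ```python
        def input_age_at_display(period, work, limiter):
            """Toy model of limiter placement (seconds). `work` is the
            combined simulate+render time for one frame; `period` is the
            frame cap interval. Illustrative, not measured data."""
            if limiter == "in-game":
                # Sleep FIRST, then sample input and do the work:
                # the displayed frame's input is only `work` old.
                return work
            if limiter == "driver":
                # Input was sampled at the start of the period; the
                # driver then blocks in present to enforce the cap, so
                # the frame reaches the screen a full period after its
                # input was sampled.
                return period
            raise ValueError(f"unknown limiter: {limiter}")
        ```

        With a 16.7 ms cap and 5 ms of work, the in-game limiter yields roughly 5 ms of input age versus roughly 16.7 ms for the driver-level one, which is the "doesn't reduce latency" effect described above.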
        Originally posted by binarybanana View Post
        I don't want to bash the author's work or anything, he's working with what is available to him, and sadly on Linux there are some serious limitations in the graphics stack that make it hard or impossible to truly match what these native tools can do, mostly the lack of a real mailbox mode (can't submit and overwrite an already submitted frame, effectively a barrier/queue)
        Wayland has real mailbox semantics.

        and I've heard frame time information is also unreliable.
        Should be reliable with upstream Linux kernel & Mesa drivers (would need issue reports otherwise), maybe not with the nvidia driver though.

        Technically this would be one of the things Wayland could solve much better than X11 due to how the compositor controls rendering by requesting new frames as necessary. If compositors were smart they could take into account the time it takes the client to deliver a frame and also measure how long it takes to process input.
        mutter does something like this, there's certainly still room for improvement though.
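        The compositor-side version of this can be sketched too: rather than sending the frame callback right after vblank, the compositor delays it based on how long the client has been taking to render, so the client samples input as late as possible. This is only a sketch of the general idea (similar in spirit to mutter's dynamic render-time scheduling, not its actual code); `margin` is a made-up safety parameter.

        ```python
        def frame_callback_delay(period, client_render_time, margin=0.001):
            """How long (seconds) a compositor could wait after vblank
            before sending the next frame callback: the later the
            callback, the fresher the client's input, as long as the
            client's measured render time (plus a safety margin) still
            fits before the next vblank. Illustrative only."""
            return max(0.0, period - client_render_time - margin)
        ```

        A slow client (render time near the full period) gets its callback immediately, while a fast one gets it late, which is exactly the "take into account the time it takes the client to deliver a frame" behavior described above.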
