Wayland Protocols 1.38 Brings System Bell, FIFO & Commit Timing Protocols


  • MrCooper
    replied
    Originally posted by Uiop View Post
    1. The compositor should ANNOUNCE future frames (i.e. the deadlines before frame buffer is flipped), to each client separately, so that clients can synchronize their output properly.
    wayland-protocols!276 is a proposal for this.

    At least two frame-flips must be announced in advance (but 10 would be better).
    That makes little sense. With a fixed refresh rate, the client can extrapolate by itself (the presentation time protocol provides the duration of a refresh cycle). With VRR, it's not really possible to predict anything except the earliest and latest possible times for the next refresh cycle to start.

    2. The compositor must provide to the clients the timing information about current and past frame flips.
    That's what the presentation time protocol is intended for.
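    For illustration, here is a minimal sketch (not from the thread) of how a client can extrapolate upcoming flips from presentation-time feedback on a fixed-refresh output; it assumes headers generated by wayland-scanner, and error handling plus the rest of the client are omitted:

        #include <stdint.h>
        #include <wayland-client.h>
        #include "presentation-time-client-protocol.h"

        static uint64_t last_present_ns; /* clock time of the last presented frame */
        static uint32_t refresh_ns;      /* refresh cycle duration; 0 if unknown (e.g. VRR) */

        static void sync_output(void *data, struct wp_presentation_feedback *fb,
                                struct wl_output *output) { /* not needed here */ }

        static void presented(void *data, struct wp_presentation_feedback *fb,
                              uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec,
                              uint32_t refresh, uint32_t seq_hi, uint32_t seq_lo,
                              uint32_t flags)
        {
            uint64_t sec = ((uint64_t)tv_sec_hi << 32) | tv_sec_lo;
            last_present_ns = sec * 1000000000ull + tv_nsec;
            refresh_ns = refresh;
            wp_presentation_feedback_destroy(fb);
        }

        static void discarded(void *data, struct wp_presentation_feedback *fb)
        {
            wp_presentation_feedback_destroy(fb);
        }

        static const struct wp_presentation_feedback_listener feedback_listener = {
            .sync_output = sync_output,
            .presented = presented,
            .discarded = discarded,
        };

        /* With a fixed refresh rate, the Nth upcoming flip is a simple
         * extrapolation; no announcement protocol is needed for this case. */
        static uint64_t predict_flip_ns(unsigned n)
        {
            return refresh_ns ? last_present_ns + (uint64_t)n * refresh_ns : 0;
        }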

    Originally posted by Daktyl198 View Post

    It has happened, [...]
    Where? (Looking for direct evidence, not hearsay)

    This sounds dumb, tbh. Compositing a desktop takes VERY little time, so little that I can't see Wayland's design ever having an issue with getting clients to send a correct reference before the frame cutoff. I run my KDE desktop (which arguably does FAR more compositing, with way more bells and whistles, than any other Linux desktop environment) at 240Hz and have never once noticed input or frame latency on the desktop. Maybe at 480Hz+ you might start running into issues... maybe.
    The higher the refresh rate, the less of an issue latency becomes, since a refresh cycle is so short that missing one isn't a big deal (a missed cycle costs about 4.2 ms at 240Hz versus about 16.7 ms at 60Hz). What Uiop mentioned is rather important for lower refresh rates.

    The only applications I could see running into issues hitting frame times are 3D applications like games, almost all of which are run fullscreen and bypassing the compositor entirely.
    There's no such thing as bypassing the compositor; it's always actively involved in presenting client frames, so it incurs non-zero latency (though latency can be lower with direct scanout of client frames).

    Originally posted by Uiop View Post
    I thought Wayland wants "perfect" frames. However it is fine if those frames are so perfectly perfect that they are displayed at the wrong time.
    "Every frame is perfect" isn't about timing or never missing a display refresh cycle (which isn't possible to guarantee, not even by pessimizing latency). It's about allowing the client to ensure its window is always presented in a fully consistent state, never in an inconsistent mix of intermediate states.

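    As a hedged sketch of what that means in practice (names illustrative, buffer allocation omitted): all state set on a wl_surface is pending until the commit, which applies it atomically.

        #include <wayland-client.h>

        /* The compositor keeps showing the previous content until the commit,
         * so it never samples a half-updated window. */
        static void submit_consistent_frame(struct wl_surface *surface,
                                            struct wl_buffer *buffer,
                                            int32_t width, int32_t height)
        {
            wl_surface_attach(surface, buffer, 0, 0);
            wl_surface_damage_buffer(surface, 0, 0, width, height);
            wl_surface_commit(surface); /* all pending state applied atomically */
        }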


  • Daktyl198
    replied
    Originally posted by Uiop View Post
    If the client can determine that it cannot render the frame before the deadline, then it should skip the current frame and start working on the next frame immediately. It shouldn't waste time on frames that are not going to be displayed at the correct moment. Low latency is as simple as that.
    Which is a fine idea in theory... but in reality, clients (applications) render frames so fast that it never comes up. I run KDE on an ancient AMD mobile CPU and there is zero compositing delay or stuttering. In the worst case, a client renders a frame slightly too slowly and Wayland simply discards that frame and waits for the next, but I've never seen it happen in my couple of years of running Wayland. And I do notice stuttering and input lag; I'm quite sensitive to it, which is why I'm running Wayland at all. X.org introduced extremely noticeable latency when running my display at 144Hz (let alone 240Hz), to the point where even though the "FPS" was 144 it was unusable.
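    For what it's worth, the pacing mechanism Wayland clients already have is the frame callback; here is a rough sketch (draw_frame is a hypothetical application function that renders and attaches a buffer):

        #include <wayland-client.h>

        extern void draw_frame(struct wl_surface *surface); /* hypothetical */

        static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms);

        static const struct wl_callback_listener frame_listener = {
            .done = frame_done,
        };

        /* The compositor fires this callback when starting a new frame is
         * useful, so a client that only draws here doesn't waste work on
         * frames that would be discarded anyway. */
        static void request_next_frame(struct wl_surface *surface)
        {
            struct wl_callback *cb = wl_surface_frame(surface);
            wl_callback_add_listener(cb, &frame_listener, surface);
        }

        static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms)
        {
            struct wl_surface *surface = data;
            wl_callback_destroy(cb);
            draw_frame(surface);
            request_next_frame(surface); /* re-arm before committing */
            wl_surface_commit(surface);
        }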

    I'm not sure exactly what's causing the stuttering on your display, but I don't think it'll be solved by adding the feature you're talking about lol. Additionally, traditional applications do not bypass compositing in fullscreen, because "compositor fullscreen" and "actual fullscreen" are two different things, so going fullscreen in a regular application wouldn't help either.

    The one place I can see this being useful is nested compositors, but that use case is already extremely niche. Still maybe worth doing, but not high on the priority list. As for the Wayland protocol being "hard to augment": it really isn't. You can design a Wayland protocol right now and submit it for review. Having the compositor send the clients timing signals would hardly be that crazy of an addition. Why not plead your case on the wayland-protocols GitLab instead of on the Phoronix forums?



  • Daktyl198
    replied
    Originally posted by MrCooper View Post
    That has always been the case, no such thing has happened.
    It has happened, which is the entire reason the Valve developer feels the need to specify the rules surrounding NACKs.

    Originally posted by Uiop View Post

    Yes, I wasn't very specific.
    Xorg is not a good comparison point, because it is an '80s design that never concentrated on low latency.

    To the latency specifics:

    1. The compositor should ANNOUNCE future frames (i.e. the deadlines before the frame buffer is flipped) to each client separately, so that clients can synchronize their output properly. Those deadlines can be no more than estimates; nevertheless, they are crucial. At least two frame-flips must be announced in advance (but 10 would be better). The compositor is allowed to modify the timing estimates at any moment.

    2. The compositor must provide to the clients the timing information about current and past frame flips. For example, did the previous frame arrive too late from the client? More precisely, the compositor communicates the time difference (how much excess time there was, or how much time was lacking, when compositing the last composited frame).
    This sounds dumb, tbh. Compositing a desktop takes VERY little time, so little that I can't see Wayland's design ever having an issue with getting clients to send a correct reference before the frame cutoff. I run my KDE desktop (which arguably does FAR more compositing, with way more bells and whistles, than any other Linux desktop environment) at 240Hz and have never once noticed input or frame latency on the desktop. Maybe at 480Hz+ you might start running into issues... maybe. And even then, Wayland has things in place to handle it; it's not like it just panics.

    The only applications I could see running into issues hitting frame times are 3D applications like games, almost all of which are run fullscreen and bypassing the compositor entirely.

    Not saying it shouldn't be added... but I'm saying I doubt it would do much of any good unless you're on extremely ancient hardware trying to run a riced up desktop.



  • access
    replied
    GNOME has keyboard shortcuts for window management too.



  • access
    replied
    Originally posted by Uiop View Post

    2. The macOS concept of window management is overly simplistic; in my opinion, it is targeted to complete noobs who don't have a good understanding of classical window management. It doesn't even support control by keyboard (without mouse).
    Then, less development effort is given to other desktops, because Gnome Shell absorbs most of it (in an unfair way, just because it is the default desktop).
    It's a bit amusing to say that an OS which can trace its roots back to one of the first GUIs lacks classical window management. And you're wrong about macOS not having keyboard shortcuts for window management.



  • MrCooper
    replied
    Originally posted by Uiop View Post
    But, Wayland overall is still quite badly designed regarding timing and latency issues, and these additions won't be enough to provide full support for low-latency scenarios.
    You'd have to be more specific; in general, though, Wayland supports the same or lower latency than Xorg.

    Originally posted by Daktyl198 View Post
    [...] any Gnome rep can't NACK protocols just because they don't like them,
    That has always been the case, no such thing has happened.

    Originally posted by MastaG View Post
    If you look at the whole Gnome dynamic triple buffering MR.. then this dude named "Michel Dänzer" is always criticizing everything.
    Mind the selection bias: reviewing a proposal is mainly about raising issues and discussing possible solutions. It doesn't mean everything's bad; there's just no need to list everything good in the same way. Anything which isn't raised is assumed to be good, or at least acceptable.

    I mean shit always stays unresolved for so fucking long.
    Almost like they purposely don't want shit to progress.
    This is a common misconception: raising issues with a proposal and discussing possible solutions is moving the proposal forward. If the proposal just sits there with no activity, it'll never get merged.

    Anyway, I was pushing for merging the triple buffering MR for the 47 release. In the end it was Daniel himself who decided against it. Moving a proposal forward is first and foremost the responsibility of the proposer. I sincerely hope Daniel will be able to get it over the line for 48.

    Originally posted by MastaG View Post
    I know it's complicated and I'm in no shape or form educated on how to implement a certain feature into some Wayland compositor.
    And yet you feel entitled to criticize those of us who are.

    If the GBM spec doesn't say anything about synchronization you can be like:
    The documentation for gbm_surface_lock_front_buffer doesn't say anything about synchronously waiting for the GPU to finish because it's not supposed to do that.
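    For context, a sketch of the usual compositor pattern around that call (EGL/GBM setup omitted); with upstream drivers the lock returns immediately and the kernel synchronizes the page flip against the GPU via implicit fencing:

        #include <gbm.h>
        #include <EGL/egl.h>

        static struct gbm_bo *present_frame(EGLDisplay dpy, EGLSurface egl_surface,
                                            struct gbm_surface *gbm_surf)
        {
            /* Queues the rendered frame; does not wait for the GPU. */
            eglSwapBuffers(dpy, egl_surface);

            /* Expected to hand back the new front buffer without waiting for
             * rendering to finish; a driver that waits here (glFinish-like)
             * blocks the calling compositor thread. */
            return gbm_surface_lock_front_buffer(gbm_surf);
        }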

    1. Okay NVIDIA does it differently, let's take it into account and implement it so it works for both Mesa and NVIDIA.

    2. Fuck NVIDIA, if it doesn't behave like Mesa then just flush it down the toilet.
    If I were a driver vendor entering an established ecosystem, I'd first of all try to make my driver work as well as possible without any changes to the ecosystem, allowing my users to use the whole ecosystem right away. Only then would I suggest (backwards compatible) changes to the ecosystem to address any pain points for my driver. It's somewhat similar to getting invited to somebody else's place.

    Originally posted by bearoso View Post
    The current mutter triple buffering patch is a hack. The idea is to get weak Intel GPUs to ramp up their clock speed to maintain the maximum framerate. If performance falls below the refresh rate, it switches to triple buffering and tells the driver to draw two frames at once, briefly doubling the workload. This raises the clock speed, but it's not sustainable: once the swap queue is full, it returns to drawing one frame at a time and the clock drops off, creating a microstutter cycle. The more obvious thing to do would be to directly, not indirectly, instruct the GPU to raise the clock speed.
    The rationale about clocks was always dubious, there are other unavoidable reasons why triple buffering is needed in some cases though. Similarly, while the implementation wasn't great early on, it has improved a lot.

    The problem vanvugt is having with NVIDIA seems to be that when turning on explicit sync support, it absolutely refuses to queue extra frames, and instead blocks.
    It's not about explicit sync but about the nvidia driver handling implicit synchronization in an awkward way (in gbm_surface_lock_front_buffer, making it behave like glFinish), which blocks the calling compositor thread (not only with triple buffering, in fact it limits the extent to which the GPU can run ahead of the CPU with triple buffering). Upstream drivers handle this asynchronously between the GL & kernel drivers. In https://gitlab.gnome.org/GNOME/mutte...1#note_2236184 I described a straightforward way to do the same in the nvidia driver.



  • lewhoo
    replied
    Originally posted by MastaG View Post
    Well some of the Gnome devs can be a pain in the ass for sure.
    On the other hand.. KDE 6.2 just hit Fedora and I'm now getting flickering artifacts in my screen that's connected to my Nvidia dGPU (wayland).
    I guess they could be a little more strict when merging new code.
    They indeed could. On the other hand, they fixed or worked around this bug quite quickly; I'm not sure whether that was after I filed it or whether work had started before. There's a workaround in the bug report.



  • access
    replied
    Those are all very subjective points.



  • reba
    replied
    Originally posted by Weasel View Post
    Are you talking about GNOME itself or the devs? Both are bad but need different answers, and this topic was about the devs.

    Their opinions are worth jack shit but they try to force feed them to everyone despite protests and then wonder why "omg why we are so hated?!?"

    Their skill level is mediocre at best (see the amount of bugs) and they like to pretend they're more important than they are. Here's one example off the top of my head: https://discourse.gnome.org/t/solved...y-default/1316

    They're delusional just like Apple shills who don't realize how a company with so many rabid fanboys still has such a pissful marketshare everywhere, and yet they take it as a sign that "we should copy macOS". WTF?

    At least Linux has dominant market share on servers and phones.
    Good approach; but I think we can break it down even more:

    - GNOME Desktop is bad (simplistic, opinionated, not customizable, bad quality, no server-side decorations)
    - GNOME Toolkit is bad (libadwaita, no theming, buttons in the headerbar, objectively bad UI)
    - GNOME developers are bad (see above)
    - GNOME Foundation is bad (read the latest articles)

    Where's the yang in the yin?



  • Weasel
    replied
    Originally posted by Uiop View Post
    I have noticed that this forum is quite divided over many things, and one of them is whether GNOME is good/bad.
    From my point of view, I like to support diversity, meaning that every user chooses what he likes most. I don't have any particular stance about GNOME.

    So Weasel (or someone else), could you please explain to me what the worst of GNOME's transgressions are? I would like to evaluate those. Besides explanations, I also accept links to evidence or to articles that explain the issues.
    Are you talking about GNOME itself or the devs? Both are bad but need different answers, and this topic was about the devs.

    Their opinions are worth jack shit but they try to force feed them to everyone despite protests and then wonder why "omg why we are so hated?!?"

    Their skill level is mediocre at best (see the amount of bugs) and they like to pretend they're more important than they are. Here's one example off the top of my head: https://discourse.gnome.org/t/solved...y-default/1316

    They're delusional just like Apple shills who don't realize how a company with so many rabid fanboys still has such a pissful marketshare everywhere, and yet they take it as a sign that "we should copy macOS". WTF?

    At least Linux has dominant market share on servers and phones.

