Wayland Protocols 1.38 Brings System Bell, FIFO & Commit Timing Protocols

  • Brittle2
    replied
    Originally posted by MastaG View Post
    Well, some of the GNOME devs can be a pain in the ass for sure.
    If you look at the whole GNOME dynamic triple buffering MR... then this dude named "Michel Dänzer" is always criticizing everything.
    I mean shit always stays unresolved for so fucking long.
    Almost like they purposely don't want shit to progress.

    On the other hand... KDE 6.2 just hit Fedora, and I'm now getting flickering artifacts on the screen connected to my Nvidia dGPU (Wayland).
    I guess they could be a little more strict when merging new code.
    They can't fix NVIDIA's incompetence.

  • access
    replied
    Well, you wouldn't know what they know as you admittedly aren't familiar with said projects.

  • access
    replied
    I think I'll listen to the multi-decade Linux graphics stack developer over someone who admittedly has little Wayland or GNOME knowledge.

  • MrCooper
    replied
    Originally posted by Uiop View Post
    You are actually arguing that many Wayland clients should include some frame-rate detection functionality (which might not be reliable at all).
    No detection needed. The presentation time protocol gives the client the timestamp when its last frame was presented, and the duration of a refresh cycle. From those two values the client can trivially extrapolate future presentation times, same as the compositor itself would.
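
    For illustration, a minimal C sketch of that (hedged: "struct timing" and its field names are mine, not part of the protocol, and the generated header name depends on how wayland-scanner was invoked) — the presented event already delivers both the last presentation timestamp and the refresh duration, so prediction is plain arithmetic:

        #include <stdint.h>
        #include "presentation-time-client-protocol.h" /* generated by wayland-scanner */

        struct timing {
            uint64_t last_present_ns; /* when the last frame was presented */
            uint64_t refresh_ns;      /* duration of one refresh cycle (0 = unknown/variable) */
        };

        /* Handler for wp_presentation_feedback.presented; listener setup and the
         * sync_output/discarded handlers are omitted for brevity. */
        static void presented(void *data, struct wp_presentation_feedback *fb,
                              uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec,
                              uint32_t refresh, uint32_t seq_hi, uint32_t seq_lo,
                              uint32_t flags)
        {
            struct timing *t = data;
            uint64_t sec = ((uint64_t)tv_sec_hi << 32) | tv_sec_lo;

            t->last_present_ns = sec * 1000000000ULL + tv_nsec;
            t->refresh_ns = refresh;
            wp_presentation_feedback_destroy(fb);
        }

        /* Extrapolate the n-th upcoming presentation time, same as the compositor would. */
        static uint64_t predicted_present_ns(const struct timing *t, unsigned n)
        {
            return t->last_present_ns + (uint64_t)n * t->refresh_ns;
        }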

    I don't believe your claim that the compositor has no extra information. It has more direct access to the GPU. It can better estimate the compositing delay and compositing complexity. It knows about all the other connected Wayland clients, which can affect the preferred timing of frames.
    None of that affects the above in any way.

    In triple buffering, do the clients have problems with excessively high frame rates causing excessive GPU utilization? I've heard of such problems.
    So far, frame events have been used to avoid that. The FIFO protocol, which landed in the 1.38 release, addresses some issues with frame events, though note that it may actually result in higher latency if the client doesn't actively minimize it along the lines below.
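
    For reference, the frame-event approach looks roughly like this in a client (a hedged sketch: struct state and draw_and_commit are placeholder names, and all buffer handling is omitted) — the client simply defers drawing the next frame until the compositor says it's a good time:

        #include <wayland-client.h>

        struct state {
            struct wl_surface *surface;
            /* buffers, renderer, etc. omitted */
        };

        static void draw_and_commit(struct state *s);

        /* Called by the compositor when it's a good time to draw a new frame. */
        static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms)
        {
            wl_callback_destroy(cb);
            draw_and_commit(data);
        }

        static const struct wl_callback_listener frame_listener = {
            .done = frame_done,
        };

        static void draw_and_commit(struct state *s)
        {
            /* Request the next frame event before committing this frame. */
            struct wl_callback *cb = wl_surface_frame(s->surface);
            wl_callback_add_listener(cb, &frame_listener, s);

            /* ... render into a buffer and wl_surface_attach() it here ... */
            wl_surface_commit(s->surface);
        }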

    In both double and triple buffering, are the clients really achieving the minimum possible latency? Without good information on frame timing, they cannot do it.
    TL;DR: They can already.

    The client can keep track of these timestamps for each frame:

    1. When it starts working on the frame.
    2. When it submits the frame to the compositor (calls wl_surface_attach + wl_surface_commit and flushes the display connection).
    3. When the GPU finishes drawing the frame (GL/Vulkan timestamp queries).
    4. When the frame was presented (Presentation time protocol).

    From these, it can estimate / probe:

    5. How long before the presentation time the compositor deadline is.

    And from that it can determine when it needs to start working on the next frame to achieve minimal latency. (Mutter does the moral equivalent of this to minimize its own latency impact.)
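
    A rough sketch of that bookkeeping (the struct, its field names and the margin/slack heuristic are illustrative assumptions, not something the protocols or Mutter prescribe; all times are CLOCK_MONOTONIC nanoseconds):

        #include <stdint.h>

        struct frame_timing {
            uint64_t start_ns;      /* 1. client started working on the frame */
            uint64_t commit_ns;     /* 2. wl_surface_attach + wl_surface_commit + flush */
            uint64_t gpu_done_ns;   /* 3. GPU finished drawing (GL/Vulkan timestamp query) */
            uint64_t presented_ns;  /* 4. presentation time feedback */
        };

        /* 5. Probe how long before presentation the compositor's deadline is:
         * the smallest commit-to-present gap among presented frames shows how
         * late a commit has been known to still make it. */
        static uint64_t estimate_deadline_margin(const struct frame_timing *f, int n)
        {
            uint64_t margin = UINT64_MAX;
            for (int i = 0; i < n; i++) {
                uint64_t gap = f[i].presented_ns - f[i].commit_ns;
                if (gap < margin)
                    margin = gap;
            }
            return margin;
        }

        /* Latest time the client can start the next frame for minimal latency:
         * predicted presentation time, minus the deadline margin, minus how long
         * a frame takes to produce (CPU start to GPU done), minus some safety slack. */
        static uint64_t next_start_ns(uint64_t next_present_ns, uint64_t deadline_margin_ns,
                                      uint64_t frame_cost_ns, uint64_t slack_ns)
        {
            return next_present_ns - deadline_margin_ns - frame_cost_ns - slack_ns;
        }

    Keeping a small ring of recent frame_timing samples lets the client re-estimate the margin continuously and adapt if the compositor's deadline shifts.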

    A new protocol which tells the client the compositor's prediction of item 5 for the next refresh cycle wouldn't make any difference for items 1-4. Even for item 5, it's still just a prediction, which may turn out wrong. In other words, it's a minor quantitative difference, not a qualitative one.

    In summary, you're barking up the wrong tree here. You'd do better to poke client developers into taking advantage of the tools already available to them. No Wayland protocol can magically take care of it for them.

  • billyswong
    replied
    Originally posted by Uiop View Post
    First of all, the decision to enable/disable subpixel antialiasing has nothing to do with display resolutions (as someone pointed out).
    But no one pointed out that it is only partially related to "pixel density"; i.e. the eye-to-display distance should also be taken into account.
    Therefore, it is impossible to automatically detect whether "HiDPI" mode should be used; instead, it should be the user's decision.
    If you take "pixel density" only in the logical sense, i.e. how many device pixels are used per logical / CSS pixel on a webpage, then we can skip both the eye-to-display distance and the device pixel density detection.

    Let users specify the logical pixel density. Let users specify the subpixel orientation of each display manually. All those "Hardware lies! We can't detect them correctly!" excuses will be moot.

  • access
    replied
    Way to selectively "quote" the issue.

    But the last paragraph of the blog post should be enough explanation. I guess MRs to plumb per-component alpha would be considered.
    Last edited by access; 18 October 2024, 02:05 AM.

  • billyswong
    replied
    Also see this closed ticket: https://gitlab.gnome.org/GNOME/gtk/-/issues/3787


    Last edited by billyswong; 17 October 2024, 09:39 PM.

  • billyswong
    replied
    Uiop: this GTK blog post should be authoritative enough for your needs.

  • MrCooper
    replied
    Originally posted by Uiop View Post
    I'm strongly against such assumptions about what clients should "guess".
    In a good protocol, the client says what it wants to the compositor, and the compositor responds with information requested by the client.
    Deviating from such principles eventually results in a bad protocol.

    So, from my point of view, the client requests estimations of future frame times and then gets those estimates from the compositor.
    The compositor doesn't have more information than the refresh rate / duration of a refresh cycle; all it can do is extrapolate, the same as the client. Passing 10 extrapolated values in the protocol doesn't give the client any more information than the duration of a refresh cycle, which is already available. Passing redundant information like that in the protocol is wasteful.

    You should try to decouple the protocol from your wild guesses about what the user's hardware can do.
    It's not a wild guess; it's how today's displays work.

    Experience shows that trying to anticipate the future in a protocol (or API) tends to be a mistake, because things tend to evolve differently than we expect. A good protocol works well for the real world at present and is extensible for future needs.

    Anyway, if that "presentation protocol" is done right, it should also be able to solve some of Wayland's problems with double buffering and triple buffering.
    There are no such problems I'm aware of. A Wayland client can use as many buffers as it likes.

  • billyswong
    replied
    Originally posted by Uiop View Post

    About copying the macOS desktop: I personally don't like the macOS desktop or GNOME Shell, but I guess about 20% of people do (i.e. the users of macOS), so a similar desktop working atop GNU/Linux is a good way to attract those users.
    Other than that, I don't see how GNOME is copying macOS.

    About HiDPI and the link you have posted: My display is set to "slight subpixel antialiasing", because I use something in between HiDPI and normal. Given current desktop resolution sizes, I would guess that most users are similar to me here, with a wide variation. So, it is impossible to have one default setting that fits everyone, and, naturally, there are going to be fights about the default setting for subpixel antialiasing.
    Therefore, I can't take the linked discussion as evidence of anything extraordinary.
    The link provided by Weasel is about grayscale AA being the default. Unfortunately, the GNOME developers have pushed their arrogance to the point of removing subpixel AA in GTK4. You can no longer choose subpixel AA even if you know what it is and prefer it.

    I agree that with high enough DPI, subpixel AA can retire. But the market share of such displays is far from that in 2024, and it won't be there even if we fast-forward to 2034. It takes a pixel density of 250%+ of the logical 96 dpi (roughly 240 dpi) for subpixel AA to stop being useful, while most computer monitors are between 100% and 200%. Monitors in the classical DPI range are going to stay for perhaps another 10 to 20 years.

    Because the decision is made in GTK, they are forcing their aesthetics on more than just GNOME DE users. Oh yeah, those arrogant fanboys will call for the abolishment of "hobby" DEs. I can hear them coming.
