Wayland Protocols 1.38 Brings System Bell, FIFO & Commit Timing Protocols


  • #41
    Way to selectively "quote" the issue.

    But the last paragraph of the blog post should be enough explanation. I guess MRs to plumb per-component alpha would be considered.
    Last edited by access; 18 October 2024, 02:05 AM.



    • #42
      Originally posted by Uiop View Post
      First of all, the decision to enable/disable subpixel aliasing has nothing to do with display resolutions (as someone pointed out).
      But no one pointed out that it is only partially related to "pixel density"; i.e. the eye-to-display distance should also be taken into account.
      Therefore, it is impossible to automatically detect whether "HiDPI" mode should be used; instead, it should be the user's decision.
      If you take "pixel density" only in the logical sense, i.e. how many device pixels are going to be used per logical / CSS pixel on a webpage, then we can skip both the eye-to-display distance and the device pixel density detection.

      Let users specify the logical pixel density. Let users specify the subpixel orientation of each display manually. All those "Hardware lies! We can't detect them correctly!" excuses will be moot.



      • #43
        Originally posted by Uiop View Post
        You are actually arguing that many Wayland clients should include some frame-rate detection functionality (which might not be reliable at all).
        No detection needed. The presentation time protocol gives the client the timestamp when its last frame was presented, and the duration of a refresh cycle. From those two values the client can trivially extrapolate future presentation times, same as the compositor itself would.
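
        A minimal sketch of that extrapolation (illustrative struct and field names, assuming the presentation timestamp and refresh duration reported by the wp_presentation_feedback "presented" event):

        #include <stdint.h>

        /* Filled in from the wp_presentation_feedback "presented" event, which
         * carries the presentation timestamp and the duration of one refresh
         * cycle in nanoseconds. */
        struct present_info {
            uint64_t last_present_ns; /* when the last frame hit the screen */
            uint64_t refresh_ns;      /* length of one refresh cycle */
        };

        /* Predict when the n-th upcoming refresh cycle will be presented,
         * the same extrapolation the compositor itself would do. now_ns must
         * come from the same clock as the presentation timestamps. */
        static uint64_t predict_present_ns(const struct present_info *info,
                                           uint64_t now_ns, unsigned cycles_ahead)
        {
            uint64_t cycles_since = (now_ns - info->last_present_ns) / info->refresh_ns;
            return info->last_present_ns +
                   (cycles_since + cycles_ahead) * info->refresh_ns;
        }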

        I don't believe your claim that the compositor has no extra information. It has more direct access to the GPU. It can better estimate the compositing delay and compositing complexity. It knows about all the other connected Wayland clients, which can affect the preferred timing of frames.
        None of that affects the above in any way.

        In triple buffering, do the clients have problems with using too high frame rates and causing too high GPU utilization? I've heard of those problems.
        So far, frame events have been used to avoid that. The FIFO protocol, which landed in the 1.38 release, addresses some issues with frame events, though note that this may actually result in higher latency if the client doesn't actively minimize it along the lines below.
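
        For reference, the frame-event throttling mentioned above looks roughly like this (a sketch only; the usual display/registry/buffer setup is omitted and function names other than the wl_* calls are illustrative):

        #include <wayland-client.h>

        static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms);

        static const struct wl_callback_listener frame_listener = {
            .done = frame_done,
        };

        static void render_frame(struct wl_surface *surface)
        {
            /* Request a frame event before committing; "done" fires when the
             * compositor considers it a good time to draw the next frame. */
            struct wl_callback *cb = wl_surface_frame(surface);
            wl_callback_add_listener(cb, &frame_listener, surface);

            /* ... draw, wl_surface_attach(), wl_surface_damage() ... */
            wl_surface_commit(surface);
        }

        static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms)
        {
            wl_callback_destroy(cb);
            render_frame(data); /* next frame, throttled to the compositor's pace */
        }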

        In both double and triple buffering, are the clients really achieving the minimum possible latency? Without good information on frame timing, they cannot do it.
        TL;DR: They can already.

        The client can keep track of these timestamps for each frame:

        1. When it starts working on the frame.
        2. When it submits the frame to the compositor (calls wl_surface_attach + wl_surface_commit and flushes the display connection).
        3. When the GPU finishes drawing the frame (GL/Vulkan timestamp queries).
        4. When the frame was presented (Presentation time protocol).

        From these, it can estimate / probe:

        5. How long before the presentation time the compositor deadline is.

        And from that determine when it needs to start working on the next frame to achieve minimal latency. (Mutter is doing the moral equivalent of this to minimize its own latency impact)
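
        A rough sketch of how items 1-5 combine (illustrative only, not Mutter's or any particular client's code; a real client would filter these values over many frames and keep a safety margin):

        #include <stdint.h>

        struct frame_timing {
            uint64_t start_ns;     /* 1. started working on the frame */
            uint64_t commit_ns;    /* 2. wl_surface_commit() + flush */
            uint64_t gpu_done_ns;  /* 3. GPU finished (timestamp query) */
            uint64_t present_ns;   /* 4. presentation-time feedback */
        };

        /* 5. Rough bound on how long before presentation the commit has to
         * happen, probed from a frame that made it on time. */
        static uint64_t deadline_margin_ns(const struct frame_timing *t)
        {
            return t->present_ns - t->commit_ns;
        }

        /* Latest time to start the next frame for minimal latency: predicted
         * presentation time, minus the compositor's deadline margin, minus the
         * client's own expected CPU + GPU render time. */
        static uint64_t next_start_ns(uint64_t predicted_present_ns,
                                      uint64_t margin_ns,
                                      uint64_t expected_render_ns)
        {
            return predicted_present_ns - margin_ns - expected_render_ns;
        }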

        A new protocol which tells the client the compositor's prediction of item 5 for the next refresh cycle wouldn't make any difference for items 1-4. Even for item 5, it's still just a prediction, which may turn out wrong. In other words, it's a minor quantitative difference, not a qualitative one.

        In summary, you're barking up the wrong tree here. You should rather poke client developers to take advantage of the tools already available to them. No Wayland protocol can magically take care of it for them.



        • #44
          I think I'll listen to the multi-decade Linux graphics stack developer over someone who admittedly has little Wayland or GNOME knowledge.



          • #45
            Well, you wouldn't know what they know, as you admittedly aren't familiar with said projects.



            • #46
              Originally posted by MastaG View Post
              Well some of the Gnome devs can be a pain in the ass for sure.
              If you look at the whole GNOME dynamic triple buffering MR... then this dude named "Michel Dänzer" is always criticizing everything.
              I mean shit always stays unresolved for so fucking long.
              Almost like they purposely don't want shit to progress.

              On the other hand... KDE 6.2 just hit Fedora and I'm now getting flickering artifacts on the screen that's connected to my NVIDIA dGPU (Wayland).
              I guess they could be a little more strict when merging new code.
              They can't fix NVIDIA incompetence.

