KDE Lands Wayland Fractional Scaling Support


  • Quackdoc
    replied
    Originally posted by piotrj3 View Post
    I would say that for true Wayland adoption we need to cover more of the normal use cases.

    Fractional scaling was one showstopper for me.

    Another was allowing tearing (for gamers).

    Another is proper Nvidia support (it can be resolved 2 ways https://gitlab.freedesktop.org/xorg/...7#note_1694786 ).

    Another is HDR.

    Another is high color depth (10-bit+) support.

    None of those are the biggest issues. The biggest issues by far are fragmentation: different compositors need different screen capture options, and even the unified options are flaky; accessibility tools are often specific to compositors; apps need to implement compositor-specific features to offer valuable functionality (MPV's stay-on-top feature, for instance); and so on.

    The amount of fragmentation on Wayland is absurd, driven by how slowly protocols get adopted into the Wayland protocols and the Portals, with the developers of those projects often outright hostile to some of these features because they expose a "security risk". Since, I guess, people can't be trusted with their own systems now.



  • piotrj3
    replied
    I would say that for true Wayland adoption we need to cover more of the normal use cases.

    Fractional scaling was one showstopper for me.

    Another was allowing tearing (for gamers).

    Another is proper Nvidia support (it can be resolved 2 ways https://gitlab.freedesktop.org/xorg/...7#note_1694786 ).

    Another is HDR.

    Another is high color depth (10-bit+) support.



  • Crion
    replied
    Originally posted by oiaohm View Post
    Yes, sfwbar from the sway developers supports both the old X11 system tray via Xwayland and the new dbus StatusNotifierItem system tray.
    I'm the author of sfwbar, and this isn't quite right. sfwbar implements support for StatusNotifierItem and doesn't support the X11 tray (the Freedesktop System Tray specification) directly. That said, you can use X11 tray applications with any SNI tray via XembedSNIProxy, a project originating from KDE. The proxy effectively creates an invisible X11 tray and passes the tray icons through to the SNI host. The result usually doesn't look great: X11 tray icons are small bitmaps (some as small as 8x8 pixels), which is far too small for modern HiDPI displays, and there really is no good way to scale up an 8x8 pixel bitmap.

    Also, while I originally wrote sfwbar as a taskbar for sway, I'm not a sway developer, and sfwbar should work with a variety of Wayland compositors now.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    I discuss here because I assume that not everyone here, on this forum, is under the same influence of groupthink and confirmation bias (as Wayland developers are).

    I think that my reasons are quite obvious and easy to justify. I also think that what you just wrote is an obviously incorrect argument.
    my design is so simple to "invent"
    Maybe your design is so simple because it missed critical things.
    1) Amdahl's Law. Asynchronous is not an optional thing going forward. Those making silicon lost the means to keep doubling clock speeds years ago, which made parallel processing far more important.
    2) Ever heard of the idea of a single point of failure?

    A Wayland compositor can in theory be restarted without stopping applications; there are still a lot of finer points to sort out before that is a reality for normal operations.

    Wayland compositors should be restartable in future. Take a global hotkey to restart the Wayland compositor, for example: you don't want that to go offline just because the Wayland compositor has stopped running, right? So maybe, locally, you don't want everything going through the same IPC.

    3) Time.

    This is the biggest problem with making everything 100 percent synced to frames. Yes, historically input has been measured in milliseconds; Windows, macOS and X11 all use milliseconds for when an input event happened.

    But there is a strict difference between a local and a remote protocol here.

    Notice how Arcan just generates a new event with a new time when sending keyboard input to a Wayland or X11 client. The same applies to the mouse; you can find it in that same file. Arcan A12 mouse/keyboard handling is lacking some information.

    The Arcan A12 protocol simply does not have "mouse movement velocity" information, which means particular applications will not work well.

    Where does libinput, which most Wayland compositors use, get the time for keyboard and mouse events? That's right: from the timestamp the Linux kernel put on the input event when the kernel received it. That kernel timestamp is only minimally affected by the Linux kernel scheduler and CPU clock speeds.
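
    As a rough illustration of where those timestamps come from, here is a minimal sketch (not code from any compositor) that reads raw evdev events and prints the kernel's timestamp on each relative-motion event. The device path is a placeholder and the program needs read permission on that event node; compositors get the same events and timestamps through libinput instead of reading evdev directly.

        /* Minimal sketch: reading kernel input timestamps straight from evdev.
         * Build with: cc -o evdump evdump.c */
        #include <fcntl.h>
        #include <linux/input.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* Placeholder path: pick whichever event node is your mouse. */
            int fd = open("/dev/input/event0", O_RDONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            struct input_event ev;
            while (read(fd, &ev, sizeof ev) == sizeof ev) {
                if (ev.type == EV_REL) {
                    /* ev.time is the kernel's timestamp for this event,
                     * taken when the kernel received it from the hardware. */
                    printf("%ld.%06ld  REL code=%u value=%d\n",
                           (long)ev.time.tv_sec, (long)ev.time.tv_usec,
                           (unsigned)ev.code, ev.value);
                }
            }
            close(fd);
            return 0;
        }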

    xfcemint, can you see yet that there is a difference between a local and a remote desktop? Remote ones, due to latency and clock issues, normally end up not supporting, or badly supporting, any application that needs to use "mouse movement velocity".

    Locally, since every part can use the same clock, you don't need to build the IPC into a single solution, and you don't need a single point of failure. Local can use the OS kernel's timestamping on everything and is therefore able to calculate mouse acceleration that translates closely to output operations, because both sides are using close to the same clock.

    Not having a unified clock forces workarounds. For a remote protocol there are two common ones:
    1) The most common: don't support particular classes of applications, and be latency sensitive, because of the effect on input of not having a unified clock for timestamps or some replacement for it. Arcan is in this camp.
    2) Have a very complex system for expressing when an input event happened relative to something else, so that input event velocities do exist. This still ends up not working in all cases; existing remote desktop examples of this approach get to about 98% of the time working as expected. Remember, you still have the 2% of the time where something happens that the user is not expecting.

    With both of these, remember to save/backup regularly (as in every 5 to 10 minutes or less), because at some point the application you are controlling is going to presume you did something different with the mouse/pen from what you really did, due to the lost information. It could presume something completely bad, like that you requested to delete your entire work, because that is where it presumed the mouse pointer was going to end up. A simple case that shows these issues quickly is trying to use an application like Krita over a remote desktop connection of any form.

    Wayland and Arcan both come out of the X.Org X12 documentation. Wayland decided from the start that it was going to be a local protocol, and this gives it a lot more freedom on IPC. Arcan decided from the start that it was going to be a remote protocol, which means it has to be a single IPC connection. The X11 protocol, over its complete history, could never make up its mind whether it was a remote or a local protocol, so you have sections of X11 designed as if it were a remote protocol and sections designed as if it were a local protocol, resulting in the worst of both worlds.



  • MorrisS.
    replied
    Originally posted by xfcemint View Post

    That makes no sense since I have already said that Wayland developers are obviously under the influence of groupthink (Wikipedia) and confirmation bias (Wikipedia).

    My design is actually the proof that Wayland developers are under heavy influence of those two psychological phenomena: my design is so simple to "invent" (I did it just by talking to people here on the forum), that there can be no other reasons why they didn't already figure it out, except for the mentioned psychological phenomena (via a proof by exhaustion, Wikipedia).

    So the nonsense is that you are arguing about nothing.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    Sooo happy that makes me. Kisses and love.
    No, this is a case of you not reading.

    "Mouse movement velocity": xfcemint, with your frame-based thing, how are you going to get this number? In Wayland you have position and time. In X11 you have position and time. In macOS you have position and time....

    xfcemint, there is a reason why the input device side has always had the highest clock speeds, and it is a simple one: input, rendering, output. At the time of input you have to guess the future for the output, because of rendering latency. Guessing the future is where mouse velocity comes in.

    xfcemint, with your protocol idea, how are you going to handle a mouse or pen?

    Remember, you do need asynchronous parts to the protocol, because not everything needs to be synchronous.

    Look at the Arcan videos: not one of them demos using a drawing program like GIMP or Krita, or running an FPS game.

    Arcan still has some very serious limitations of its own to overcome in input handling, caused by not using a global time.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    I didn't say that the compositor should provide information on just a single mouse event per frame. The compositor can provide separate info on all 1000 mousemove events, all in a single frame. A good idea would be to use some coalescing, in order not to send 1000 data packets.

    So, the compositor first collects some events, then it coalesces them, and finally it can send those events to the clients as separate mousemove events, if the client requested so.

    Coalesce only up to a point. You have to remember that an application can be working on the next 2+ frames into the future. When an event happens, partly rendered frames can need changing; the sooner the application knows, the sooner it can do that.

    Originally posted by xfcemint View Post
    The compositor decides on the order. The compositor is the authority on time. The order is what the compositor says.
    You need to know what order input events happened in.<<

    I should have been more clear.

    You move a mouse; what the application gets is a few bits of data: location and time. Yes, a clock whose milliseconds/nanoseconds are close to the real world is kind of important, so that the speed of the mouse movement can be calculated.

    Computer mouse drivers used to write this up as mickeys per movement time. The mickey is named after Mickey Mouse, but it means the smallest measurable unit of movement of the computer mouse (wacky historic units for you). So the mouse would move so many mickeys in X and Y within the time frame defined by the polling Hz.

    So, xfcemint, now to mouse movement velocity; remember, lots of programs use this value for different things. On direct hardware this is simple: mickey distance * polling Hz gives you velocity in mickeys per second. To turn that into per-frame screen movement, divide mickeys per second by the output Hz. Yes, modern stuff has added DPI correction for mice and so on, but the base is the same.
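
    As a rough sketch of that arithmetic (the numbers are made-up example values, not from any real device):

        /* Rough sketch of the arithmetic described above: relative mouse counts
         * ("mickeys") per report, multiplied by the polling rate, give a velocity. */
        #include <stdio.h>

        int main(void)
        {
            double poll_hz   = 1000.0;  /* 1000 Hz mouse: one report every 1 ms */
            double dx        = 3.0;     /* mickeys moved in X during one report */
            double dy        = -1.0;    /* mickeys moved in Y during one report */
            double mouse_dpi = 800.0;   /* counts per inch, for unit conversion */
            double output_hz = 60.0;    /* display refresh rate                 */

            /* Velocity in mickeys per second: distance per report * reports/sec. */
            double vx = dx * poll_hz;
            double vy = dy * poll_hz;

            printf("velocity: %.0f,%.0f mickeys/s (%.2f,%.2f inches/s)\n",
                   vx, vy, vx / mouse_dpi, vy / mouse_dpi);
            /* Per displayed frame, that velocity corresponds to this much travel: */
            printf("per frame: %.1f,%.1f mickeys\n", vx / output_hz, vy / output_hz);
            return 0;
        }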

    xfcemint, you are saying you got rid of the clock; where is your replacement for input? How do you get the velocity? Think about it: applications like Krita and games need to be able to work out velocity, because they produce different output based on the velocity of the mouse/pen movement.

    You need to know the order the input happens in, when it happens and what the input is, and you do need velocity somehow. Then there is a stack of math on top of that.

    Input has historically had the highest clock rates.


  • Sho_
    replied
    Originally posted by xfcemint View Post
    In the end, my opinion is that I have clearly won the argument, extremely conclusively. In conclusion, the design of Wayland is sub-par, and needs to be changed.
    This doesn't follow from any of your reasoning :-) For someone quick to link out to logical fallacies on Wikipedia, it's a stunning final leap.

    It's quite rude to accuse me of "attempting to misrepresent your posts", and it makes me realize that you discuss mainly on combative and bad-faith terms. Given that you've shown yourself to be uninformed and guessing for much of the thread (and often guessing wrong, for that matter), I'd urge you to be more investigative and curious rather than hasty and proselytizing. You will learn more.

    I regret the amount of time wasted. Back to making software useful for others!



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    OK, I'll try to answer. I have already glanced over the text, and I think that your argument is not very good.

    About your remark above: GPU timestamps (and other hardware timestamps) are irrelevant, since the compositor is the main authority on time. Time is virtualized, there are only frame IDs. Each frame can take multiple minutes, if that's required for debugging.

    That's the answer to irrelevant remark no. 1 from you..
    Not an irrelevant remark; your answer here shows the mistake. Note that you said there are only frame IDs and that a frame can take multiple minutes. A computer mouse has a 1 ms response time; you know, those 1000 Hz mice. That is 1000 location updates per second. Some pen input devices are even faster.

    You need to know what order input events happened in. Frame IDs are not going to work: even a 360 Hz monitor gives you a frame rate too low to correctly record input, so you need a faster clock than that.

    Hardware timestamps are a little more important than you would like. You need to know when image X was displayed on screen relative to input Y, so that when a user clicks on target Z that was displayed to them at that time, they don't trigger action A because the window was moved after what was displayed on screen to them.

    There is a need to manage the different latency issues with input.


    Originally posted by xfcemint View Post
    Compositor is synchronized, not single-threaded. The easiest implementation of the compositor is a single-threaded one.

    There would be some opportunities for multi-threading in the compositor, but that is largely irrelevant because the compositor doesn't require much CPU time. The clients are the ones that require CPU time and multi-threading.

    So, the compositor can be single-threaded, and that would be just fine.

    That's the answer to irrelevant remark no. 2 from you.
    https://en.wikipedia.org/wiki/Amdahl...AmdahlsLaw.svg

    No, this is you just straight up ignoring Amdahl's Law and making a key mistake. As the number of cores increases, so does the effect of single-threaded sections. Note that your compositor may not use much CPU time, but the worst part is that a compositor at times will be driving the GPU, which has thousands of cores; a single-threaded CPU path that doesn't feed the GPU information fast enough stalls out GPU performance.

    The reality is that the number of cores in CPUs and GPUs is growing at such a rate that what used to be minor overheads are becoming quite large.
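
    (For reference, Amdahl's Law caps the speedup at S(N) = 1 / ((1 - p) + p/N) for a parallel fraction p on N cores; even with 95 percent of the work parallelizable, the speedup can never exceed 20x no matter how many cores are added, which is why the single-threaded sections matter more and more as core counts grow.)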

    The compositor does not work in isolation. This is also why the hardware clocks are important.

    Originally posted by xfcemint View Post
    Yep, the compositor is not going to wait for the background app. This doesn't affect other clients, because each client has a separate connection and a separate event queue.

    That's the answer. Your question is not completely irrelevant, but it is a minor issue, easily solved.

    That global hotkey one was a trap, and you walked straight into it. So you are not going to wait and make sure that the client is not dead? Remember, if the client holding a global shortcut is dead, nothing else can use that shortcut. Yes, dbus sent a message to the application that it has a global shortcut; this has to be monitored so that a problem can be detected.

    Do you want the primary compositor having to perform watchdog tasks? If the compositor is wasting its processing time on watchdog tasks, that is going to affect overall performance.

    You just presumed this had a simple solution.

    Originally posted by xfcemint View Post
    The problem here is that you are mixing up event time and realtime timestamps. Those two can easily be separated. Event time is completely virtualized.

    Realtime timestamps can be:
    - virtualized: i.e. elapsed time since client connected, but time does not flow while paused for debugging.
    - true time: These are problematic and should be avoided. They represent global state. But this has nothing to do with my protocol design, the problem of true time is unavoidable. One of the methods is to use some jitter on true time, to avoid global state. For example: the granularity of true time is approx.1 frame = 16 ms, with some jitter.

    Completely irrelevant; That's your irrelevant remark no. 3.
    No, this is the problem with the point you marked as irrelevant remark no. 1. 1 frame = 16 ms is not going to work, because that is too slow. With multiple monitors connected, the output time can in fact differ between them. Look up the Nvidia Reflex Latency Analyzer some time; yes, expect future monitors to have feedback to the computer on exactly how much latency there is.

    Graphics frame rate is one of the slower things the compositor has to deal with. Slower items like the keyboard are what people commonly think about; the fast items a compositor has to deal with are modern mice and drawing tablets. Of course, mice and drawing tablets can all be operating on different clocks. I remember when 100 Hz mice and slower were normal. Lots of things have gotten faster over the years on the input side.

    The Linux kernel has chosen CLOCK_MONOTONIC, and it is fast enough for all hardware input and output. Arcan has chosen a different route here: Arcan does not bind everything to the frame rate. Like it or not, the frame rate needs to be relative to something else, something faster.
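
    A minimal sketch of what that clock looks like in practice (standard POSIX calls, nothing compositor-specific); it just prints the clock's resolution and a current timestamp:

        /* Minimal sketch: CLOCK_MONOTONIC gives nanosecond-granularity,
         * steadily increasing timestamps, so 1000 Hz input reports (1 ms
         * apart) keep distinct, ordered times, while a 60 Hz frame counter
         * (~16 ms per tick) would lump ~16 of those reports together. */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            struct timespec res, now;

            clock_getres(CLOCK_MONOTONIC, &res);   /* clock resolution  */
            clock_gettime(CLOCK_MONOTONIC, &now);  /* current timestamp */

            printf("resolution: %ld ns\n", res.tv_nsec);
            printf("now: %lld.%09ld s since an arbitrary start point\n",
                   (long long)now.tv_sec, now.tv_nsec);
            return 0;
        }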

    Virtualized time is a problem even with CLOCK_MONOTONIC, because when that clock stops, all means of feeding in input also have to stop. Pausing the clock while debugging and queuing up inputs can result in some very horrible outcomes. So you have to get from the hardware timestamps to whatever you are using.

    xfcemint, sorry, you got zero right.
    Last edited by oiaohm; 20 December 2022, 12:57 AM.



  • billyswong
    replied
    The discussion has dragged on for so long that I no longer know which post I should quote to respond to xfcemint.

    There is stuff I dislike about Wayland, but a separation of sync and async channels is not part of it. For computing efficiency, the trend in the 21st century has been to put as many things into async mode as possible. With multi-core CPUs becoming the norm on every computing device, however low-power, legacy systems that depend on synchrony are lamented. Here on Phoronix one can see periodic news about how Python is dragging a little bit more of itself out of this technical debt, or how the Linux kernel and userspace programs expand their use of async_io.

    Asynchronous processing and input/output operations are not just for the "high-priority" case; they are for anything that doesn't absolutely require synchrony, because synchrony requires a global lock, or at least the bottleneck of a single thread. The use of async is not as widespread as it should be only because the dev tools haven't caught up from the single-core computer legacy, which makes async programming harder or less familiar to write than traditional programming. That's it.
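
    As a small illustration of that pattern (a toy sketch, with two pipes standing in for clients), one event loop can watch many descriptors and only ever act on whichever is ready, instead of blocking on any single one:

        /* Toy sketch of the async pattern: one epoll loop watching several
         * file descriptors, never blocking on any single one. The two pipes
         * stand in for "clients"; a real compositor or server would register
         * sockets, timers, device fds, and so on. */
        #include <stdio.h>
        #include <sys/epoll.h>
        #include <unistd.h>

        int main(void)
        {
            int a[2], b[2];
            if (pipe(a) < 0 || pipe(b) < 0)
                return 1;

            int ep = epoll_create1(0);
            struct epoll_event ev = { .events = EPOLLIN };

            ev.data.fd = a[0];
            epoll_ctl(ep, EPOLL_CTL_ADD, a[0], &ev);
            ev.data.fd = b[0];
            epoll_ctl(ep, EPOLL_CTL_ADD, b[0], &ev);

            /* Pretend one "client" sent something; the other stays silent. */
            write(a[1], "hello", 5);

            struct epoll_event ready[8];
            int n = epoll_wait(ep, ready, 8, 100); /* wakes only for ready fds */
            for (int i = 0; i < n; i++) {
                char buf[64];
                ssize_t len = read(ready[i].data.fd, buf, sizeof buf);
                printf("fd %d had %zd bytes ready\n", ready[i].data.fd, len);
            }
            return 0;
        }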

