GNOME 45.rc Brings GDM Wayland Multi-Seat, More libadwaita Adoption
Remote login has missed the feature cutoff and should be added for GNOME 46.
Yep. I really want to be able to tell GDM or whatever to listen on 3389 for incoming sessions, then serve locally accelerated Wayland/GNOME sessions from there. Right now I'm using xorgxrdp, and it's sub-optimal.
Being able to use RDP clients is great, and so is being able to wire up Linux boxes for remote sessions using the same Remote Desktop Gateways that Windows systems use.
Speaking of, a Linux Remote Access Gateway that handled SSH, X11-over-SSH, VNC, and RDP would be awesome. I just want one place to pipe things through with good logging and auditable encryption.
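At its core, that kind of gateway is just a logging TCP forwarder with per-protocol backends. A minimal sketch of the idea in Python (the backend hostnames, ports, and the `ROUTES` table are all made-up placeholders, and a real deployment would add TLS termination and authentication on top):

```python
import logging
import socket
import threading

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Listen-port -> backend mapping; hosts and ports here are illustrative.
ROUTES = {
    2222: ("ssh-host.internal", 22),    # SSH (and X11-over-SSH)
    5901: ("vnc-host.internal", 5900),  # VNC
    3389: ("rdp-host.internal", 3389),  # RDP
}

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source closes."""
    try:
        while chunk := src.recv(65536):
            dst.sendall(chunk)
    except OSError:
        pass

def handle(client: socket.socket, backend: tuple[str, int]) -> None:
    # This log line is the audit trail: who connected, and to what.
    logging.info("session %s -> %s", client.getpeername(), backend)
    upstream = socket.create_connection(backend)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

def serve(port: int) -> None:
    srv = socket.create_server(("0.0.0.0", port))
    while True:
        client, _ = srv.accept()
        threading.Thread(target=handle, args=(client, ROUTES[port]), daemon=True).start()
```

One listener per protocol, one place where every session gets logged; the protocols themselves stay untouched, which is what makes a single chokepoint feasible.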
Does 'Wayland multi-seat' mean I can build a GNOME 'terminal server' that accelerates graphics locally and shoots the VNC or RDP over the network? I feel like that's a pretty basic thing that servers should do that's been hard to accomplish under Linux. Sure, X11 could do it, but it was awful to push modern screens across the network compared to Windows' RDP.
It is a step toward having first-class support for such things.
Triple buffering is going to make things less jittery (smaller frame-time-variance) for some at the expense of potentially increased input-latency, due to the few extra buffered frames. Some may notice the jitter more, others will suffer from the input latency more, and others such as yourself will not see a difference.
Note, the triple buffering patch mentioned previously does NOT add a permanent third scanout buffer and the requisite frame of latency.
Its intent is to dynamically add a buffer when excess GPU wait time causes GNOME's normal double buffering to miss a page flip, particularly when the GPU is in a low-power state. Once the GPU "wakes up" and is back on time, it goes back to double buffering.
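The dynamic behaviour described above can be sketched as a toy model (this is not mutter's actual code; the names `missed_flip`, `MIN_BUFFERS`, and `MAX_BUFFERS` are invented for illustration):

```python
MIN_BUFFERS = 2  # normal double buffering
MAX_BUFFERS = 3  # one extra in-flight frame while the GPU lags

class SwapchainModel:
    """Toy model of dynamic triple buffering: double-buffer by default,
    allow a third buffer only after a missed page flip, drop it again
    once the GPU is back on schedule."""

    def __init__(self):
        self.buffers = MIN_BUFFERS

    def on_frame(self, missed_flip: bool) -> int:
        """Return how many buffers to use for the next frame."""
        if missed_flip:
            # GPU wasn't ready in time (e.g. still ramping out of a low
            # power state): queue one extra frame to keep the display fed.
            self.buffers = MAX_BUFFERS
        else:
            # Back on time: shed the extra buffer and its frame of latency.
            self.buffers = MIN_BUFFERS
        return self.buffers
```

The key property is that the third buffer, and therefore the extra frame of input latency, only exists while flips are actually being missed.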
X11 over the network is utter crap compared to any other solution, and it has been like this forever. Whenever there is some latency or the slightest bandwidth issue, X11 over IP is a shitshow. I still often use X2Go for my tasks, which is based on NoMachine NX. They stripped down the X11 protocol and added compression and some other stuff, which makes desktop tasks a much nicer experience while still using free software. It should fit your use case too, I guess, so maybe take a look.
That's also not really a future-proof solution, like anything involving X11, but for the time being it's less pain.
So what kind of multi-seat is this? Zaphod-style multi-head, with multiple separate sessions on a single GPU (what I would want), or only with multiple GPUs (what I expect, given the systems integration involved), or Wayland seats with multiple cursors? Or a combination?
I guess separate login screens for different GPUs or VNC sessions. Sharing the heads of the same GPU requires more effort.
Anything dealing with input latency, frame-time variance, refresh rates, and screen tearing is going to be HIGHLY subjective. Personally, I'm extremely sensitive to lower refresh rates and input latency. I'm not one of those genetic anomalies with reflexes able to take advantage of a 300 Hz display, but around 120 Hz is a noticeably better experience for me on a display with proper gray-to-gray response times.
Noticing the improvement from 120 Hz over 60 Hz is not subjective; it's a fact. Anyone with normally functioning eyesight would notice it even on smaller displays, let alone on a huge screen. I would even call 60 Hz jittery under any conditions, so the benefit from triple buffering would probably be minuscule.
Sorry, I'm too lazy to dig, but I'd like to know what kind of metrics the triple-buffering advocates used to measure the benefit, because simply expecting to see a change usually leads the mind to fool the eye into seeing one even when there isn't. Proving the benefit needs more than just "I think I saw it, so it must be real".
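For what it's worth, the usual objective metric for this kind of thing is frame-time variance: record presentation timestamps and compare the spread of frame intervals with and without the patch, rather than eyeballing it. A rough sketch of such a measurement (the sample timestamps below are invented, not real benchmark data):

```python
import statistics

def frame_time_stats(timestamps_ms: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of intervals between presented frames."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return statistics.mean(intervals), statistics.stdev(intervals)

# Invented example: a smooth 60 Hz stream vs. one with a single stutter
# where a frame took two vblanks (~33 ms) to appear.
smooth  = [0.0, 16.7, 33.4, 50.1, 66.8]
stutter = [0.0, 16.7, 33.4, 66.8, 83.5]

mean_s, dev_s = frame_time_stats(smooth)
mean_j, dev_j = frame_time_stats(stutter)
```

A lower standard deviation of frame intervals at the same mean frame rate is exactly the "less jittery" claim made for triple buffering, stated in a form you can actually measure.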
You mean triple buffering? I have a discrete AMD GPU and tried it on Debian 12 which supposedly ships it. I've noticed literally zero difference between having this patch and not having it.
...
Maybe, like others were saying, it brings a huge difference only on Intel iGPUs or other low-end integrated graphics, or am I missing something?