Originally posted by mSparks
In both cases, neither the application nor the display server is aware that they are not talking to each other directly but via a proxy.
In both cases the tunnel end points work with a local unix domain socket: on the application side the tunnel pretends to be the display server by creating the unix domain socket. On the display server side the tunnel pretends to be the application by connecting to the display server's socket.
All messages that do not have associated file descriptors can be passed through either tunnel unchanged.
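A minimal pass-through sketch of such a tunnel, in Python. The socket paths and the single-client handling are illustrative only; real display-server sockets live under `/tmp/.X11-unix/` (X11) or `$XDG_RUNTIME_DIR` (Wayland), and this sketch relays raw bytes without touching file descriptors:

```python
import os
import socket
import threading

# Illustrative socket paths -- real display-server sockets live under
# /tmp/.X11-unix/X0 (X11) or $XDG_RUNTIME_DIR/wayland-0 (Wayland).
APP_SIDE = "/tmp/demo-tunnel-app.sock"     # the tunnel's fake "display server"
SERVER_SIDE = "/tmp/demo-tunnel-srv.sock"  # the real display server's socket

def pump(src, dst):
    """Forward raw bytes in one direction until the source closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def tunnel():
    """Relay one client; messages without fds pass through unchanged."""
    # Application side: pretend to be the display server by creating
    # a unix domain socket for the application to connect to.
    if os.path.exists(APP_SIDE):
        os.unlink(APP_SIDE)
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind(APP_SIDE)
    listener.listen(1)
    app, _ = listener.accept()
    # Display-server side: pretend to be the application by connecting
    # to the display server's socket.
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.connect(SERVER_SIDE)
    # Shuttle bytes in both directions.
    threading.Thread(target=pump, args=(app, server), daemon=True).start()
    pump(server, app)
```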
The main difference between the two tunnels is how they deal with messages that do carry file descriptors.
The X11 tunnel filters all extensions that would require such messages, letting the application deal with falling back to data serialization.
The Wayland tunnel accepts the file descriptor, serializes the data across the network, and recreates a compatible file descriptor on the other side.
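The fd hand-off that the Wayland tunnel has to replace can be seen with plain unix domain sockets. A sketch using Python's `socket.send_fds`/`recv_fds` (Python 3.9+, SCM_RIGHTS underneath); the temporary file stands in for a shared render buffer, and the point is that only a descriptor crosses the socket, yet the receiver ends up with a working handle to the same buffer:

```python
import os
import socket
import tempfile

# Two connected unix domain sockets stand in for the
# application <-> display-server connection.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# A stand-in for a shared render buffer.
buf = tempfile.TemporaryFile()
buf.write(b"pixels")
buf.flush()

# Pass the descriptor itself as ancillary data (SCM_RIGHTS),
# not the buffer contents.
socket.send_fds(a, [b"msg"], [buf.fileno()])
msg, fds, _, _ = socket.recv_fds(b, 1024, 1)

# The receiver gets a new fd number that refers to the same
# open file description -- no image data crossed the socket.
received = os.fdopen(fds[0], "rb")
received.seek(0)
assert received.read() == b"pixels"
```

This kernel-side duplication only works between processes on the same machine, which is exactly why a network tunnel must fall back to serializing the buffer contents and recreating a compatible descriptor on the far side.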
Originally posted by mSparks
It is, after all, the system's interface for such render buffers.
Originally posted by mSparks
When the application and display server communicate via a unix domain socket, they can pass handles to buffers instead of the data inside the buffers.
These handles are unix file descriptors: plain 32-bit integer values, i.e. 4 bytes each.
A 4K frame (3840x2160) with four 8-bit color channels is roughly 33 MB.
It is much faster to transmit a 4-byte handle than roughly 8 million times as much image data.
This is an optimization X11 has employed for decades via various extensions, and one that became a core concept of the Wayland specification.
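The arithmetic spelled out, as a quick back-of-the-envelope check (assuming a 3840x2160 UHD frame with 4 bytes per pixel):

```python
# One 4K RGBA frame versus one 4-byte file-descriptor handle.
width, height, bytes_per_pixel = 3840, 2160, 4
frame_bytes = width * height * bytes_per_pixel   # 33_177_600 bytes, ~33 MB
handle_bytes = 4                                 # a single file descriptor
ratio = frame_bytes // handle_bytes              # ~8.3 million times smaller
```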