Wayland Network Transparency Patches Published

  • timofonic
    replied
    Originally posted by Zan Lynx View Post
    Pushing the protocol over TCP/IP seems like the wrong idea. I'm pretty sure the Wayland developers agree.


    Originally posted by microcode View Post
    This would mean that some GL calls would need to block for a round trip or maybe even more. I don't see this working out well unless there's also a local GL state machine; even then you still would need to send the textures and geometry and everything. This just sounds like a new kind of hell.
    Why?

    If Wayland and Linux want to survive, I think they must implement distributed/decentralized computing:

    People have machines everywhere.
    - It could be very interesting to be able to run the same app while using the best available resources, with some degree of configurability.
    - What about caching the data? That way, the usual stream of data would be a lot smaller.


    Originally posted by jabl View Post
    This is, AFAIU, the approach used by X (AIGLX etc.), but it doesn't work that well. You're replacing an extremely high bandwidth and low latency PCIe link to the GPU with a network, meaning that you'll ship the GPU command stream, textures etc. over the net. And as applications are developed with the expectation of a very fast connection to a local GPU, the user experience will suck.

    Better, and simpler, to do the rendering on the remote end and just ship a (compressed) stream of bitmaps over the network. Which is how RDP, SPICE, VNC, and AFAICS, the Wayland remoting, works.
    Bitmaps? What about vectors too? What about videos? Hmm, that seems messy to me :/

    Originally posted by philcostin View Post
    Surely the most sensible thing to do would be to specify an encapsulation format, standardized by freedesktop.org, which can be used by the likes of Cairo and Qt for sending compressed draw calls across the network between wayland compositors. So, not something directly related to wayland per se, but a means for transmitting the draw calls themselves over a common pipe. That might include images (worst case) - but normally not.


    An interface to this protocol would be implemented by Cairo and by Qt for both drawing out to, and receiving draw calls from the pipe. The compositor running on the machine displaying the window would then call upon the specific module (Qt / Cairo / etc) to draw locally into the buffer.

    This would reduce network overhead for the common cases - but there would still be the issue of version mismatch of Qt and Cairo on server / client machines.
    Originally posted by Zan Lynx View Post
    I think the actual plan for network transparency was always to have the Wayland compositor handle it and have nothing to do with the Wayland protocol. Use VNC or a MP4 / HEVC full screen video encode.
    As for using another app/subsystem and not making it part of the Wayland family: no, please. Xorg has always been annoying; this would make it worse.

    - Even though SSH is lighter: an independent developer outside these big foundations made a great project named MOSH, which is far more resistant to awfully bad connections, reconnecting when you get disconnected or change network connections. Because he doesn't have a big brand behind it and it maybe needs some official polishing, it's not officially integrated.
    * Yet another loosely integrated part that would make Xorg look like the better idea? I don't agree.

    It's okay that some things need to be improved in certain projects, but I think sending "photos" is a thing of the past. It's funny how graphics cards already handle remote video game streaming over Steam and such. Nvidia does something different, not sure what.

    I see it the other way around: if Wayland doesn't work over TCP/IP, improve it.



  • philcostin
    replied
    Surely the most sensible thing to do would be to specify an encapsulation format, standardized by freedesktop.org, which can be used by the likes of Cairo and Qt for sending compressed draw calls across the network between wayland compositors. So, not something directly related to wayland per se, but a means for transmitting the draw calls themselves over a common pipe. That might include images (worst case) - but normally not.


    An interface to this protocol would be implemented by Cairo and by Qt for both drawing out to, and receiving draw calls from the pipe. The compositor running on the machine displaying the window would then call upon the specific module (Qt / Cairo / etc) to draw locally into the buffer.

    This would reduce network overhead for the common cases - but there would still be the issue of version mismatch of Qt and Cairo on server / client machines.
    Last edited by philcostin; 10 February 2016, 08:26 PM.
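
    For illustration, here is a rough sketch of what such an encapsulation might look like on the wire. Everything in it (the draw_op struct, the opcodes, the send_op helper) is made up for the example; no such freedesktop.org format exists today.

    /* Hypothetical wire format for forwarding toolkit draw calls between
     * compositors; it only illustrates serializing draw ops instead of pixels. */
    #include <stdint.h>
    #include <unistd.h>

    enum draw_opcode {            /* hypothetical opcodes */
        OP_FILL_RECT  = 1,
        OP_DRAW_TEXT  = 2,
        OP_BLIT_IMAGE = 3,        /* worst case: raw pixels follow as payload */
    };

    struct draw_op {              /* fixed-size header, payload follows */
        uint32_t opcode;
        uint32_t payload_len;     /* bytes of payload after the header */
        int32_t  x, y, w, h;      /* target rectangle in surface coordinates */
    };

    /* Write one op plus its payload to the pipe/socket between compositors.
     * Returns 0 on success, -1 on a failed or short write. */
    static int send_op(int fd, const struct draw_op *op, const void *payload)
    {
        if (write(fd, op, sizeof(*op)) != (ssize_t)sizeof(*op))
            return -1;
        if (op->payload_len &&
            write(fd, payload, op->payload_len) != (ssize_t)op->payload_len)
            return -1;
        return 0;
    }

    The receiving compositor would read the header, dispatch on the opcode, and ask its local Cairo or Qt to replay the operation into the window's buffer, which is exactly where the toolkit version mismatch mentioned above would bite.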



  • Zan Lynx
    replied
    Pushing the protocol over TCP/IP seems like the wrong idea. I'm pretty sure the Wayland developers agree.

    I think the actual plan for network transparency was always to have the Wayland compositor handle it and have nothing to do with the Wayland protocol. Use VNC or a MP4 / HEVC full screen video encode.



  • microcode
    replied
    Originally posted by newwen View Post

    I think this is currently impossible, as the Wayland client will render using its local GPU (if any) and then send the buffer containing the bitmap to the Wayland server (your machine). I don't know whether, with some EGL magic, the client could render using the Wayland server machine's GPU (your machine).

    This would mean that some GL calls would need to block for a round trip or maybe even more. I don't see this working out well unless there's also a local GL state machine; even then you still would need to send the textures and geometry and everything. This just sounds like a new kind of hell.
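
    For a concrete illustration of the round-trip problem, here are a few stock OpenGL calls whose results the caller has to wait for; forwarded over a network, each one costs at least a full round trip (plain GL, nothing Wayland-specific):

    /* Synchronous GL calls like these can't be fire-and-forget: the caller
     * needs the answer before it can continue, so a forwarded command
     * stream turns each of them into at least one network round trip. */
    #include <GL/gl.h>

    void readback_example(int width, int height, unsigned char *pixels)
    {
        GLint max_tex_size;

        /* Query: the value lives on the GPU/server side. */
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_tex_size);

        /* Readback: blocks until prior rendering has finished and the
         * framebuffer contents have travelled back to the caller. */
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        /* Explicit sync: returns only after the GPU has drained its queue. */
        glFinish();
    }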



  • newwen
    replied
    Originally posted by timofonic View Post
    Four scenarios I'm sure they'll consider:

    - Powerful GPU on the client (sorry, I get confused by X naming things backwards): you connect to the server and want to do the rendering on your machine, because your GPU is powerful enough.
    I think this is currently impossible, as the Wayland client will render using its local GPU (if any) and then send the buffer containing the bitmap to the Wayland server (your machine). I don't know whether, with some EGL magic, the client could render using the Wayland server machine's GPU (your machine).
    Last edited by newwen; 10 February 2016, 06:08 AM.
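
    That is essentially how the core protocol already behaves: the client draws into a buffer it owns (wl_shm or a GPU-backed buffer) and only tells the compositor to pick up the finished pixels. A minimal libwayland-client fragment, assuming surface and buffer were already created via wl_compositor and wl_shm:

    /* Core Wayland buffer handoff: the client has already rendered into
     * "buffer"; the compositor only ever sees finished pixels, never the
     * drawing commands that produced them. */
    #include <wayland-client.h>

    void present_frame(struct wl_surface *surface, struct wl_buffer *buffer,
                       int width, int height)
    {
        wl_surface_attach(surface, buffer, 0, 0);        /* hand over the pixels */
        wl_surface_damage(surface, 0, 0, width, height); /* whole buffer changed */
        wl_surface_commit(surface);                      /* make it the current content */
    }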



  • jabl
    replied
    Originally posted by timofonic View Post
    Four scenarios I'm sure they'll consider:

    - Powerful GPU on the client (sorry, I get confused by X naming things backwards): you connect to the server and want to do the rendering on your machine, because your GPU is powerful enough.
    This is, AFAIU, the approach used by X (AIGLX etc.), but it doesn't work that well. You're replacing an extremely high bandwidth and low latency PCIe link to the GPU with a network, meaning that you'll ship the GPU command stream, textures etc. over the net. And as applications are developed with the expectation of a very fast connection to a local GPU, the user experience will suck.

    Better, and simpler, to do the rendering on the remote end and just ship a (compressed) stream of bitmaps over the network. Which is how RDP, SPICE, VNC, and AFAICS, the Wayland remoting, works.
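
    For illustration, a rough sketch of that pixel-streaming loop; grab_frame() and compress_frame() are placeholders for whatever the compositor and codec actually provide, not real APIs:

    /* Render on the remote box, then push length-prefixed compressed frames
     * down a socket, roughly what RDP/SPICE/VNC-style remoting boils down to. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical helpers, not a real API. */
    extern size_t grab_frame(uint8_t *rgba, size_t cap);            /* latest rendered frame */
    extern size_t compress_frame(const uint8_t *in, size_t in_len,
                                 uint8_t *out, size_t out_cap);     /* e.g. JPEG/H.264 */

    void stream_frames(int sock, size_t frame_cap)
    {
        uint8_t *raw = malloc(frame_cap);
        uint8_t *enc = malloc(frame_cap);

        for (;;) {
            size_t raw_len = grab_frame(raw, frame_cap);
            size_t enc_len = compress_frame(raw, raw_len, enc, frame_cap);
            uint32_t hdr = (uint32_t)enc_len;           /* simple length prefix */

            if (write(sock, &hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr) ||
                write(sock, enc, enc_len) != (ssize_t)enc_len)
                break;                                  /* peer went away */
        }

        free(raw);
        free(enc);
    }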



  • microcode
    replied
    Originally posted by M@yeulC View Post
    I hope they at least make it optional. I was always told that this would not be part of Wayland's core features, and I would like to be able to strip this feature out where it's not needed without breaking things.

    This doesn't involve any additional protocol, so it's entirely up to the compositor whether to support it.



  • M@yeulC
    replied
    I hope they at least make it optional. I was always told that this would not be part of Wayland's core features, and I would like to be able to strip this feature out where it's not needed without breaking things.



  • timofonic
    replied
    Four scenarios I'm sure they'll consider:

    - Powerful GPU on the client (sorry, I get confused by X naming things backwards): you connect to the server and want to do the rendering on your machine, because your GPU is powerful enough.
    - Weak GPU on the client (thin client): you have a big machine with a powerful GPU or a number of high-end GPUs; you want to do the 3D rendering on the server and then stream the results to the client.
    * Useful for:
    ** Graphics-demanding apps: games, CAD/CAM, 3D modelling, etc.
    - Mixed approach: you want to get the best of both GPUs without making performance worse, considering things like the latency and bandwidth of the network.
    - Virtual Machines (QEMU/KVM, Xen...): Is it possible to integrate SPICE or provide an equivalent technology?

    This can be a lot more useful than some people may think. It can be used in companies, gaming competitions and even public terminals at places like universities or libraries.

    Of course, using the latest technologies and GPU features to make it smoother and more efficient would be a big plus.
    Last edited by timofonic; 09 February 2016, 07:22 PM.



  • cynic
    replied
    that's great! give me this plus primary selection and I can throw X out of the window right now (no pun intended)

