Wayland's Weston Gets A Remoting Plugin For Virtual Output Streaming


  • StefanBruens
    replied
    Originally posted by kreijack View Post

    I don't think that RDP computes the delta on the basis of the old content: comparing new vs old would be very inefficient. For example, X11 has an extension (XDamage) to track the updated regions in order to compute the delta. I suppose the other OSes have similar APIs.
    In the X11 protocol there is the notion of a clipping area: the data transmitted is already only the changed part.
    Quoting from https://docs.microsoft.com/en-us/win...sktop-protocol
    Bandwidth reduction features
    RDP supports various mechanisms to reduce the amount of data transmitted over a network connection. Mechanisms include data compression, persistent caching of bitmaps, and caching of glyphs and fragments in RAM. The persistent bitmap cache can provide a substantial improvement in performance over low-bandwidth connections, especially when running applications that make extensive use of large bitmaps.
    and from https://msdn.microsoft.com/en-us/library/hh880930.aspx, as just one example of delta compression:

    The RemoteFX Progressive Codec extends the RemoteFX Codec ([MS-RDPRFX] sections 2.2.2 and 3.1.8) by adding sub-band diffing and the ability to progressively encode an image. Sub-band diffing is a compression technique that entails transmitting the differences between the DWT coefficients of consecutive frames, while progressive encoding involves the transmission of low-quality images that are gradually refined and improved in quality.
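    As a rough sketch of what sub-band diffing boils down to (a toy illustration in C only, not the actual RemoteFX wire format; the coefficient layout and the emit callback are made up for the example), only coefficients that changed since the previous frame get transmitted:

        #include <stddef.h>
        #include <stdint.h>

        /* Toy illustration of coefficient diffing: emit only the DWT
         * coefficients that changed since the previous frame. The 'emit'
         * callback stands in for whatever entropy coder / wire format a
         * real codec would use. */
        typedef void (*emit_fn)(size_t index, int16_t delta, void *ctx);

        static size_t diff_coefficients(const int16_t *prev, const int16_t *cur,
                                        size_t count, emit_fn emit, void *ctx)
        {
            size_t emitted = 0;
            for (size_t i = 0; i < count; i++) {
                int16_t delta = (int16_t)(cur[i] - prev[i]);
                if (delta != 0) {          /* unchanged coefficients cost nothing */
                    emit(i, delta, ctx);
                    emitted++;
                }
            }
            return emitted;                /* number of (index, delta) pairs sent */
        }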

    Since RDP 10, it can use fullscreen h.264/AVC 444 (no chroma subsampling) for all content - h.264 uses inter-frame compression, save for the occasional I-frame.

    Anyway, I don't think that comparing X11 and RDP makes any sense; they are two different beasts.
    X11 has a lot of concepts that are simply missing (because not needed) in RDP, like
    - window hierarchy
    - buffers and ownership of resources
    - the ability to compose different windows coming from different processes (e.g. transparency)

    RDP is a lot simpler: get an update region, then send it over the wire...
    You are correct in asserting that a direct protocol comparison between X11 and RDP is pointless. But as X11 is often claimed to be network transparent (which is true to some degree) and thus good for remoting (which is barely true even on a LAN), it makes sense to compare the remoting capabilities of both.

    Note, RDP (it has more than 30 extensions - https://msdn.microsoft.com/en-us/library/jj712081.aspx, [MS-RDP*]) includes:



  • kreijack
    replied
    Originally posted by StefanBruens View Post

    Intelligence of X11:
    1. draw this rasterized bitmap at position x/y.
    2. have you finished drawing the previous bitmap?
    3. ok, here is the next bitmap (go back to 1., repeat for every changed screen area)
    X11 can do more, but nobody is interested in drawing solid rectangles and non-antialiased lines and text ...

    RDP can do a lot more (including extra channels for e.g. device/audio/... forwarding, but let's concentrate on the graphics part):
    1. it sends changed areas of the screen to the client
       1. it uses caching/deltas based on the old content
    I don't think that RDP computes the delta on the basis of the old content: comparing new vs old would be very inefficient. For example, X11 has an extension (XDamage) to track the updated regions in order to compute the delta. I suppose the other OSes have similar APIs.
    In the X11 protocol there is the notion of a clipping area: the data transmitted is already only the changed part.
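    For reference, a minimal sketch of how a remoting server can use the DAMAGE extension to be told which regions changed, instead of diffing whole frames (error handling and actual screen capture omitted; this only prints the damaged rectangles):

        /* Build with: cc xdamage.c -lX11 -lXdamage */
        #include <stdio.h>
        #include <X11/Xlib.h>
        #include <X11/extensions/Xdamage.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            int ev_base, err_base;
            if (!dpy || !XDamageQueryExtension(dpy, &ev_base, &err_base))
                return 1;

            Window win = DefaultRootWindow(dpy);   /* track the whole screen here */
            Damage damage = XDamageCreate(dpy, win, XDamageReportNonEmpty);

            for (;;) {
                XEvent ev;
                XNextEvent(dpy, &ev);
                if (ev.type == ev_base + XDamageNotify) {
                    XDamageNotifyEvent *d = (XDamageNotifyEvent *)&ev;
                    /* d->area is a bounding box of what changed; a remoting
                     * server would read back and transmit just this rectangle. */
                    printf("damage: %dx%d+%d+%d\n",
                           d->area.width, d->area.height, d->area.x, d->area.y);
                    /* acknowledge the damage so new damage keeps being reported */
                    XDamageSubtract(dpy, damage, None, None);
                }
            }
        }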

    Anyway, I don't think that comparing X11 and RDP makes any sense; they are two different beasts.
    X11 has a lot of concepts that are simply missing (because not needed) in RDP, like
    - window hierarchy
    - buffers and ownership of resources
    - the ability to compose different windows coming from different processes (e.g. transparency)

    RDP is a lot simpler: get an update region, then send it over the wire...




  • starshipeleven
    replied
    Originally posted by kpedersen View Post
    That is a little bit of a naïve view.
    It is the only way forward. You won't convince anyone with "I need a 'smart protocol' so I can keep old crap running for software preservation purposes". For new stuff you need to push true solutions, not desperate hacks.

    VirtualBox and co will not support Windows XP (and the virtual GPU driver) for long once the remaining users on that platform dwindle. Then it will only be Windows Vista and above that we can correctly emulate. That is not a valid solution for digital preservation.
    Sorry, what? The issue with Win2000 and before is that no such driver was ever made, not that they dropped support.
    I don't see why they should drop support; the driver won't need any maintenance as it runs on an unmaintained OS, and they already have to keep their host-guest interfaces backwards-compatible anyway.

    They can drop host support for XP, sure, but that's not relevant.

    Also, I don't think it is a hack; RDP is one of the principal ways of handling things like USB passthrough and larger resolutions in Hyper-V.
    Does not make it any less of a hack.

    As it stands, it is also the only way to utilize very old platforms such as Windows NT 4.0 (Hydra) and even older (WinFrame). You might not currently care about maintaining older platforms, but that doesn't mean we should just let them become inaccessible.
    That's because at the time they were expected to fade into obscurity once obsolete. And guess what? They have. So you can only try hacks and workarounds.

    With newer OSes this is becoming less and less of an issue due to virtualization becoming so pervasive, and also due to different software development paradigms.

    For example an Android application, or a Windows Store application will not be locked to a specific OS version and will keep working in the future too.
    Or VMware 3D having DX10 support.



  • rewik
    replied
    Originally posted by jpg44 View Post
    An X application which is using the GLX protocol extension indeed does not have any contact with video hardware; it kind of does provide an extra layer of security. With video hardware access in the application, you trust the GPU will be bug-free and will carefully provide access controls for what an app can do. [...]
    Incorrect.

    Originally posted by jpg44 View Post
    [...] What I gather is that Wayland applications have a DRI video driver in them, and that OpenGL commands are sent to the video driver and then to the video hardware directly from the app, painted to a video buffer in the GPU. [...]
    Partially incorrect.

    Originally posted by jpg44 View Post
    [...] The Wayland Display Server composites all of the buffers together in the GPU. [...]
    Partially incorrect.

    This will be a bit of a rant, so I'll summarize it here: Wayland does NOT require OpenGL. Wayland does NOT require ANY form of hardware acceleration. Wayland has been run in a pure framebuffer environment (see https://tecnocode.co.uk/2013/02/18/w...uffer-backend/ ). Any application running on X has the same level of access to hardware as an application running on Wayland does.

    To begin with, let's get some things straight. We have OpenGL, OpenGL ES, EGL, GLX and some other APIs being mentioned here, so it's good to know which one does what.

    Wayland.
    It's a protocol. Nothing more. It defines the way the application communicates with the compositing server. The reference server is Weston; another is GNOME Shell, etc.

    OpenGL and OpenGL ES.
    Those are used for drawing graphics. OpenGL is mostly used on desktop systems and OpenGL ES is mostly used on embedded devices (or Android). They draw graphics somewhere; it might be directly into the hardware's output buffer or somewhere in memory. Doesn't matter. However, when those were designed, several important aspects were left out. Namely, if you want to draw graphics you need to have an output defined: something needs to know the resolution and the graphics format (for example, the bit depth of the colour red). They do NOT deal with that. Another aspect they do NOT deal with is swapping buffers. Since we don't draw directly to the output anymore, drawing is done to a buffer; once the drawing is done, the program signals that the buffer can be displayed and in the meantime draws to the other buffer. They do not deal with that either. That's where our other set of APIs comes in.

    GLX, EGL, WGL, AGL, CGL.
    Those are used to create output buffers (and some other stuff), to signal that drawing into the buffer has finished and it can now be displayed/processed, etc. Each is tailored to a specific OS (except EGL), and each CAN create an OpenGL context (okay, EGL doesn't always allow that). GLX is used for X11, WGL for Windows, AGL and CGL are used on macOS, and EGL was designed for use on embedded devices. I'm not sure about the others, but EGL can be used WITHOUT OpenGL or OpenGL ES; it can create and manage a purely software buffer. However, if you want to use OpenGL in Wayland, you HAVE to use EGL - just as in X11, if you want to use OpenGL you HAVE to use GLX (or EGL apparently... sometimes... things can get complicated once you dig into the details).
    GLX is a bit special here, as it does enable drawing through it rather than going to the hardware directly; however, since the introduction of DRI that path is pretty much unused and whichever application can get direct access does so.

    As far as I know, Wayland requires neither OpenGL nor EGL (I might be wrong on the EGL part). The only requirement Wayland has is that the finished buffer with the drawn application can be shared between the application (which does the drawing however it wants) and the compositor (which uses whatever means it wants to reposition it and finally display it on the monitor).
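    As a minimal sketch of that software path (assuming wl_shm has already been bound from the registry, fd is a shared-memory file of the right size, and the surface/role setup happens elsewhere; names and sizes are purely illustrative): the client renders with the CPU into shared memory and simply hands the resulting wl_buffer to the compositor, with no GL anywhere.

        #include <stdint.h>
        #include <sys/mman.h>
        #include <wayland-client.h>

        #define WIDTH  640
        #define HEIGHT 480
        #define STRIDE (WIDTH * 4)                 /* 4 bytes per ARGB8888 pixel */

        struct wl_buffer *draw_frame(struct wl_shm *shm, int fd,
                                     struct wl_surface *surface)
        {
            /* map the shared memory and render into it with the CPU */
            uint32_t *pixels = mmap(NULL, STRIDE * HEIGHT, PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0);
            for (int i = 0; i < WIDTH * HEIGHT; i++)
                pixels[i] = 0xff2060a0;            /* opaque blue-ish fill */

            /* wrap the memory in a wl_buffer the compositor can read */
            struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, STRIDE * HEIGHT);
            struct wl_buffer *buffer = wl_shm_pool_create_buffer(
                pool, 0, WIDTH, HEIGHT, STRIDE, WL_SHM_FORMAT_ARGB8888);
            wl_shm_pool_destroy(pool);

            /* hand the finished buffer to the compositor */
            wl_surface_attach(surface, buffer, 0, 0);
            wl_surface_damage(surface, 0, 0, WIDTH, HEIGHT);
            wl_surface_commit(surface);
            return buffer;
        }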
    However, if an application wants to use hardware acceleration (and the hardware supports it), EGL is the way to go on the application side. Use it to create the buffers, then pass them on to the compositing server, which will most likely use OpenGL ES to transform them as necessary and display them using whatever hardware is appropriate.
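    A correspondingly minimal sketch of that hardware-accelerated path (assuming the wl_display is already connected and a wl_surface already exists; error checking omitted): EGL provides the context and the Wayland-native window surface, and eglSwapBuffers() is what hands each finished buffer over to the compositor.

        #include <wayland-client.h>
        #include <wayland-egl.h>
        #include <EGL/egl.h>
        #include <GLES2/gl2.h>

        void init_and_draw(struct wl_display *display, struct wl_surface *surface,
                           int width, int height)
        {
            EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)display);
            eglInitialize(dpy, NULL, NULL);
            eglBindAPI(EGL_OPENGL_ES_API);

            static const EGLint cfg_attribs[] = {
                EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
                EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
                EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                EGL_NONE
            };
            EGLConfig cfg;
            EGLint n;
            eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

            static const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
            EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);

            /* the glue between a wl_surface and an EGL window surface */
            struct wl_egl_window *native = wl_egl_window_create(surface, width, height);
            EGLSurface win = eglCreateWindowSurface(dpy, cfg,
                                                    (EGLNativeWindowType)native, NULL);
            eglMakeCurrent(dpy, win, win, ctx);

            glClearColor(0.1f, 0.3f, 0.6f, 1.0f);   /* draw with OpenGL ES as usual */
            glClear(GL_COLOR_BUFFER_BIT);

            eglSwapBuffers(dpy, win);               /* buffer goes to the compositor */
        }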

    As always, I urge the people who don't understand why Wayland is being introduced to watch a video from linux.conf.au 2013 explaining the reasoning behind it. It's by Daniel Stone, an X11 developer ( https://cgit.freedesktop.org/xorg/xs...q=Daniel+Stone ), and it's 45 minutes of awesome.
    Presenter: Daniel Stone. URL: http://lca2013.linux.org.au/schedule/30256/view_talk (Or, 'Why Everything You've Read in LWN and Phoronix Comments is Untrue'.)



  • kpedersen
    replied
    Originally posted by starshipeleven View Post

    We need to have decent hardware emulation, and guess what, for everything after XP we are covered; a stupid hack is not required for the preservation of XP and later software.
    That is a little bit of a naïve view. VirtualBox and co will not support Windows XP (and the virtual GPU driver) for long once the remaining users on that platform dwindle. Then it will only be Windows Vista and above that we can correctly emulate. That is not a valid solution for digital preservation.

    Also, I don't think it is a hack; RDP is one of the principal ways of handling things like USB passthrough and larger resolutions in Hyper-V.

    As it stands, it is also the only way to utilize very old platforms such as Windows NT 4.0 (Hydra) and even older (WinFrame). You might not currently care about maintaining older platforms, but that doesn't mean we should just let them become inaccessible.




  • starshipeleven
    replied
    Originally posted by kpedersen View Post
    You will be unpleasantly surprised if you try to run Windows 2000 on any of these platforms; there is very minimal GPU support (if any). The RDP "hack" is your best bet. Try it.
    I don't care; a hack is a hack and does not justify making "intelligent" (actually "render on client", which isn't "more intelligent" per se) protocols.

    Also, are you personally going to fix QEMU's GPU emulation for old platforms?
    No, but asking people to make "render on client" protocols so you can perpetuate the same hack in the future isn't an acceptable solution. The solution is fixing the problem; if it can't be done for old software now, it has to be done for current software so that when it becomes old we are covered. And guess what, this is being done.

    We need to work with what we have by planning ahead with an "intelligent" protocol.
    We need to have decent hardware emulation, and guess what, for everything after XP we are covered; a stupid hack is not required for the preservation of XP and later software.



  • patrakov
    replied
    Originally posted by kpedersen View Post
    For example, run Windows 2000 fully emulated on QEMU. Using the standard emulated GPU is slow - too slow to even really do office-related tasks. However, if you connect to it via RDP (in Terminal Services Edition) then it is much, much faster - just as fast as a non-emulated host in many cases. This is the part that is important to me for digital preservation.
    As far as I remember, it is the last OS to have no VESA driver, and so has to rely on Cirrus VGA, with the ugly 24 <-> 32 bpp conversion. Could you please try running it with the third-party VESA driver instead? https://bearwindows.zcm.com.au/vbemp.htm



  • kpedersen
    replied
    Originally posted by starshipeleven View Post

    WTF kind of ugly hack is this? Fix the GPU emulation in QEMU or use VirtualBox or VMware.
    You will be unpleasantly surprised if you try to run Windows 2000 on any of these platforms; there is very minimal GPU support (if any). The RDP "hack" is your best bet. Try it.

    Also, are you personally going to fix QEMU's GPU emulation for old platforms? No. Neither is anyone else, because there is no money in it. We need to work with what we have by planning ahead with an "intelligent" protocol.



  • starshipeleven
    replied
    Originally posted by M@yeulC View Post
    I'm not sure about a Windows 2000 VM; would that OS use GPU-accelerated rendering in the first place?
    Not the OS for sure, but games in Win2000 will require a GPU.



  • starshipeleven
    replied
    Originally posted by kpedersen View Post
    Most game streaming services shut down before long, so I think that is a great testament to the need for a more intelligent graphics protocol.
    Nonsense. There is a reason the CPU and GPU communicate over PCIe, which is a ridiculously fast and low-latency bus. Any attempt to stretch that over a network (which has orders of magnitude less bandwidth and many orders of magnitude more latency) is completely retarded; they are completely opposite environments.

    It's not really about speed; it is more about feasibility. Emulators and some servers don't have a GPU capable of rendering 3D. Intelligent protocols mean that the client can do the rendering, leaving the host simple (and without the need for a GPU).

    For example, run Windows 2000 fully emulated on QEMU. Using the standard emulated GPU is slow - too slow to even really do office-related tasks. However, if you connect to it via RDP (in Terminal Services Edition) then it is much, much faster - just as fast as a non-emulated host in many cases. This is the part that is important to me for digital preservation.
    WTF kind of ugly hack is this? Fix the GPU emulation in QEMU or use VirtualBox or VMware.

