Wayland's Weston Gets A Remoting Plugin For Virtual Output Streaming


  • #11
    Originally posted by StefanBruens View Post
    You know how game streaming services work? They render into an offscreen bitmap, compress it, and send it to the client.


    Most game streaming services shut down before long, so I think that is a great testament to the need for a more intelligent graphics protocol.

    Originally posted by StefanBruens View Post
    Streaming rendered content takes about 10 Mbit/s; streaming the to-be-rendered data takes several GByte/s (if you leave glxgears behind).
    It's not really about speed; it's more about feasibility. Emulators and some servers don't have a GPU capable of rendering 3D. Intelligent protocols mean that the client can do the rendering, leaving the host to remain simple (and not need a GPU).
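    A quick back-of-envelope check of the bandwidth gap being discussed. The resolution, bit depth, and frame rate below are my own illustrative assumptions, not figures from the thread:

```python
# Compare a raw (uncompressed) framebuffer stream with the ~10 Mbit/s
# figure quoted above for a compressed video stream.
# Assumed parameters (illustrative): 1080p, 32 bpp, 60 fps.

width, height = 1920, 1080
bits_per_pixel = 32
fps = 60

raw_mbit_per_s = width * height * bits_per_pixel * fps / 1e6
compressed_mbit_per_s = 10

print(f"raw framebuffer stream: {raw_mbit_per_s:.0f} Mbit/s")
print(f"compressed video stream: {compressed_mbit_per_s} Mbit/s")
print(f"ratio: {raw_mbit_per_s / compressed_mbit_per_s:.0f}x")
```

    Even the raw pixel stream is hundreds of times larger than the compressed one, and the geometry and texture data that would have to cross the wire in a "send the draw commands" model is larger still.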

    For example, run Windows 2000 fully emulated in QEMU. Using the standard emulated GPU is slow, too slow even for office-related tasks. However, if you connect to it via RDP (in the Terminal Services Edition) it is much, much faster, often as fast as a non-emulated host. This is the part that is important to me for digital preservation.

    Comment


    • #12
      Originally posted by kpedersen View Post

      Most game streaming services shut down before long, so I think that is a great testament to the need for a more intelligent graphics protocol.

      It's not really about speed; it's more about feasibility. Emulators and some servers don't have a GPU capable of rendering 3D. Intelligent protocols mean that the client can do the rendering, leaving the host to remain simple (and not need a GPU).

      For example, run Windows 2000 fully emulated in QEMU. Using the standard emulated GPU is slow, too slow even for office-related tasks. However, if you connect to it via RDP (in the Terminal Services Edition) it is much, much faster, often as fast as a non-emulated host. This is the part that is important to me for digital preservation.
      If the host isn't more powerful than the client, though, what is the benefit of launching a program on it? Wouldn't you be better off just forwarding a couple of the things that are needed (D-Bus? filesystem? network? other?) and running the application on the client instead?
      I feel like something like X forwarding would only be useful in 0.5-1.0% of the total use cases (it could grow a bit if it became more useful). Yet the need to use the client's GPU in such cases would be like 0.1% of that percentage.

      Not to say it wouldn't be useful in some cases... And isn't that what VirGL already provides? I'm not sure about a Windows 2000 VM; would that OS use GPU-accelerated rendering in the first place?

      Comment


      • #13
        Originally posted by kpedersen View Post
        Most game streaming services shut down before long, so I think that is a great testament to the need for a more intelligent graphics protocol.
        Nonsense. There is a reason the CPU and GPU communicate over PCIe, which is a ridiculously fast, low-latency bus. Any attempt to stretch that over a network (which has orders of magnitude less bandwidth and many orders of magnitude more latency) is completely misguided; they are completely opposite environments.

        It's not really about speed; it's more about feasibility. Emulators and some servers don't have a GPU capable of rendering 3D. Intelligent protocols mean that the client can do the rendering, leaving the host to remain simple (and not need a GPU).

        For example, run Windows 2000 fully emulated in QEMU. Using the standard emulated GPU is slow, too slow even for office-related tasks. However, if you connect to it via RDP (in the Terminal Services Edition) it is much, much faster, often as fast as a non-emulated host. This is the part that is important to me for digital preservation.
        What kind of ugly hack is this? Fix the GPU emulation in QEMU, or use VirtualBox or VMware.

        Comment


        • #14
          Originally posted by [email protected] View Post
          I'm not sure about a Windows 2000 VM; would that OS use GPU-accelerated rendering in the first place?
          Not the OS itself, but games on Windows 2000 will require a GPU.

          Comment


          • #15
            Originally posted by starshipeleven View Post

            What kind of ugly hack is this? Fix the GPU emulation in QEMU, or use VirtualBox or VMware.
            You will be unpleasantly surprised if you try to run Windows 2000 on any of these platforms; there is very minimal GPU support, if any. The RDP "hack" is your best bet. Try it.

            Also, are you personally going to fix QEMU's GPU emulation for old platforms? No. Neither is anyone else because there is no money in it. We need to work with what we have by planning ahead with an "intelligent" protocol.

            Comment


            • #16
              Originally posted by kpedersen View Post
              For example, run Windows 2000 fully emulated in QEMU. Using the standard emulated GPU is slow, too slow even for office-related tasks. However, if you connect to it via RDP (in the Terminal Services Edition) it is much, much faster, often as fast as a non-emulated host. This is the part that is important to me for digital preservation.
              As far as I remember, it is the last OS to have no VESA driver, so it has to rely on Cirrus VGA, with the ugly 24 <-> 32 bpp conversion. Could you please try running it with the third-party VESA driver instead? https://bearwindows.zcm.com.au/vbemp.htm
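              For anyone curious what that 24 <-> 32 bpp conversion involves, here is a minimal sketch of the repacking the emulator has to do for every pixel of every frame. The helper name and sample values are mine, purely for illustration:

```python
# Sketch of 24 -> 32 bpp repacking: expand packed 3-byte RGB pixels
# into 4-byte XRGB pixels by padding each with a filler byte. The
# reverse direction strips the pad. Done per pixel, per frame, on the
# CPU, which is why it is expensive.

def expand_24_to_32(scanline24: bytes) -> bytes:
    """Repack packed RGB (3 bytes/pixel) into XRGB (4 bytes/pixel)."""
    out = bytearray()
    for i in range(0, len(scanline24), 3):
        out += scanline24[i:i + 3] + b"\x00"  # copy pixel, add pad byte
    return bytes(out)

line = bytes([10, 20, 30, 40, 50, 60])  # two 24 bpp pixels
print(expand_24_to_32(line))            # eight bytes: two 32 bpp pixels
```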

              Comment


              • #17
                Originally posted by kpedersen View Post
                You will be unpleasantly surprised if you try to run Windows 2000 on any of these platforms; there is very minimal GPU support, if any. The RDP "hack" is your best bet. Try it.
                I don't care; a hack is a hack, and it does not justify making "intelligent" (actually "render on client", which isn't "more intelligent" per se) protocols.

                Also, are you personally going to fix QEMU's GPU emulation for old platforms?
                No, but asking people to make "render on client" protocols so you can perpetuate the same hack in the future isn't an acceptable solution. The solution is fixing the problem: if it can't be done for old software now, it has to be done for current software, so that when it becomes old we are covered. And guess what, this is being done.

                We need to work with what we have by planning ahead with an "intelligent" protocol.
                We need decent hardware emulation, and guess what: for everything after XP we are covered. A stupid hack is not required for preservation of XP and later software.

                Comment


                • #18
                  Originally posted by starshipeleven View Post

                  We need decent hardware emulation, and guess what: for everything after XP we are covered. A stupid hack is not required for preservation of XP and later software.
                  That is a little bit of a naïve view. VirtualBox and co. will not support Windows XP (and the virtual GPU driver) for long once the remaining users on that platform dwindle. Then it will be only Windows Vista and above that we can correctly emulate. That is an invalid solution for digital preservation.

                  Also, I don't think it is a hack; RDP is one of the principal ways of handling things like USB passthrough and larger resolutions in Hyper-V.

                  As it stands, it is also the only way to utilize very old platforms such as Windows NT 4.0 (Hydra) and even older (WinFrame). You might not currently care about maintaining older platforms, but that doesn't mean we should just let them become inaccessible.


                  Comment


                  • #19
                    Originally posted by jpg44 View Post
                    An X application which is using the GLX protocol extension indeed does not have any contact with video hardware, it kind of does provide an extra layer of security. With Video Hardware in the application, you trust the GPU will be bug free and will carefully provide access controls for what an app can do. [...]
                    Incorrect.

                    Originally posted by jpg44 View Post
                    [...] What i gather is that wayland applications have a DRI video driver in them and that OpenGL commands are sent to the video driver and then to video hardware directly from the app, painted to a video buffer in GPU.[...]
                    Partially incorrect.

                    Originally posted by jpg44 View Post
                    [...] The Wayland Display Server composites all of the buffers together in the GPU. [...]
                    Partially incorrect.

                    This will be a bit of a rant, so I'll summarize it here: Wayland does NOT require OpenGL. Wayland does NOT require ANY form of hardware acceleration. Wayland has been run in a pure framebuffer environment (see https://tecnocode.co.uk/2013/02/18/w...uffer-backend/ ). Any application running on X has the same level of access to hardware as an application running on Wayland.

                    To begin with, let's get some things straight. We have OpenGL, OpenGL ES, EGL, GLX and some other APIs being mentioned here, so it's good to know which one does what.

                    Wayland.
                    It's a protocol. Nothing more. It defines the way an application communicates with the compositing server. The reference server is Weston; another server is GNOME Shell, etc.

                    OpenGL and OpenGL ES.
                    Those are used for drawing graphics. OpenGL is mostly used on desktop systems and OpenGL ES mostly on embedded devices (or Android). They draw graphics somewhere: it might be directly into the hardware's output buffer or somewhere in memory; it doesn't matter. However, when they were designed, several important aspects were left out. If you want to draw graphics, you need an output defined: the API needs to know the resolution and the pixel format (for example, the bit depth of the red channel). OpenGL and OpenGL ES do NOT deal with that. Another aspect they do NOT deal with is swapping buffers: since we no longer draw directly to the output, drawing is done to a buffer, and once the drawing is done the program signals that the buffer can be displayed while it draws to another buffer in the meantime. They do not deal with that either. That's where the other set of APIs comes in.
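                    The buffer-swapping handoff described above can be sketched abstractly. This is a toy model with invented names, not any real GLX/EGL API:

```python
# Toy model of double buffering: the app draws into a back buffer,
# then signals a swap; the buffer just drawn becomes the displayed
# (front) buffer, and drawing continues into the other one.

class DoubleBuffer:
    def __init__(self, size: int):
        self.front = bytearray(size)  # currently displayed
        self.back = bytearray(size)   # being drawn into

    def draw(self, data: bytes):
        # "rendering": write pixels into the back buffer
        self.back[:len(data)] = data

    def swap(self):
        # the "drawing is finished" signal; in the real world this is
        # the job of GLX/EGL (e.g. a swap-buffers call), not OpenGL
        self.front, self.back = self.back, self.front

db = DoubleBuffer(4)
db.draw(b"\x01\x02\x03\x04")
db.swap()
print(db.front)  # the frame just drawn is now the displayed one
```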

                    GLX, EGL, WGL, AGL, CGL.
                    Those are used to create output buffers (and some other things), to signal that drawing into a buffer has finished and it can now be displayed/processed, etc. Each is tailored to a specific OS (except EGL), and each CAN create an OpenGL context (okay, EGL doesn't always allow that). GLX is used for X11, WGL for Windows, AGL and CGL on macOS, and EGL was designed for embedded devices. I'm not sure about the others, but EGL can be used WITHOUT OpenGL or OpenGL ES: it can create and manage a purely software buffer. However, if you want to use OpenGL on Wayland, you HAVE to use EGL, just as on X11 you HAVE to use GLX (or EGL apparently... sometimes... things can get complicated once you dig into the details).
                    GLX is a bit special here, as it does enable drawing through the protocol rather than going to the hardware directly, but since the introduction of DRI that is pretty much unused, and any application that can get direct access does so.

                    As far as I know, Wayland requires neither OpenGL nor EGL (I might be wrong on the EGL part). The only requirement Wayland has is that the finished buffer with the drawn application can be shared between the application (which does the drawing however it wants) and the compositor (which uses whatever means it wants to reposition the buffer and finally display it on the monitor).
                    However, if an application wants hardware acceleration (and the hardware supports it), EGL is the way to go on the application side: use it to create the buffers, then pass them on to the compositing server, which will most likely use OpenGL ES to transform them as necessary and display them using whatever hardware is appropriate.
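                    That shared-buffer requirement can be modeled with an anonymous shared memory mapping, in the spirit of Wayland's wl_shm path. This is a conceptual sketch, not actual Wayland protocol code, and the sizes are arbitrary:

```python
import mmap

# One buffer shared between "client" and "compositor": the client draws
# into it however it wants; the compositor only reads finished pixels.

WIDTH, HEIGHT, BPP = 4, 2, 4                # tiny 4x2 XRGB "frame"
buf = mmap.mmap(-1, WIDTH * HEIGHT * BPP)   # anonymous shared mapping

# "client" side: draw a frame (here just a constant byte per channel)
buf.seek(0)
buf.write(b"\x7f" * (WIDTH * HEIGHT * BPP))

# "compositor" side: read the finished buffer to composite/display it
buf.seek(0)
frame = buf.read(WIDTH * HEIGHT * BPP)
print(len(frame), frame[:4])
```

                    The point is that only finished pixels cross the boundary; how they were produced (software rasterizer, OpenGL via EGL, anything else) is invisible to the compositor.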

                    As always, I urge people who don't understand why Wayland is being introduced to watch the talk from linux.conf.au 2013 explaining the reasoning behind it. It's by Daniel Stone, an X11 developer ( https://cgit.freedesktop.org/xorg/xs...q=Daniel+Stone ), and it's 45 minutes of awesome.
                    https://www.youtube.com/watch?v=GWQh_DmDLKQ

                    Comment


                    • #20
                      Originally posted by kpedersen View Post
                      That is a little bit of a naïve view.
                      It is the only way forward. You won't convince anyone with "I need a 'smart protocol' so I can keep old crap running for software preservation purposes". For new stuff you need to push true solutions, not desperate hacks.

                      VirtualBox and co. will not support Windows XP (and the virtual GPU driver) for long once the remaining users on that platform dwindle. Then it will be only Windows Vista and above that we can correctly emulate. That is an invalid solution for digital preservation.
                      Sorry, what? The issue with Windows 2000 and earlier is that no such driver was ever made, not that support was dropped.
                      I don't see why they would drop support: the driver won't need any maintenance, as it runs inside an unmaintained OS, and they already have to keep their host-guest interfaces backward-compatible anyway.

                      They can drop host support for XP, sure, but that's not relevant.

                      Also, I don't think it is a hack; RDP is one of the principal ways of handling things like USB passthrough and larger resolutions in Hyper-V.
                      Does not make it any less of a hack.

                      As it stands, it is also the only way to utilize very old platforms such as Windows NT 4.0 (Hydra) and even older (WinFrame). You might not currently care about maintaining older platforms, but that doesn't mean we should just let them become inaccessible.
                      That's because at the time they were expected to fade into obscurity once obsolete. And guess what? They have. So all you can do is hack and work around.

                      With newer OSes this is becoming less and less of an issue, due to virtualization becoming so pervasive and also due to different software development paradigms.

                      For example, an Android application or a Windows Store application will not be locked to a specific OS version and will keep working in the future too. Or VMware 3D having DX10 support.

                      Comment
