Waypipe Offers A Transparent Wayland Proxy For Running Programs Over The Network


  • #41
    Originally posted by oiaohm View Post


    VNC has implementations like https://tigervnc.org/ that let you use OpenGL applications and the like remotely. That sort of thing does not work well with X11 at all. There is also connection loss: the VNC server on the computer you are connecting to keeps running even when the SSH session is cut, while with a bare X.org server that is not the case and a cut connection means your programs get terminated.

    This is not true. X has had GLX for years, which can send OpenGL commands over the wire so that OpenGL is rendered on the X server side. This allows hardware acceleration even when an application runs on a different computer from the one it is displayed on. It's not the fault of the X protocol that the developers of X.org refuse to properly implement and support GLX and keep it updated to the most recent OpenGL spec.
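
    As a minimal sketch of exercising that over-the-wire GLX path (assuming a Mesa client, an SSH session with X11 forwarding already in place, and an X server that still permits indirect GLX, which recent X.org releases typically disable unless started with +iglx):

    Code:
    # Ask Mesa for indirect GLX so GL calls travel over the X connection
    # instead of hitting the local GPU. glxinfo/glxgears are just convenient
    # stock test clients; any GLX program would do.
    import os
    import subprocess

    env = dict(os.environ)
    env["LIBGL_ALWAYS_INDIRECT"] = "1"

    # glxinfo -B reports "direct rendering: No" when the indirect path is active.
    subprocess.run(["glxinfo", "-B"], env=env, check=False)
    subprocess.run(["glxgears"], env=env, check=False)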

    Comment


    • #42
      There should also be a feature like Xpra's, which would allow a Wayland application to be switched and moved between Wayland servers and would keep the application running should it be disconnected from the Wayland server.

      Comment


      • #43
        Originally posted by jpg44 View Post
        This is not true. X has had GLX for years, which can send OpenGL commands over the wire so that OpenGL is rendered on the X server side. This allows hardware acceleration even when an application runs on a different computer from the one it is displayed on. It's not the fault of the X protocol that the developers of X.org refuse to properly implement and support GLX and keep it updated to the most recent OpenGL spec.

        Just because you can do something does not mean it works well. VirtualGL was created back when XFree86 was dominant and people tried doing exactly what you describe. The reality is that OpenGL for anything but the most basic applications requires too much network bandwidth to send its commands over the network. A program like Firefox rendering a webpage can in fact outstrip a 10 Gbps network connection.

        GLX over the network is a feature that does not work in practice. Nvidia was also the first to ship drivers that flat out refused to work over the network.

        In reality, OpenGL has to be rendered where the application is running and turned into frames, and those frames then sent over the network, or you will eat yourself out of network bandwidth. Those frames could be a complete desktop or just an application window. The Wayland protocol is designed so that applications cannot universally see the desktop, and one reason for that is to make implementing this simpler.

        Comment


        • #44
          Originally posted by jpg44 View Post

          Having app<->server network transparency is extremely useful and covers different use cases from whole-desktop network transparency.

          VNC exports the whole desktop by scraping the video front buffer of the entire desktop session and sending the entire bitmap to the client. This is actually somewhat inefficient. For a headless/remote-only session it would require software rendering to be used, and it does not cover the use case where you want only a single application displayed on another computer rather than the entire desktop session.

          An ideal app<->server network transparency would work by the app sending OpenGL and Vulkan commands over the wire, with rendering and rasterization of those commands happening on the computer the app is to be displayed on. This way hardware rendering can be done on the computer where the application is displayed, and it also avoids sending large video buffers for the entire window over the wire; instead, more vector data can be sent. You could also have a bitmap window-buffer mode if you want software rendering to be done on the client's machine, but which to use should be made a runtime option. This can be useful if you have applications running on different computers but want them all displayed on a single display on yet another computer.
          I don't think this is the most robust approach, as you might want to render on the server machine if it's more powerful. This could enable true thin clients where heavy work is done on the server rather than the client machine.
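
          Purely as a thought experiment (nothing like this exists in Wayland or Waypipe, and every name below is invented for illustration), the "runtime option" described in the quoted post, plus the render-where-it-is-cheapest concern raised above, could be sketched roughly like this:

          Code:
          from dataclasses import dataclass
          from enum import Enum, auto

          class RenderMode(Enum):
              COMMAND_STREAM = auto()  # forward GL/Vulkan-style commands, rasterize display-side
              PIXEL_BUFFER = auto()    # rasterize where the app runs, ship (compressed) pixels

          @dataclass
          class SurfaceUpdate:
              surface_id: int
              mode: RenderMode
              payload: bytes           # serialized commands or encoded pixels, depending on mode

          def choose_mode(link_gbps: float, display_side_has_gpu: bool) -> RenderMode:
              # Toy policy: only forward commands when the link is very fat and the display
              # side can render; otherwise ship rendered frames, as real tools do today.
              if display_side_has_gpu and link_gbps >= 40:
                  return RenderMode.COMMAND_STREAM
              return RenderMode.PIXEL_BUFFER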

          Comment


          • #45
            Originally posted by DoMiNeLa10 View Post
            I don't think this is the most robust approach, as you might want to render on the server machine if it's more powerful. This could enable true thin clients where heavy work is done on the server rather than the client machine.
            It's not that you might want to render on the server machine. Without question you do want to render on the server machine.
            An ideal app<->server network transparency would work by the app sending OpenGL and Vulkan commands over the wire and for rendering
            This is a theory that was disproved as soon as people tried GLX with XFree86/X.org.


            Experience with VirtualGL shows that it takes only very basic applications to completely consume a 1 Gbps network connection.

            Let's put some real numbers on this. A PCIe 3.0 x8 slot, which is pretty much the minimum to push heavy graphics well, does 7880 MB/s. Convert that to network bandwidth by multiplying by 8: 63040 Mbps, or roughly 63 Gbps before overhead. A really heavy game can be pushing a PCIe 3.0 x16 slot, about 126 Gbps before overhead. Who is really SSHing between computers on 100 Gbps or 200 Gbps connections? That is what you are talking about with the idea of OpenGL and Vulkan commands over the wire.

            Let's just say OpenGL and Vulkan command streams require a monstrous amount of bandwidth in lots of cases. Compare that to just sending the images for the windows instead: 100 Mbps to 1 Gbps with compression will work.

            Sending the damaged-area bitmaps makes more sense than sending OpenGL or Vulkan commands, as those were never designed to cross a network well; they are designed for local use with massive bandwidth.

            At 10 Gbps you should be able to do really decent image-based rendering solutions.
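
            For anyone who wants to check those conversions, a small sketch (the PCIe figures are the nominal ones quoted above; the resolution, refresh rate and the roughly 30:1 compression ratio are assumptions for illustration, not measurements):

            Code:
            def mbytes_to_gbps(mb_per_s: float) -> float:
                return mb_per_s * 8 / 1000

            pcie3_x8 = mbytes_to_gbps(7880)    # ~63 Gbps, matching the numbers above
            pcie3_x16 = mbytes_to_gbps(15760)  # ~126 Gbps

            # Shipping frames instead: 1920x1080, 24-bit colour, 60 fps.
            raw_1080p60 = 1920 * 1080 * 24 * 60 / 1e9   # ~3.0 Gbps uncompressed
            compressed = raw_1080p60 / 30               # ~0.1 Gbps with an assumed 30:1 codec

            print(f"PCIe 3.0 x8 ~{pcie3_x8:.0f} Gbps, x16 ~{pcie3_x16:.0f} Gbps")
            print(f"1080p60 frames: ~{raw_1080p60:.1f} Gbps raw, ~{compressed * 1000:.0f} Mbps compressed")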

            Comment


            • #46
              Originally posted by oiaohm View Post

              It's not that you might want to render on the server machine. Without question you do want to render on the server machine.
              An ideal app<->server network transparency would work by the app sending OpenGL and Vulkan commands over the wire and for rendering
              This is a theory that was disproved as soon as people tried GLX with XFree86/X.org.


              Experience with VirtualGL shows that it takes only very basic applications to completely consume a 1 Gbps network connection.

              Let's put some real numbers on this. A PCIe 3.0 x8 slot, which is pretty much the minimum to push heavy graphics well, does 7880 MB/s. Convert that to network bandwidth by multiplying by 8: 63040 Mbps, or roughly 63 Gbps before overhead. A really heavy game can be pushing a PCIe 3.0 x16 slot, about 126 Gbps before overhead. Who is really SSHing between computers on 100 Gbps or 200 Gbps connections? That is what you are talking about with the idea of OpenGL and Vulkan commands over the wire.

              Let's just say OpenGL and Vulkan command streams require a monstrous amount of bandwidth in lots of cases. Compare that to just sending the images for the windows instead: 100 Mbps to 1 Gbps with compression will work.

              Sending the damaged-area bitmaps makes more sense than sending OpenGL or Vulkan commands, as those were never designed to cross a network well; they are designed for local use with massive bandwidth.

              At 10 Gbps you should be able to do really decent image-based rendering solutions.
              I think this is slightly more complex, as games are designed with the assumption that there's plenty of bandwidth between the CPU and the GPU. Games like Quake 3 Arena squeezed as much as they could out of the slow connections of their time, especially because there were no VBOs back in the day. If sending graphics commands over (slow) network links were more common, the protocols and software using them would look much different. I feel this is a sort of chicken-and-egg situation.

              Comment


              • #47
                Originally posted by DoMiNeLa10 View Post
                I think this is slightly more complex, as games are designed with the assumption that there's plenty of bandwidth between the CPU and the GPU. Games like Quake 3 Arena squeezed as much as they could out of the slow connections of their time, especially because there were no VBOs back in the day. If sending graphics commands over (slow) network links were more common, the protocols and software using them would look much different. I feel this is a sort of chicken-and-egg situation.
                Even in Quake 3's day you had AGP (1997) at 2133 MB/s and 32-bit PCI at 266 MB/s. Multiplying out, AGP comes to 17064 Mbps, so over 10 Gbps, and 32-bit PCI to 2128 Mbps, or just over 2 Gbps, and PCI dates to 1995. Quake 3 was released in 1999.

                The historic VESA Local Bus before that works out to 200 Mbps at its slowest clock and 640 Mbps at its fastest; that is 1993.

                Basically you have to go all the way back to ISA, 8 Mbps for the 8-bit version and 16 Mbps for the 16-bit version. In id-developed games you are looking at the DOS version of Wolfenstein and the DOS versions of Doom.

                Back in 1987 when X11 was made, and in 1992 when OpenGL 1.0 was released, doing this stuff over the network made some kind of sense.

                GLX was first implemented in 1992 with OpenGL 1.0, so going over the network made sense then. By OpenGL 1.1, GLX over the network no longer made sense and would not work.

                It has been over two decades since it made sense to send 3D graphics instructions over the network.

                The assumption of tons of bandwidth is not a new one.

                For a long time X11 worked over the network because applications were using X11 2D rendering, which had in fact been designed to go over the network. The problem is that most modern applications like Firefox, GIMP, LibreOffice... are in fact using OpenGL and expecting tons of bandwidth.

                LibreOffice Online in fact uses an image-based system to reduce bandwidth: render on the server, send image updates across the network. If you want to look at something designed to deliver 3D graphics over slow network connections, all you have to look at is WebGL.

                You would basically have to bundle the OpenGL commands up into scripts and send those scripts across the network, and that is not how the OpenGL protocol is designed to work. You would pretty much end up writing your program like WebGL does: a scripting engine in front of the user with a server back-end.

                Stuff like RDP, VNC and VirtualGL is in fact designed to work on sub-100 Mbps network connections. Even your digital monitor updates are based on 2D, not 3D.

                With large screens we are starting to push another limit, where the speed needed to do 2D is getting insane. The recent DisplayPort 2.0 does 77.57 Gbit/s, and all of that is for 2D image-based updating. You don't really want to consider how much 3D data you would have to throw around to fill that and then attempt to send it over a network.
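
                As a rough illustration of how fast plain 2D adds up (raw framebuffer arithmetic only; 60 Hz and 24 bits per pixel are assumptions, and no compression is applied), compared against the DisplayPort 2.0 figure quoted above:

                Code:
                def raw_gbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
                    return width * height * bits_per_pixel * fps / 1e9

                for name, w, h in [("1080p", 1920, 1080), ("4K", 3840, 2160), ("8K", 7680, 4320)]:
                    print(f"{name}@60, 24bpp: ~{raw_gbps(w, h, 24, 60):.1f} Gbps uncompressed")
                # ~3.0, ~11.9 and ~47.8 Gbps respectively -- before any 3D command traffic at all.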

                Sending 2D over the network is a large enough problem; sending 3D over the network is in the camp of mega ouch. 3D over the network is basically outside everyone's normal budget, and by the time network solutions become cost-effective enough to do current-day 3D over the network, the 3D of that future time will be wanting way more.

                2D over the network is basically the limit, and even then current-day networking can come up short.

                Comment


                • #48
                  Originally posted by oiaohm View Post
                  Even in Quake 3's day you had AGP (1997) at 2133 MB/s and 32-bit PCI at 266 MB/s. Multiplying out, AGP comes to 17064 Mbps, so over 10 Gbps, and 32-bit PCI to 2128 Mbps, or just over 2 Gbps, and PCI dates to 1995. Quake 3 was released in 1999.
                  Except that this is the theoretical maximum bandwidth of the connection, not necessarily the bandwidth actually used by the game.

                  It happens that even during the Quake 3 release cycle there were AGP graphics cards still on the market that didn't support direct RAM access (e.g. the 3dfx Voodoo 3/4/5, which could only ever look up textures from VRAM).
                  Thus, for example, textures were loaded once at level start and then only vertex data would be sent. That vertex data didn't saturate the full 2 GB/s of AGP.
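
                  A rough back-of-the-envelope supporting that (the triangle budget and per-vertex size below are assumptions for illustration, not measured figures from any actual engine):

                  Code:
                  triangles_per_frame = 15_000   # assumed visible-triangle budget for a late-90s scene
                  bytes_per_vertex = 32          # assumed position + normal + texcoords + colour
                  fps = 60

                  bytes_per_frame = triangles_per_frame * 3 * bytes_per_vertex   # non-indexed worst case
                  mb_per_second = bytes_per_frame * fps / 1e6

                  print(f"~{mb_per_second:.0f} MB/s of vertex traffic vs ~2133 MB/s AGP peak")
                  # ~86 MB/s: a few percent of the bus, which is why per-frame vertex data
                  # was not the traffic that saturated AGP once textures were resident.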

                  Originally posted by oiaohm View Post
                  Basically you have to go all the way back to ISA, 8 Mbps for the 8-bit version and 16 Mbps for the 16-bit version. In id-developed games you are looking at the DOS version of Wolfenstein and the DOS versions of Doom.
                  You could add Duke Nukem 3D, Descent and multiple other games to the list of 3D games with an engine able to software-render to an ISA-attached graphics card.

                  Few VGA/SuperVGA cards except the highest-end or later ones would saturate the 16-bit ISA bandwidth. Most of the earlier VGA cards didn't have extremely fast VRAM, so most of the earlier 3D engines had to fight against that, too.
                  Basically the bus bandwidth wasn't the main choke point; the VRAM bandwidth was.

                  From a purely theoretical point of view, as long as you have a Pentium-class CPU you could draw Quake 1 and 2 on a 16-bit ISA-attached VGA (except that, by then, most motherboards would be using PCI as the graphics card connection).
                  Still, the main choke point wasn't the bus speed but the math speed inside the CPU and the CPU's bandwidth to its various data structures.


                  TL;DR: bus speed isn't all that relevant for very old software and games.

                  But I get the point you're trying to make:

                  Originally posted by oiaohm View Post
                  Back in 1987 when X11 was made, and in 1992 when OpenGL 1.0 was released, doing this stuff over the network made some kind of sense.

                  GLX was first implemented in 1992 with OpenGL 1.0, so going over the network made sense then. By OpenGL 1.1, GLX over the network no longer made sense and would not work.
                  Yup, old OpenGL software didn't push that much data, so GL over the network was still realistic back then, whereas piping The Witcher 3's command stream over the network is insanely stupid.

                  Which also makes sense if you keep in mind that OpenGL started in the era of graphical workstations (not plug-in accelerator boards). Rendering over the network (connecting to some SGI server) would be the expected setup in some situations.

                  Originally posted by oiaohm View Post
                  For a long time X11 worked over the network because applications were using X11 2D rendering, which had in fact been designed to go over the network. The problem is that most modern applications like Firefox, GIMP, LibreOffice... are in fact using OpenGL and expecting tons of bandwidth.
                  Yup, indeed, things have changed, essentially going from "draw this list of primitives" (a couple of flat rectangles that basically cover the whole window, all described in less than a dozen numbers) to "just blit this giant bitmap" (followed by literally the data of said bitmap, definitely much more data) to, nowadays, a whole OpenGL scene.

                  Comment


                  • #49
                    Originally posted by oiaohm View Post

                    Even in Quake 3's day you had AGP (1997) at 2133 MB/s and 32-bit PCI at 266 MB/s. Multiplying out, AGP comes to 17064 Mbps, so over 10 Gbps, and 32-bit PCI to 2128 Mbps, or just over 2 Gbps, and PCI dates to 1995. Quake 3 was released in 1999.

                    The historic VESA Local Bus before that works out to 200 Mbps at its slowest clock and 640 Mbps at its fastest; that is 1993.

                    Basically you have to go all the way back to ISA, 8 Mbps for the 8-bit version and 16 Mbps for the 16-bit version. In id-developed games you are looking at the DOS version of Wolfenstein and the DOS versions of Doom.

                    Back in 1987 when X11 was made, and in 1992 when OpenGL 1.0 was released, doing this stuff over the network made some kind of sense.

                    GLX was first implemented in 1992 with OpenGL 1.0, so going over the network made sense then. By OpenGL 1.1, GLX over the network no longer made sense and would not work.

                    It has been over two decades since it made sense to send 3D graphics instructions over the network.

                    The assumption of tons of bandwidth is not a new one.

                    For a long time X11 worked over the network because applications were using X11 2D rendering, which had in fact been designed to go over the network. The problem is that most modern applications like Firefox, GIMP, LibreOffice... are in fact using OpenGL and expecting tons of bandwidth.

                    LibreOffice Online in fact uses an image-based system to reduce bandwidth: render on the server, send image updates across the network. If you want to look at something designed to deliver 3D graphics over slow network connections, all you have to look at is WebGL.

                    You would basically have to bundle the OpenGL commands up into scripts and send those scripts across the network, and that is not how the OpenGL protocol is designed to work. You would pretty much end up writing your program like WebGL does: a scripting engine in front of the user with a server back-end.

                    Stuff like RDP, VNC and VirtualGL is in fact designed to work on sub-100 Mbps network connections. Even your digital monitor updates are based on 2D, not 3D.

                    With large screens we are starting to push another limit, where the speed needed to do 2D is getting insane. The recent DisplayPort 2.0 does 77.57 Gbit/s, and all of that is for 2D image-based updating. You don't really want to consider how much 3D data you would have to throw around to fill that and then attempt to send it over a network.

                    Sending 2D over the network is a large enough problem; sending 3D over the network is in the camp of mega ouch. 3D over the network is basically outside everyone's normal budget, and by the time network solutions become cost-effective enough to do current-day 3D over the network, the 3D of that future time will be wanting way more.

                    2D over the network is basically the limit, and even then current-day networking can come up short.
                    I don't think the history matters that much here; it's a matter of designing around the limitations. If you assume you have plenty of bandwidth, things will of course run slowly over the network, and since that's the setup that was popular and became the standard, piping something like OpenGL over the network got out of hand pretty quickly. If the thin-client idea had taken off back in the day, software written today would be quite different, and it could be the case that games would be designed with a given (low) bandwidth budget for OpenGL.

                    Comment


                    • #50
                      Originally posted by DoMiNeLa10 View Post
                      I don't think the history matters that much here; it's a matter of designing around the limitations. If you assume you have plenty of bandwidth, things will of course run slowly over the network, and since that's the setup that was popular and became the standard, piping something like OpenGL over the network got out of hand pretty quickly. If the thin-client idea had taken off back in the day, software written today would be quite different, and it could be the case that games would be designed with a given (low) bandwidth budget for OpenGL.
                      If you look at the old games from when bandwidth was low, they were doing more processing on the CPU, so more processing on the application side.

                      How you would design advanced applications would only change in minor ways. It would still mostly be 2D over the wire. There was a timeframe when the thin-client idea did take off, and that is what led to VirtualGL and stuff like that. SPICE is another example: https://en.wikipedia.org/wiki/Simple...g_Environments

                      Basically the only thing that would be different if thin clients had taken off back in the day and stayed dominant is that we would have needed bigger and more powerful CPUs, so we would be seeing CPUs with massive core counts rivalling GPUs. Most of the application's graphical processing would still have been done server-side, not in the thin client.

                      Software style would be different today, yes, but where in the network layout most of the processing gets done would not have moved at all. The change you are talking about would really only move the processing load from the GPU to the CPU in the server, and that is it. The funny part is that one of the prototype RISC-V designs is talking about doing exactly that: instead of a separate GPU and CPU, you have a massively multi-core CPU that can handle the GPU workload effectively. So long term, the outcome of your theoretical historical change is most likely no change at all.

                      Comment
