LXQt Now Has Full Qt5 Support

  • #41
    The Solaris box was a headless server that only ran the app; the X server was on my local desktop running Gentoo Linux. And it was definitely using KMS. Maybe you are confusing it with OpenGL indirect rendering? I don't really care about that.

    • #42
      Originally posted by caligula View Post
      You're wrong. The network transparency property does not state anything about bandwidth requirements. I see what you mean, but it's just not true. Given a fast enough connection, even GTK3/Qt5 apps are usable over a network and look exactly the same. They don't send screenshots like you said. For example, if I use a Mac OS X X client, I can freely move the windows and it does not keep resending the window contents. That's when you remotely use apps & windows, not the desktop. Even if RDP/VNC require less bandwidth, IMHO they're just screen-cloning apps and not network transparent in any way.

      The connection bandwidth is not a problem anymore. You can update a 1920x1080 truecolor screen 20 times per second on a gigabit LAN. GTK2/Qt4 still support some X features, which means they don't update the whole screen, so you actually get decent FPS unless you run a browser/video in fullscreen. I've tried it.
      Network transparency only ever worked with Motif. Your X server is sending pixmaps when GTK/Qt apps are viewed remotely, and as far as I know the Damage extension doesn't work over the networked pixmaps (I could be wrong here), nor does X compress those pixmaps. Your example only works because you have enough bandwidth to do so, i.e. a LAN. His point is: instead of pushing X pixmaps over the network, just use VNC/RDP/etc., which are way more efficient than X will ever be.

      Network transparency means rendering on the client side (not possible in modern X with any toolkit, because only Motif was supported and the code got broken years ago).
      Network enabled means rendering locally and sending the raw resulting front-buffer pixmap to the client (which is what works today).
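
The bandwidth figure quoted in this post is easy to sanity-check. A quick back-of-the-envelope calculation, assuming 24-bit truecolor (3 bytes per pixel) and no compression:

```python
# Uncompressed bandwidth needed to push full 1920x1080 truecolor frames.
width, height = 1920, 1080
bytes_per_pixel = 3          # 24-bit truecolor
fps = 20                     # updates per second, as claimed above

bits_per_second = width * height * bytes_per_pixel * fps * 8
print(f"{bits_per_second / 1e9:.3f} Gbit/s")  # ~0.995 Gbit/s
```

At 20 fps of full-frame updates the stream consumes essentially the entire gigabit link, which is why partial updates (only redrawing damaged regions) matter so much in practice.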

      • #43
        Originally posted by Ansla View Post
        The Solaris box was a headless server that only ran the app; the X server was on my local desktop running Gentoo Linux. And it was definitely using KMS. Maybe you are confusing it with OpenGL indirect rendering? I don't really care about that.
        Well, you either had a version prior to the broken code or you are using pixmaps. Connect to your server and monitor the connection with Wireshark: if you see huge blocks of binary data (500 KB-2 MB), it's the latter; if you see small bursts of binary data (25 KB-150 KB) quite often, then your version is still functional; don't upgrade.
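
The size thresholds in this heuristic roughly match what you'd expect from the two traffic patterns. A sketch of the arithmetic for a hypothetical 800x600 window (the window size and per-request byte count are illustrative assumptions, not measured values):

```python
# Pattern 1: pushing the whole window as an uncompressed 32-bpp pixmap.
width, height, bytes_per_pixel = 800, 600, 4      # hypothetical window
pixmap_bytes = width * height * bytes_per_pixel
print(f"full pixmap: {pixmap_bytes / 1024:.0f} KiB")   # 1875 KiB, ~2 MB

# Pattern 2: a core-protocol drawing request is only a few dozen bytes,
# so even a burst of a couple thousand requests stays in the tens of KiB.
request_bytes = 28                                 # rough size of one request
burst = 2000 * request_bytes
print(f"command burst: {burst / 1024:.0f} KiB")    # ~55 KiB
```

So megabyte-scale blobs in a capture point to raw pixmap transfers, while frequent small bursts are consistent with drawing commands.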

        • #44
          Originally posted by jrch2k8 View Post
          Network transparency means rendering on the client side (not possible in modern X with any toolkit, because only Motif was supported and the code got broken years ago).
          Network enabled means rendering locally and sending the raw resulting front-buffer pixmap to the client (which is what works today).
          You are confused. Where the heck did you get that definition? I did a rather extensive search on the web, and the terminology means using remote apps like local apps, in a transparent manner. The definition is abstract and doesn't mention rendering in any way.


          "X features network transparency: the machine where an application program (the client application) runs can differ from the user's local machine (the display server)"


          "Network transparency. Network transparency rocks! Run a program on a remote system and interact with it on your local terminal; write a program and not need to care whether it's going to be run on a full workstation or a dumb terminal. Some may say this is unimportant, but when one looks at the development of Windows and the evolution of RDP, it starts to look a lot more like X in terms of its features."

          • #45
            Originally posted by caligula View Post
            You are confused. Where the heck did you get that definition? I did a rather extensive search on the web, and the terminology means using remote apps like local apps, in a transparent manner. The definition is abstract and doesn't mention rendering in any way.


            "X features network transparency: the machine where an application program (the client application) runs can differ from the user's local machine (the display server)"


            "Network transparency. Network transparency rocks! Run a program on a remote system and interact with it on your local terminal; write a program and not need to care whether it's going to be run on a full workstation or a dumb terminal. Some may say this is unimportant, but when one looks at the development of Windows and the evolution of RDP, it starts to look a lot more like X in terms of its features."
            http://www.x.org/wiki/Development/X12/
            Nice Wikipedia quote. Well, the only ways to run an application on system A but get the output on system B are:

            1.) send raw bitmaps of the resulting rendered front-/back-buffer over a medium that interconnects systems A and B
            2.) send the raw render commands over a medium that interconnects systems A and B, so system B can render the output locally and return the input to system A

            There is no third option where stuff magically pops up on system B. By the way, "render" is not 3D or GPU terminology; it just means "draw this on the screen", no matter the method or hardware involved.

            • #46
              Originally posted by jrch2k8 View Post
              Nice Wikipedia quote. Well, the only ways to run an application on system A but get the output on system B are:

              1.) send raw bitmaps of the resulting rendered front-/back-buffer over a medium that interconnects systems A and B
              2.) send the raw render commands over a medium that interconnects systems A and B, so system B can render the output locally and return the input to system A

              There is no third option where stuff magically pops up on system B. By the way, "render" is not 3D or GPU terminology; it just means "draw this on the screen", no matter the method or hardware involved.
              Look, I'm quite familiar with network-transparent technologies on Linux, like the NAS audio system ( http://en.wikipedia.org/wiki/Network_Audio_System ), PulseAudio ( http://en.wikipedia.org/wiki/PulseAudio ), the AFS file system ( http://en.wikipedia.org/wiki/Andrew_File_System ), Zeroconf ( http://en.wikipedia.org/wiki/Zero-co...ion_networking ) and so on. With X it boils down to setting the DISPLAY environment variable before starting an app. If you have permissions and the network set up correctly, you can use any system on the network for displaying the app while it runs on the local system (which can be virtualized, distributed, whatnot).

              I don't give a rat's ass about rendering technology; it's abstracted away when it comes to network transparency at the app level. When you invent some new tech like DRI1, DRI2, DRI3, EGL, whatnot, you can choose to make it network transparent if you want. It's up to the X subsystem how to do it; the app doesn't know how it's done. Now that people are relying more and more on GPU rendering, the new technologies don't necessarily have a network transparency compatibility plugin. They could have. You could even send OpenGL commands via a socket; no problem with that. They just don't see the need to implement it.
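
The environment variable in question is DISPLAY. A minimal sketch of launching an app whose output goes to another machine's X server (the hostname is a placeholder, and this assumes the remote X server accepts the connection and you are authorized):

```python
import os
import shutil
import subprocess

# Point the app at display :0 on a remote X server (hypothetical host).
env = dict(os.environ, DISPLAY="remotehost:0")

# The app itself neither knows nor cares that the display is remote --
# that is the transparency being argued about here.
# Launch only if xterm is actually installed on this machine.
if shutil.which("xterm"):
    subprocess.Popen(["xterm"], env=env)
```

In practice, `ssh -X remotehost xterm` achieves the same effect with the connection tunneled and authentication handled for you.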

              • #47
                Originally posted by panda84 View Post
                You don't know what you're talking about:
                http://askubuntu.com/a/359870
                I meant a replacement for the network capabilities that X had (or, more specifically, Xpra) that can be used with Wayland, whether it is part of Wayland or not. I heard xpra uses something like bencoding for this instead of the X11 protocol, and that's fine. I don't care about the internals as long as the user experience is the same and it works on low-end networks.

                • #48
                  We'll probably need a new protocol for compositors to use for network-based capabilities; otherwise different compositors will end up incompatible with each other. It doesn't need to be part of Wayland. It just needs to provide the same user experience as xpra.

                  • #49


                    Xpra might get Wayland support. If it works with other compositors too, then this will be perfect.

                    • #50
                      Originally posted by caligula View Post
                      Look, I'm quite familiar with network-transparent technologies on Linux, like the NAS audio system ( http://en.wikipedia.org/wiki/Network_Audio_System ), PulseAudio ( http://en.wikipedia.org/wiki/PulseAudio ), the AFS file system ( http://en.wikipedia.org/wiki/Andrew_File_System ), Zeroconf ( http://en.wikipedia.org/wiki/Zero-co...ion_networking ) and so on. With X it boils down to setting the DISPLAY environment variable before starting an app. If you have permissions and the network set up correctly, you can use any system on the network for displaying the app while it runs on the local system (which can be virtualized, distributed, whatnot). I don't give a rat's ass about rendering technology; it's abstracted away when it comes to network transparency at the app level. When you invent some new tech like DRI1, DRI2, DRI3, EGL, whatnot, you can choose to make it network transparent if you want. It's up to the X subsystem how to do it; the app doesn't know how it's done. Now that people are relying more and more on GPU rendering, the new technologies don't necessarily have a network transparency compatibility plugin. They could have. You could even send OpenGL commands via a socket; no problem with that. They just don't see the need to implement it.
                      Well, if all you care about is starting an app and getting the output on the other side, no matter how, then X with pixmaps does the job, TeamViewer does the job, RDP can do the job with a bit of magic in the compositor, Wayland can do the job using a network-buffering render backend, etc. About that you are right. Now, regarding efficiency, most of these are simply terrible, so the Wayland devs' point is that they won't do it, because most of those techniques are horrible and it would be more efficient to do this with command streaming at the user level, preferably handled by the toolkits, since that is the most efficient place to do it.

                      Your previous examples don't relate to this conversation, since those are either simple streaming or small non-realtime protocols, and modern OpenGL can't be done over a network or over sockets. Sure, in the old age of OpenGL vs. Glide it was possible, because the hardware only had fixed functions, but today the traffic would be massive, and it probably would never render consistently across systems due to the nature of GLSL.
