
X.Org Server & XWayland Hit By Four More Security Issues


  • Originally posted by mSparks View Post
    In case you missed it, Google recently tried to commercialise a serialised full modern graphics stack. Maybe you didn't hear about it? They called it Stadia.

    The key limitation for anything more advanced than early OpenGL is latency, similar to GLES because of GPU limitations.

    Could we have something better than GLX? Sure, but key developers were too busy wasting their time with Wayland on ideas that turned out to be hyperbole.
    Oh dear, here we go again with a stack of falsehoods. Mind you, mSparks got closer with Google Stadia, then missed the memo that over the network Stadia sends clients 2D video. Google did also make a thing called ClusterGL, and that is important because ClusterGL belongs to the tree of technology that dooms the possible major use cases of GLX over the network.

    Originally posted by ReaperX7 View Post
    Technically GLX doesn't "need" to be improved. It's mainly for networked X displays to draw with OpenGL post process. You technically, on your own desktop, never use GLX. That's why if you see the subset of function calls GLX has, it's VERY limited. If you run X through a terminal from a server via network login and then use a program running OpenGL over a network, then you will be using GLX.

    It's mainly meant for small form factor units with severely limited GPUs in Terminals that mainly just draw a 2D display, and receive OS data in streaming format from a Terminal Server.
    The GPU in a thin terminal is something you avoid using even for OpenGL 1.4-era 3D if you can, because pushing 3D work makes a lot of them overheat. That is one of the reasons remote desktop options that have to run on thin clients end up going the 2D rendering route: lower power usage and lower thermals on the client. You also want to avoid 3D rendering on your mobile phone, as it drains the battery faster than 2D rendering does.

    The ideas that GLX does not need to be improved and that it was never improved are both false. The trap is that the developers working on GLX moved on to solutions that could actually work, once they had figured out what GLX's fundamental problem was when it came to network rendering.

    I bet neither of you knows of Equalizer (https://eyescale.github.io/equalizergraphics.com/); that one is still maintained and used.
    Or of legacy ones like Google's ClusterGL and Stanford's WireGL, whose successor is known as Chromium (https://chromium.sourceforge.net/doc/).

    These are solutions for really using network-connected GPUs. What is the major difference? GLX is a 1-to-1 encoding of OpenGL commands. The ones I listed above are not 1-to-1 encodings: they optimize the messages sent over the network and can intelligently slice workloads up between systems, so you don't end up filling the network with a stack of messages going back and forth and stalling the GPU (a sketch of the batching idea follows below).
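    To make that contrast concrete, here is a minimal, hypothetical sketch of the batching idea in TypeScript: instead of paying one network round trip per GL-style call (the GLX model), commands are serialized into a buffer and flushed once per frame. The Command type, CommandBatcher class and the wire format are all invented for illustration; they are not the actual ClusterGL, WireGL or Equalizer protocols.

```typescript
// Hypothetical command batcher: queue serialized commands locally, flush
// once per frame, instead of paying one network round trip per call.
type Command = { opcode: number; args: Float32Array };

class CommandBatcher {
  private queue: Command[] = [];

  // Record a command locally; nothing touches the network here.
  submit(opcode: number, ...args: number[]): void {
    this.queue.push({ opcode, args: Float32Array.from(args) });
  }

  // Serialize every queued command into one buffer and hand it to the
  // transport: one send per frame instead of one send per command.
  flush(send: (packet: ArrayBuffer) => void): void {
    const size = this.queue.reduce((n, c) => n + 8 + c.args.byteLength, 0);
    const packet = new ArrayBuffer(size);
    const view = new DataView(packet);
    let offset = 0;
    for (const cmd of this.queue) {
      view.setUint32(offset, cmd.opcode); // invented wire format
      view.setUint32(offset + 4, cmd.args.length);
      offset += 8;
      for (const v of cmd.args) {
        view.setFloat32(offset, v);
        offset += 4;
      }
    }
    this.queue = [];
    send(packet);
  }
}

// Usage: thousands of submits, a single network write per frame.
const batcher = new CommandBatcher();
for (let i = 0; i < 10_000; i++) batcher.submit(0x01, i, i * 2);
batcher.flush((p) => console.log(`flushed one ${p.byteLength}-byte packet`));
```

    The point of the design is that the number of network messages per frame stops scaling with the number of drawing commands, which is exactly what a 1-to-1 encoding like GLX cannot avoid.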

    The X11 network encoding doesn't help you use a network-connected GPU effectively either, as the X11 framing takes up space in the network packets that could be put to better use.

    These newer solutions like WireGL/Chromium appeared in 2001. Do note that Equalizer can use OpenGL 4.6 functionality, i.e. current OpenGL, but you do need to alter your application to make sure you are not flooding the network with pointless messages.

    Doing OpenGL locally, you can be very chatty between the application and the GPU with no issue, thanks to the large bandwidth of a PCIe/AGP/... connection and the insanely short time each of those messages costs. Over a network, being that chatty means 90%+ of your possible processing time disappears into network communication instead of rendering. This is the network GLX problem, and why GLX over the network is a failure; the rough numbers below show the scale of it.
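    Back-of-the-envelope numbers make the point. The figures below are assumptions chosen for illustration, not measurements: roughly 1 microsecond per PCIe round trip, roughly 200 microseconds per LAN round trip, and an arbitrary 1,000 synchronous calls per frame.

```typescript
// Rough illustration: time lost to synchronous round trips per frame.
// All numbers are assumptions picked for illustration, not measurements.
const syncCallsPerFrame = 1_000; // calls that must wait for a reply
const frameBudgetUs = 16_667;    // one frame at 60 fps, in microseconds

const roundTripUs = { pcie: 1, lan: 200 };

for (const [link, rtt] of Object.entries(roundTripUs)) {
  const waitUs = syncCallsPerFrame * rtt;
  const share = (100 * waitUs) / frameBudgetUs;
  console.log(`${link}: ${waitUs} us waiting = ${share.toFixed(0)}% of a 60 fps frame`);
}
// pcie: 1000 us waiting = 6% of a 60 fps frame
// lan: 200000 us waiting = 1200% of a 60 fps frame
```

    With these assumed numbers, the same chattiness that costs 6% of a frame locally overshoots the entire 60 fps frame budget twelve times over on a LAN, before any rendering happens at all.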

    There are different groups these days attempting to build Equalizer-like systems for Vulkan.

    Like it or not, local and over-network links differ hugely in message latency, and that massively affects how many messages you can get away with sending before you cause performance problems. This is why GLX over the network is a failure.

    Every possible use case for GLX over the network falls into one of two answers for what you should do instead, if you want it to work well.
    1) It's better to render locally on the GPU/CPU next to the application and send 2D over the network, because for power usage, latency and so on this is going to beat GLX for the end user on thin terminals, mobile phones and the like (see the sketch after this list).
    2) It's better to alter your application to use Equalizer or an equivalent, so that the messaging between the GPU and the application is optimized for the network use case and can in fact perform. This is the better route for those who need distributed processing or lots of GPU power.
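    As an illustration of option 1, here is a minimal, hypothetical sender loop in the spirit of what VirtualGL and XPRA do: the frame is rendered next to the application, and only finished 2D pixels cross the network, one message per frame no matter how many GL calls produced it. The renderFrame and send functions are stand-ins invented for this sketch.

```typescript
// Option 1 in miniature: render next to the application, ship only the
// finished 2D frame. One network message per frame, however chatty the
// GL work that produced it was. Everything here is a stand-in.
const WIDTH = 640;
const HEIGHT = 480;

// Stand-in for local rendering: fills an RGBA buffer on the app side.
function renderFrame(frameNo: number): Uint8Array {
  const pixels = new Uint8Array(WIDTH * HEIGHT * 4);
  pixels.fill(frameNo % 256); // pretend this is the rendered image
  return pixels;
}

// Stand-in for the transport (in XPRA's case, a WebSocket to the client).
function send(frame: Uint8Array): void {
  console.log(`sent one ${frame.byteLength}-byte 2D frame`);
}

for (let frameNo = 0; frameNo < 3; frameNo++) {
  send(renderFrame(frameNo)); // only pixels ever cross the network
}
```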

    Both of these options basically make GLX over the network a superseded technology, because its design is fundamentally broken. GLX over the network failed to take into account the limitations that sending over a network imposes, and that makes it perform very poorly.

    Pretty much every time someone claims to have an example of GLX working well over the network, they have missed that it is in fact rendering locally to the application and being screen-scraped in some form, so what is sent over the network is 2D data (non-OpenGL data).

    mSparks, like it or not, the developers who made GLX over the network went on to work on other projects that superseded it and do in fact perform well, with the annoying catch that you have to re-code your application to use them.

    XPRA does not contain any true render-over-the-network OpenGL support. XPRA takes option 1: render local to the application, then send 2D images of the OpenGL window over the network.

    GLX over the network is very much another failed design, yes, along the lines of EGLStreams. The base ideas of both GLX over the network and EGLStreams sound great, but the implementations have a fundamental flaw in execution that makes the complete thing basically useless, and the only way to fix that is to go back and rewrite the thing from scratch without the flaw.

    Remember, VirtualGL rendering locally, even in software, and sending 2D over the network is going to give more real frames per second to the user's eyes than GLX over the network. It does not matter how powerful your GPU is; the problem is how chatty OpenGL is by default. With GLX over the network you end up bogged down in network communication, with all the messages OpenGL wants to send from the GPU to the client and from the client to the GPU.

    Yes, people trying to push the X11 server as necessary keep pushing GLX over the network without being aware that GLX over the network is a pile of brokenly implemented garbage that has been superseded in two different ways, both of them effective solutions that do perform well.

    Yes, by 2001 it was becoming clear that GLX over the network had no future and that it was pointless to keep adding opcodes to GLX to support newer OpenGL. And yes, that was before Wayland was even a concept. Basically, both of you are more than two decades of development out of date.



    • Originally posted by mSparks View Post

      And that is why XPRA is superior to most other solutions. "However", the FPS for the most part is going to depend on the OS's ability to render HTML, since all the elements are mostly HTML5.


      Obviously it was not being rendered at 2000 fps; the display can only manage 60 fps for a start.
      The point is how quickly the application can run, and how responsive the UI is.

      I'd be surprised if WebGL isn't ingesting the GLX directly. If it turns out it isn't, I might put that down as a thing to do for a Christmas project; it seems like it would be useful. But again, I would be fairly amazed if that wasn't the current situation.
      Be amazed!

      WebGLRenderer: https://github.com/Xpra-org/xpra-htm...smpeg.js#L3658
      Video to WebGL canvas renderer init: https://github.com/Xpra-org/xpra-htm...jsmpeg.js#L297

      But that's about it concerning WebGL stuff in the client codebase, as it appears to only be used for MPEG-1 stream display: https://github.com/Xpra-org/xpra-htm...ndex.html#L939

      Here you have the packet handlers... curiously, I can't find anything but info, control and 2D image/video handlers: https://github.com/Xpra-org/xpra-htm...Client.js#L461

      XPRA HTML5 client Window representation: https://github.com/Xpra-org/xpra-htm.../Window.js#L27

      Note the comment:

      This is the class representing a window we draw on the canvas.
      It has a geometry, it may have borders and a top bar.
      The contents of the window is an image, which gets updated
      when we receive pixels from the server.

      To me it looks like you've got your xmas project plans covered.
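      For reference, what that quoted comment describes boils down to something like this browser-side sketch: decoded pixels arriving from the server get painted onto the window's canvas as a plain 2D image. This is a simplified illustration of the pattern, not XPRA's actual code; the onPixels handler name and message shape are invented.

```typescript
// Simplified illustration of the pattern in the quoted comment: the window
// is a canvas, and its contents are just an image updated when pixels
// arrive from the server. Not XPRA's actual code; names are invented.
function onPixels(
  canvas: HTMLCanvasElement,
  x: number,
  y: number,
  width: number,
  height: number,
  rgba: Uint8ClampedArray, // decoded pixels from the server
): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  const image = new ImageData(rgba, width, height);
  ctx.putImageData(image, x, y); // plain 2D blit; no GL commands involved
}
```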



      • Originally posted by oiaohm View Post
        *snipped one incoherent rant*
        Please use punctuation. You (probably) know what you meant, but you may be the only one.



        • Originally posted by mSparks View Post

          In case you missed it, Google recently tried to commercialise a serialised full modern graphics stack. Maybe you didn't hear about it? They called it Stadia.

          The key limitation for anything more advanced than early OpenGL is latency, similar to GLES because of GPU limitations.

          Could we have something better than GLX? Sure, but key developers were too busy wasting their time with Wayland on ideas that turned out to be hyperbole.
          Just pointing out that there are thousands of developers and, especially as volunteers, they work on what they want, without any obligation to you or to what you think is good or should work. You're free to contribute where you want. It's curious how the display-stack developers prefer Wayland, though...
