Wayland & The Network; Gallium3D Netpipe?


  • Wayland & The Network; Gallium3D Netpipe?

    Phoronix: Wayland & The Network; Gallium3D Netpipe?

    In recent days on the Wayland development mailing list there's been a discussion about a HPC (High Performance Compute) architecture for Wayland. A few interesting ideas have been brought up...

    http://www.phoronix.com/vr.php?view=OTIyNw

  • #2
    What would be the benefit of having something like that in the driver?

    I mean, the goal should be (or at least that makes sense to me) to have something that is platform-, OS-, and toolkit-independent.

    • #3
      Don't forget there are X11 state trackers too, so this would also be a new render path that X11 could use. I actually ran the r300g Xorg state tracker a few weeks ago; it seemed a bit buggy at the time, but it did work.

      • #4
        Originally posted by cb88
        Don't forget there are X11 state trackers too, so this would also be a new render path that X11 could use. I actually ran the r300g Xorg state tracker a few weeks ago; it seemed a bit buggy at the time, but it did work.
        Yes, but does X need that?

        I mean, X can already run on many platforms and do its network stuff without a problem, and it will be legacy software sooner or later (hopefully sooner).

        • #5
          What I've been wishing for a long time is a way to offload HD video decoding/post-processing to a more powerful machine. I once tried to use X forwarding for this purpose, but discovered that my gigabit LAN didn't have enough bandwidth for uncompressed video streaming. Also, NVIDIA's proprietary video acceleration refused to work this way: it only supports rendering directly to the monitor attached to the video card.
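          For scale, a quick back-of-the-envelope calculation (assuming 1080p at 24-bit color and 60 fps; these numbers are mine, not from the post) shows why a gigabit link can't carry uncompressed video:

```python
# Hypothetical bandwidth estimate for uncompressed 1080p video streaming.
width, height = 1920, 1080
bytes_per_pixel = 3        # 24-bit RGB
fps = 60

bits_per_second = width * height * bytes_per_pixel * 8 * fps
gbps = bits_per_second / 1e9
print(f"{gbps:.2f} Gbit/s")  # roughly 2.99 Gbit/s, about 3x a gigabit LAN
```

          Even at 30 fps that is still about 1.5 Gbit/s, so forwarding raw frames over gigabit Ethernet simply can't keep up.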

          • #6
            GSoC

            It isn't surprising no one has stepped up yet for the GSoC project; applications aren't supposed to start coming in until the 28th. Some people seem to be jumping the gun a little, but we still have a few weeks.

            • #7
              Originally posted by kirillkh
              What I've been wishing for a long time is a way to offload HD video decoding/post-processing to a more powerful machine. I once tried to use X forwarding for this purpose, but discovered that my gigabit LAN didn't have enough bandwidth for uncompressed video streaming. Also, NVIDIA's proprietary video acceleration refused to work this way: it only supports rendering directly to the monitor attached to the video card.
              You probably want to offload decode to a more powerful machine, but post-processing (i.e. render/present) should stay on your local machine, both because of the large volume of data involved and because generating the result directly in the framebuffer saves a lot of big copies.

              • #8
                Sure, if that's the best way to split the work.

                • #9
                  Originally posted by bridgman
                  You probably want to offload decode to a more powerful machine, but post-processing (i.e. render/present) should stay on your local machine, both because of the large volume of data involved and because generating the result directly in the framebuffer saves a lot of big copies.
                  Whatever you do, you're going to have to compress the output from the screen/framebuffer and then transmit it to the device. Working with video demands that, because no ordinary link can carry the uncompressed stream.

                  I'm surprised nobody is doing anything based on the Kernel Virtual Machine (KVM).
                  Making that network-transparent would be a very universal solution.

                  I also see that there are two things: sending calls, which requires little bandwidth, and sending pieces of (or whole) screen buffers, which will require a video codec that can do streaming.

                  And the best solution is of course a protocol specification that can do both, so that a program can be as efficient as possible while allowing all kinds of content.

                  Rather than X.Org doing all this, we need Linux kernel infrastructure for it, because we need a universal system. KVM would be great for that.
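                  As a rough illustration of that two-channel idea, a protocol could tag each message as either a small call or a chunk of compressed video, multiplexed over one connection. This framing (the tag values and field layout) is purely hypothetical:

```python
# Hypothetical framing for two kinds of traffic over one connection:
# small "call" messages and large compressed-frame chunks.
import struct

HEADER = struct.Struct("!BI")  # 1-byte type tag, 4-byte payload length
CALL, VIDEO = 0, 1

def encode(msg_type: int, payload: bytes) -> bytes:
    """Prefix a payload with its type tag and length."""
    return HEADER.pack(msg_type, len(payload)) + payload

def decode(buf: bytes):
    """Yield (type, payload) pairs from a buffer of framed messages."""
    offset = 0
    while offset + HEADER.size <= len(buf):
        t, n = HEADER.unpack_from(buf, offset)
        offset += HEADER.size
        yield t, buf[offset:offset + n]
        offset += n
```

                  Small call messages and big video chunks then share one stream, and the receiver can dispatch each to the right handler by tag.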

                  • #10
                    Originally posted by plonoma
                    I also see that there are two things: sending calls, which requires little bandwidth, and sending pieces of (or whole) screen buffers, which will require a video codec that can do streaming.
                    I don't know about that....

                    A stream of video would require that data be constantly sent from the server to the client. If you want to be sure the stream isn't corrupted, you'd need either TCP or some other mechanism to verify the data, meaning extra round trips (adding latency). You could just as easily do the same with individual (and smaller) frames that represent only a portion of the screen, sent with coordinates and a checksum over UDP (or some other simple protocol), and use a lot less bandwidth with lower latency. If there is ever corruption (the checksum doesn't match), the client could send back a request for the full screen and then continue from there with the small frames.

                    The problem with that is that, on unstable/unreliable connections, the client could potentially "freeze" (no input, keyboard or mouse, would do anything to the widgets on the screen) and nothing (except the mouse, if it's rendered locally) would move or animate (if it was in the first place). The client could, of course, detect that it's no longer receiving any frames/updates and display a notice to the user, say, "No longer receiving communications from server, connection may be unstable or broken". Don't ask me how the client would reconnect; that'd be implementation specific (depending on whether it's the whole desktop being remoted, full-screen; just an app; or the whole desktop within a "virtual desktop" window).
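                    A minimal sketch of such an update datagram (the field widths and layout here are my own assumptions, not a proposed standard): region coordinates plus a CRC32 checksum over the pixel payload, where a failed check tells the client to request a full-screen refresh.

```python
# Hypothetical partial-frame update datagram: coordinates + CRC32 checksum.
import struct
import zlib

HEADER = struct.Struct("!HHHHI")  # x, y, width, height, CRC32 of payload

def pack_update(x: int, y: int, w: int, h: int, pixels: bytes) -> bytes:
    """Build one update datagram for a rectangular region of the screen."""
    return HEADER.pack(x, y, w, h, zlib.crc32(pixels)) + pixels

def unpack_update(datagram: bytes):
    """Return (x, y, w, h, pixels), or None if the checksum fails,
    in which case the client would request a full-screen refresh."""
    x, y, w, h, crc = HEADER.unpack_from(datagram)
    pixels = datagram[HEADER.size:]
    if zlib.crc32(pixels) != crc:
        return None
    return x, y, w, h, pixels
```

                    Because each datagram is self-describing, lost or corrupt packets cost only one region until the client asks for a full frame, rather than stalling a whole TCP stream.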
