GNOME Shell Now Works With Software Rendering!

  • GNOME Shell Now Works With Software Rendering!

    Phoronix: GNOME Shell Now Works With Software Rendering!

    There's some great news today: it's now possible to run GNOME Shell with Mutter without having to rely upon any GPU hardware driver! The full shell now works with software rendering, rather than dropping to the fallback, thanks to improvements in Gallium3D's LLVMpipe...
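
    For anyone who wants to try this at home, here is a minimal sketch (my own, not from the article) of forcing a GL client onto Mesa's llvmpipe software rasterizer and confirming it took effect. It assumes Mesa and the glxinfo tool from mesa-utils are installed; LIBGL_ALWAYS_SOFTWARE is Mesa's standard override for this, and a full GNOME session would need the variable set in its environment rather than per-command.

        # Sketch: force Mesa's software rasterizer for one client and verify it.
        # Assumes glxinfo (mesa-utils) is installed and an X display is available.
        import os
        import subprocess

        env = dict(os.environ, LIBGL_ALWAYS_SOFTWARE="1")

        out = subprocess.run(["glxinfo"], env=env, capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.startswith("OpenGL renderer string"):
                # Expect something mentioning llvmpipe rather than a GPU driver.
                print(line.strip())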

  • #2
    Just in case anyone is wondering (or worrying) about the future of the traditional Fallback:

    https://fedoraproject.org/wiki/Featu...ntingency_Plan

    Fallback mode will still be around for a while, and if software rendering does not work out, or only works under certain circumstances, we will adapt the blacklisting mechanism that is currently used for fallback mode to only try the full experience when it has a chance of success, and continue to fall back to fallback mode in problematic cases.

    • #3
      I wonder if there is anything that could be done at the network level (such as with NoMachine's NX protocol) to further improve the performance when trying to use gnome shell over the public internet?

      I think the most efficient thing to do would be to have the 3d rendering with llvmpipe done on the server with an up-to-date CPU (first-gen Nehalem or newer) and then have the NX protocol figure out an efficient way to send the rendered objects to the proxy X server on the client side.

      This could really get rather complicated; the client-side X proxy may end up implementing more parts of the X.org wire protocols in order to robustly support compositing. The key will be to "intelligently" inform the client of what the data is and what transformations are being made, so that the *client* can do some of the transformations itself. That's what makes the NX protocol as efficient as it is: you can drag a window around a 2D desktop with NX at near-native speed over the public Internet because your *client*, which understands the X protocol, does much of the rendering calculation itself and caches pixmaps and such (see the toy sketch at the end of this post). It's not just another remote framebuffer.

      Then again, if we can have 3d rendering done server-side and somehow efficiently transfer the results to the client, then it would be possible (eventually, with innovations in NX) to have dedicated GPU hardware do GPU-intensive tasks on a mainframe-like server, and transmit the results over a network (say, a WiFi LAN) to a much less beefy, more mobile device, such as a tablet.

      So you'd have your tablet on your desk in a meeting doing apparently amazing, computationally-intensive real-time 3d rendering, but the actual legwork would be done on the server.

      I'm sure projects like these are all over the place in academia and industry now that tablets are becoming mainstream, and general-purpose GPUs are becoming popular workhorses in business.

      I'm just waiting for the likes of Red Hat or GNOME to come up with something similar for general-purpose free desktop usage, rather than settling for the first proprietary vendor to actually market something like this and make it easy to use.

      Anyway, I'm way off the topic of llvmpipe doing software gnome-shell, so yay for that. I'm sure that people running high-end servers with no GPU will appreciate being able to run g-s on the local console (or in the KVM-over-IP framebuffer, or whatever) for those times when you just have to use a graphical program.
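
      To make the caching point above concrete, here is a toy sketch (entirely my own illustration, not NX code) of a command-plus-pixmap-cache protocol: the "server" describes each frame as draw commands that reference pixmaps by hash, and the "client" only fetches pixel data it has not cached yet, so dragging a window costs a handful of bytes per frame instead of a full screen of pixels.

          # Toy illustration only: not NX, just the command-and-cache idea.
          import hashlib

          class Client:
              def __init__(self):
                  self.pixmap_cache = {}  # pixmap hash -> pixel data

              def apply(self, commands, fetch_pixmap):
                  """Apply one frame's draw commands; return bytes 'transferred'."""
                  traffic = 0
                  for _cmd, pixmap_hash, _x, _y in commands:
                      if pixmap_hash not in self.pixmap_cache:
                          data = fetch_pixmap(pixmap_hash)   # expensive: pixel transfer
                          self.pixmap_cache[pixmap_hash] = data
                          traffic += len(data)
                      traffic += 16                          # cheap: the command itself
                      # ...composite the cached pixmap at (_x, _y) locally...
                  return traffic

          # "Server" side: a 200x100 RGBA window pixmap, dragged across two frames.
          window_pixels = b"\x00" * (200 * 100 * 4)
          h = hashlib.sha1(window_pixels).hexdigest()
          store = {h: window_pixels}

          client = Client()
          print(client.apply([("put", h, 10, 10)], store.get))  # first frame: ~80 KB (pixels + command)
          print(client.apply([("put", h, 50, 10)], store.get))  # dragged frame: 16 bytes (command only)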

      • #4
        LLVMpipe still isn't fast enough for many OpenGL games...
        Nor should it be.

        • #5
          Originally posted by allquixotic View Post
          I wonder if there is anything that could be done at the network level (such as with NoMachine's NX protocol) to further improve the performance when trying to use gnome shell over the public internet?

          [...]

          While it isn't finished yet, this might be a viable option since it allows command submission and also uses efficient compression.
          This document presents a survey of VDI (Virtual Desktop Infrastructure) systems. It focuses on currently popular VDI solutions such as Microsoft RDS, Citrix XenDesktop, Red Hat Enterprise Virtualization for Desktops, and VMware View. By analyzing the architecture and protocol flows of these solutions, the common features of VDI architectures and protocols are summarized.

          That provides a concise explanation of the major protocols (though it does exclude NX).

          • #6
            Michael, you mention "While most have open or closed-source 3D drivers available for their GPU, this will be of use to those with QEMU guests where hardware acceleration is lacking, those using Intel Poulsbo on open-source, etc."

            I'm wondering, what is the state of "true" hardware acceleration in the different virtual machines/environments? QEMU, KVM, VMware, Citrix, VirtualBox, etc.? It seems to me that VMware does have true hardware acceleration, but only up to OpenGL 2.1. What about the other ones?

            It would be very interesting to see a comparison of the different virtual machines, including performance tests, etc.! If there are already tests like that, I'd really appreciate a link.

            • #7
              For VirtualBox, see this for example: https://www.virtualbox.org/attachmen...35/glxinfo.txt

              • #8
                Originally posted by DanL View Post
                For VirtualBox, see this for example: https://www.virtualbox.org/attachmen...35/glxinfo.txt
                Thanks! Is that real OpenGL GPU hardware acceleration, or OpenGL in software? Since it says Direct Rendering: Yes, it's in hardware, right?
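
                Worth noting: "direct rendering: Yes" on its own does not prove a hardware driver is in use; Mesa's software drivers such as llvmpipe typically report it too, since it only means the client is not going through indirect GLX. The renderer string is the better hint. Here is a small sketch (my own, assuming glxinfo from mesa-utils is installed) of checking it:

                    # Sketch: classify the GL renderer reported by glxinfo as
                    # software or (likely) hardware. Assumes glxinfo is installed.
                    import subprocess

                    out = subprocess.run(["glxinfo"], capture_output=True, text=True).stdout
                    renderer = next((l.split(":", 1)[1].strip()
                                     for l in out.splitlines()
                                     if l.startswith("OpenGL renderer string")), "unknown")

                    software_hints = ("llvmpipe", "softpipe", "software rasterizer", "swrast")
                    kind = "software" if any(h in renderer.lower() for h in software_hints) else "likely hardware"
                    print(renderer, "->", kind)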

                • #9
                  Originally posted by Hamish Wilson View Post
                  Just in case anyone is wondering (or worrying) about the future of the traditional Fallback:

                  https://fedoraproject.org/wiki/Featu...ntingency_Plan
                  This is good news for everybody else, but horrible news for Ubuntu. AFAIK their Unity shell depends on GNOME fallback mode, so if GNOME fallback mode itself falls into irrelevance, so do Ubuntu and the non-Unity-bar parts of its UI. Ubuntu would have to choose between unmaintained software (GNOME 2.24), abandoned software (GNOME 3 fallback mode), or GNOME Shell.

                  • #10
                    Originally posted by Hamish Wilson View Post
                    Just in case anyone is wondering (or worrying) about the future of the traditional Fallback:

                    https://fedoraproject.org/wiki/Featu...ntingency_Plan
                    FYI: That just means that FEDORA won't blow it out of the water. Unfortunately, they don't speak for GNOME. This is just the excuse that GNOME needs to screw everybody over.
