Not All Linux Users Want To Toke On LLVMpipe


  • Not All Linux Users Want To Toke On LLVMpipe

    Phoronix: Not All Linux Users Want To Toke On LLVMpipe

    OpenGL support is becoming an increasingly hard requirement on the Linux desktop. Even if your hardware comes up short, more desktops are requiring GL support, which means falling back to the CPU-based LLVMpipe Gallium3D driver...

    http://www.phoronix.com/vr.php?view=MTIxMTg

  • #2
    This is probably the best headline ever to appear on Phoronix.

    I have no further comments.



    • #3
      With VMware Workstation 9 and Player 5, 3D doesn't seem to work with guests on Linux hosts using Intel graphics. The usual installation of S2TC libraries or just enabling the reporting of S3TC support doesn't work anymore. VMware tech support tells me to disable 3D support or use Workstation 8. I certainly don't want software rendering of 3D effects in my Linux guests, but that's what I'm left with if I can't get 3D acceleration working.



      • #4
        "Users running old hardware won't get modern features"

        Uh duhhh.

        Next thing the self-entitled free-loading whiners are going to complain about is that their 1963 Corvette doesn't have modern electronic conveniences and fuel economy. Or that their old Walkman can't play movies, browse the Internet, or make phone calls. Or that their Crosley Model 158 doesn't pick up digital stations. Or that their old NES can't play Gamecube games. Or that the slab of steak they bought three weeks ago went bad and isn't edible even after "installing" a dose of fresh A-1 sauce. Or how their 18 year old fit and trim figure is irrevocably gone and has been replaced with 50 flabby pounds after a decade of sitting all day in front of a computer eating Cheetos and pounding Mountain Dew while bitching online about the Free OS they downloaded for no cost and never contributed anything meaningful to.

        If you have ancient hardware, the ancient software made for that hardware still exists. Use that. It still works. If you want modern software, you're going to need something approaching the hardware it was designed for. That may cost some money. Suck it up and pay it. Or just stop using a computer and go do something else with your time (really, it's okay, you don't need to be on the Internet 24/7 and play the latest games and have the latest tech toys and see all the newest Youtube videos just to survive and have a good life, I promise).

        </rant>

        Complaints about ARM support and VMs are still valid. I don't agree that the desktop should be held back, though. Either someone needs to get their ass in gear and fix the problems there, or those use cases should just be honestly cast aside as unsupported. Obviously fixing the problems is the better solution, but if the FOSS community and its theoretical millions of contributors don't care enough to fix it, then there's not much more to say on the topic. The primary use cases of Linux in VMs and on ARM are still supported (servers that don't need graphics and mobile appliances that use specialized graphics stacks), and oddball use cases need to give way to more common use cases when there is a conflict (e.g., modern graphics features for most people vs weird desktop environments for a few people).



        • #5
          I would like to subscribe to your newsletter, elanthis!



          • #6
            He missed one point about why you do not want LLVM on a server: if you really care about security, you really do not want memory that is both writable and executable unless it is hard-restricted in other ways. The current Mesa/swrast-over-LLVM path needs exactly that without being very restricted, which is one of the reasons you can hit problems with it on hardened/PaX kernels.

            That said: do you really want to run X, which currently runs as *root*, on a server anyway?
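
            For what it's worth, the conflict is easy to see in a minimal sketch (my own illustration, not Mesa code): a JIT like the one behind LLVMpipe needs memory it can both write generated machine code into and then execute, and a hardened/PaX kernel with MPROTECT enabled refuses exactly that kind of mapping.

                #include <stdio.h>
                #include <sys/mman.h>

                int main(void)
                {
                    /* a writable *and* executable anonymous mapping, the kind a
                     * JIT needs so it can emit machine code and then run it */
                    void *buf = mmap(NULL, 4096,
                                     PROT_READ | PROT_WRITE | PROT_EXEC,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                    if (buf == MAP_FAILED) {
                        perror("mmap W+X");  /* the expected result under PaX MPROTECT */
                        return 1;
                    }
                    /* a JIT would write generated code into buf here, then call it */
                    munmap(buf, 4096);
                    return 0;
                }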



            • #7
              Originally posted by elanthis View Post
              "Users running old hardware won't get modern features" Uh duhhh.
              It's not even that. It's more like, "Users running old hardware should get off the Gnome/Unity train and switch to a desktop with more accommodating devs (xfce/lxde/kde)."



              • #8
                I didn't quite get it from the article: do modern servers ship with (very) old graphics cards like Matrox?



                • #9
                  My understanding is that most servers ship with barely enough of a graphics chipset to get the BIOS to POST, and that's it.



                  • #10
                    Originally posted by MaxToTheMax View Post
                    My understanding is that most servers ship with barely enough of a graphics chipset to get the BIOS to POST, and that's it.
                    Yes, and the good Intel HD GPU is not part of the Xeon E5 or E7.



                    • #11
                      Originally posted by mark45 View Post
                      I didn't quite get it from the article: do modern servers ship with (very) old graphics cards like Matrox?
                      Michael's examples are a little dated, but only a little. The same principle still applies to newer server hardware, where it's often a newer chip with even weaker graphics capabilities. AFAIK, the main reason they ship with an actual graphics chip at all is to plug into a network KVM switch (or tie into an onboard remote management controller that does KVM-over-IP independently of the host CPU) so that you can get a remote display without relying on the OS to run a server for it.



                      • #12
                        Software and OpenGL compositing

                        It is not that difficult to provide two backends for compositing. Enlightenment uses the same backends as any EFL application for compositing, which means both software and OpenGL (also GL ES) are provided. The software backend is faster than OpenGL in many scenarios and also more stable (as it puts less pressure on the driver). It is possible to use a Pentium at 600 MHz to do software compositing on a 1024x800 screen without any speed issue. In fact, OpenGL is not a 2D API and is really not as efficient as a software implementation could be.
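
                        To illustrate the "two backends" point, a minimal sketch (my own illustration, not actual EFL/Enlightenment code) looks something like this: the compositor talks to one small table of function pointers, and a software or an OpenGL implementation is picked at startup.

                            #include <stdio.h>

                            /* one small vtable the compositor programs against */
                            struct backend {
                                const char *name;
                                void (*draw_window)(int x, int y, int w, int h, const void *pix);
                                void (*end_frame)(void);
                            };

                            /* software path: blit the window pixels into a CPU framebuffer */
                            static void sw_draw(int x, int y, int w, int h, const void *pix)
                            { (void)pix; printf("software blit %dx%d at %d,%d\n", w, h, x, y); }
                            static void sw_end(void) { /* push the damaged region to the screen */ }

                            /* GL path: upload the pixels as a texture and draw a quad */
                            static void gl_draw(int x, int y, int w, int h, const void *pix)
                            { (void)pix; printf("GL textured quad %dx%d at %d,%d\n", w, h, x, y); }
                            static void gl_end(void) { /* swap buffers */ }

                            static const struct backend backends[] = {
                                { "software", sw_draw, sw_end },
                                { "opengl",   gl_draw, gl_end },
                            };

                            int main(void)
                            {
                                const struct backend *b = &backends[0]; /* picked at startup */
                                b->draw_window(10, 10, 640, 480, NULL);
                                b->end_frame();
                                return 0;
                            }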



                        • #13
                          Originally posted by bleubugs View Post
                          It is not that difficult to provide two backends for compositing.
                          There is far more to modern graphics than compositing. And even then, yes, it is more difficult. Not only is it two codepaths to write, test, and maintain; there's also a matter of enabling and disabling entire swathes of features based on the compositing backend, as naive software compositing is barely capable of handling straight up transparent "cutout", much less shadows, de-focus blur, transforms (including simple accessibility ones like magnification), and so on. So then there's another set of code that has to be written, tested, and maintained. If you want really fast compositing in software, you need to get a really fancy software graphics engine... like LLVMpipe. (Or WARP** for Windows/Direct3D... I'd like to see some benchmarks between WARP and LLVMpipe some time!)

                          All to support a tiny minority of users who haven't upgraded their machines in many years, or who are running oddball setups that aren't standard fare for a desktop stack anyway.

                          ** On the note of WARP, it's a very interesting design. It provides a very high performance, fully D3D11-compliant software rasterizer. It was designed to mimic the performance of real hardware in such a way that you can test your app's performance characteristics on WARP and have it translate well to how it'll scale on real hardware (that is, find the bottlenecks of your app; obviously you'd expect the app to run faster overall on real hardware, but you shouldn't expect that a call that took 2% of the render time on WARP will suddenly take 10% of the render time on hardware). It's also being extended/reused in the upcoming enhanced PIX tools from Microsoft, giving you a way to step through and inspect shader code in your unmodified app, without needing specialized hardware setups (like NVIDIA's tools or the tools for the Xbox 360 ATI graphics chip) to do it. That's the kind of developer-oriented tooling that the FOSS graphics stack should be focusing on ASAP; Linux's biggest draw on the desktop is as a developer machine, yet it seems its developer tools are lagging further and further behind. Vim and GDB aren't exactly the cutting edge of developer tools anymore.

                          Originally posted by bleubugs View Post
                          In fact, OpenGL is not a 2D API and is really not as efficient as a software implementation could be.
                          That is not at all true. OpenGL has a lot of problems, but there's no reason it can't do standard 2D rendering faster than software. 2D rendering on the CPU is just (in simplified terms) a big loop over a 2D grid doing a lot of SIMD operations. That's _exactly_ what the GPU is designed to excel at, and OpenGL is just an API that exposes that hardware to software.
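
                          To make the "big loop over a 2D grid" claim concrete, here is a rough sketch (my own illustration, not any toolkit's code) of a CPU-side "source over destination" blend: every pixel runs the same small piece of arithmetic, which is exactly the per-fragment work a GPU parallelizes.

                              #include <stdint.h>
                              #include <stddef.h>

                              /* blend one 8-bit channel: "source over destination" */
                              static inline uint8_t blend(uint8_t s, uint8_t d, uint8_t a)
                              {
                                  return (uint8_t)((s * a + d * (255 - a)) / 255);
                              }

                              /* same per-pixel arithmetic over the whole rectangle */
                              void composite_argb(uint32_t *dst, const uint32_t *src,
                                                  size_t w, size_t h,
                                                  size_t dst_stride, size_t src_stride)
                              {
                                  for (size_t y = 0; y < h; y++) {
                                      for (size_t x = 0; x < w; x++) {
                                          uint32_t s = src[y * src_stride + x];
                                          uint32_t d = dst[y * dst_stride + x];
                                          uint8_t  a = (uint8_t)(s >> 24);
                                          uint32_t r = blend((s >> 16) & 0xff, (d >> 16) & 0xff, a);
                                          uint32_t g = blend((s >>  8) & 0xff, (d >>  8) & 0xff, a);
                                          uint32_t b = blend( s        & 0xff,  d        & 0xff, a);
                                          dst[y * dst_stride + x] =
                                              0xff000000u | (r << 16) | (g << 8) | b;
                                      }
                                  }
                              }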

                          It is possibly quite true that most of the FOSS drivers have major problems and that precious few FOSS desktop developers have the first clue how to actually use OpenGL properly to do fast 2D. The vast majority of GUI toolkits and higher level 2D rendering APIs are written like it's still the 1990s and don't actually have a render backend that's capable of working the way a GPU needs them to work. It's entirely 100% possible to make one, though, and the speed benefits can be _massive_, not to mention the battery life benefits for mobile devices (letting your GPU do a quick burst of work is more efficient than having your CPU chug through it for a longer period of time).

                          The primary problem is a complete lack of batching (that is, many GUI toolkits do one or more draw calls per widget/control, whereas it is possible to do a handful or even just one for the entire window) and a lack of understanding of how to use shaders to maximum effect. Even a lot of game-oriented GUI frameworks (for editors and such, or even ones designed for in-game HUDs and menu systems like ScaleForm) get this terribly wrong.
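
                          As a sketch of what batching looks like (my own illustration, not any particular toolkit's code, and assuming a current GL ES 2.0 context with a shader and vertex attributes already set up): every widget appends its quad to one CPU-side vertex array, and the whole window is then submitted with a single glDrawArrays().

                              #include <GLES2/gl2.h>
                              #include <stddef.h>

                              struct vertex { float x, y, u, v; };

                              static struct vertex batch[4096 * 6];  /* room for 4096 quads */
                              static size_t        batch_used;       /* vertices queued so far */

                              /* each widget appends its quad (two triangles) to the batch */
                              static void queue_quad(float x, float y, float w, float h)
                              {
                                  if (batch_used + 6 > sizeof(batch) / sizeof(batch[0]))
                                      return;                        /* real code would flush here */
                                  struct vertex *v = &batch[batch_used];
                                  v[0] = (struct vertex){ x,     y,     0, 0 };
                                  v[1] = (struct vertex){ x + w, y,     1, 0 };
                                  v[2] = (struct vertex){ x + w, y + h, 1, 1 };
                                  v[3] = (struct vertex){ x,     y,     0, 0 };
                                  v[4] = (struct vertex){ x + w, y + h, 1, 1 };
                                  v[5] = (struct vertex){ x,     y + h, 0, 1 };
                                  batch_used += 6;
                              }

                              /* one upload and one draw call for the whole window */
                              static void flush_batch(GLuint vbo)
                              {
                                  glBindBuffer(GL_ARRAY_BUFFER, vbo);
                                  glBufferData(GL_ARRAY_BUFFER,
                                               (GLsizeiptr)(batch_used * sizeof(struct vertex)),
                                               batch, GL_STREAM_DRAW);
                                  glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch_used);
                                  batch_used = 0;
                              }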

                          You have to design your render backend for a 2D system around how modern hardware actually works. You can't just slap an OpenGL backend into a toolkit that assumes it's using a software renderer.



                          • #14
                            Originally posted by elanthis View Post
                            There is far more to modern graphics than compositing. And even then, yes, it is more difficult. Not only is it two codepaths to write, test, and maintain; there's also a matter of enabling and disabling entire swathes of features based on the compositing backend, as naive software compositing is barely capable of handling straight up transparent "cutout", much less shadows, de-focus blur, transforms (including simple accessibility ones like magnification), and so on. So then there's another set of code that has to be written, tested, and maintained. If you want really fast compositing in software, you need to get a really fancy software graphics engine... like LLVMpipe. (Or WARP** for Windows/Direct3D... I'd like to see some benchmarks between WARP and LLVMpipe some time!)
                            Any graphics toolkit today provides a software backend, and using it in a compositor is just a matter of putting the window content into an image and letting the toolkit do all the transformations it wants with it. I have no idea what KDE and GNOME are doing, but for Enlightenment we are able to do just that with a very small team. For much bigger projects like KDE and GNOME that have strong company backing, I would be surprised if they don't have the same infrastructure we have. As soon as you have a "scene graph" in your toolkit, any compositor can just use it. There is no special case: cutout, shadow, and transforms are already part of what the toolkit is able to do. The compositor doesn't have any special code to maintain for that; the toolkit does the job for you.

                            Originally posted by elanthis View Post
                            All to support a tiny minority of users who haven't upgraded their machines in many years, or who are running oddball setups that aren't standard fare for a desktop stack anyway.
                            Seriously, go out there. AMD provides shitty drivers that make it barely usable with a compositor. Nouveau is not in a good state. Intel has power management issues with some of their drivers on the i7 when used by a compositor. The least problematic one is the latest NVIDIA driver (yeah, really the latest one; before that you also had a lot of bugs). The main reason is that people tend to focus on games for performance and benchmarks, while a stable driver for compositing is more difficult to measure. You may have the money to buy the hardware that works with your software, but sometimes it is nice to write software that people can use on their hardware.

                            Originally posted by elanthis View Post
                            That is not at all true. OpenGL has a lot of problems, but there's no reason it can't do standard 2D rendering faster than software. 2D rendering on the CPU is just (in simplified terms) a big loop over a 2D grid doing a lot of SIMD operations. That's _exactly_ what the GPU is designed to excel at, and OpenGL is just an API that exposes that hardware to software.
                            Most of the time only a small part of the screen gets updated: just where your cursor is, and that's it. In OpenGL there is currently no extension that guarantees the backbuffer will be preserved between two buffer swaps, no way to know how many buffers the driver is actually cycling through, and no way to do a partial update on a recycled buffer. This means that most of the time you do a full-screen redraw when maybe only a 30 by 30 pixel block needs updating. So you are doing a full-screen update, using a lot of memory bandwidth (a direct impact when you have an integrated GPU), instead of just pushing a few pixels. Guess what: software is way faster in that case. This kind of extension would be easier to add on top of the Wayland protocol, but for now we don't have it, and this directly impacts performance. 2D rendering is not about just dumbly walking a grid; even with OpenGL you don't do that. You walk a list of objects to render and decide what to render, what not to, and when. That logic holds for both backends and can in fact be completely shared.
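
                            To put rough numbers on that (assuming a 1920x1080 ARGB8888 framebuffer, which is my example figure, not one from the thread): a full-screen redraw touches 1920 x 1080 x 4 bytes, roughly 8 MB per frame, or about 500 MB/s of write traffic at 60 fps, while a 30 by 30 pixel update is only about 3.5 KB.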

                            That's the biggest cost. There are other cases where OpenGL isn't as fast as a software implementation, but those just affect the size and complexity of the area you can update before the brute force of the GPU catches up.

                            Originally posted by elanthis View Post
                            not to mention the battery life benefits for mobile devices (letting your GPU do a quick burst of work is more efficient than having your CPU chug through it for a longer period of time).
                            That completely depends on the amount of work you need to process: the smaller the area, the less benefit you get. Powering up the GPU means another huge number of cores consuming power. There is a threshold for when to use it and when not to. If you really care about performance and power consumption, having a hybrid engine and being able to switch from GL to software makes sense in some cases.

                            Originally posted by elanthis View Post
                            The primary problem is a complete lack of batching (that is, many GUI toolkits do one or more draw calls per widget/control, whereas it is possible to do a handful or even just one for the entire window) and a lack of understanding of how to use shaders to maximum effect. Even a lot of game-oriented GUI frameworks (for editors and such, or even ones designed for in-game HUDs and menu systems like ScaleForm) get this terribly wrong.
                            This is definitely not true with EFL, and I bet it is also the case with Qt. That was maybe true 5 years ago, but not anymore. GUI toolkits that want to play nicely on embedded devices have been forced to do this as soon as possible or they would just not be usable at all. Due to pressure to go into the embedded market, toolkits have put a lot of effort into power consumption and, as a side effect, into performance (the two are completely linked), and that space is rapidly changing and evolving. We are at a point today where the OpenGL API and the X API are clearly an issue.



                            • #15
                              Originally posted by zxy_thf View Post
                              Yes, and the good Intel HD GPU is not part of the Xeon E5 or E7.
                              Intel® Xeon® Processor X5550 just says: integrated graphics
                              http://ark.intel.com/products/37106/...-GTs-Intel-QPI
                              which is in an HP ProLiant DL360 G7 with release date: 2011-06-21

                              But who runs a Linux Desktop on that?

