Not All Linux Users Want To Toke On LLVMpipe


  • #11
    Originally posted by mark45 View Post
    I didn't quite get it from the article: do modern servers ship with (very) old graphics cards like Matrox?
    Michael's examples are a little dated, but only a little. The same principle still applies to newer server hardware, where it's often a newer chip with even weaker graphics capabilities. AFAIK, the main reason they ship with an actual graphics chip at all is to plug into a network KVM switch (or to tie into an onboard remote management controller that does KVM-over-IP independently of the host CPU), so that you can get a remote display without relying on the OS to run a server for it.

    Comment


    • #12
      Software and OpenGL compositing

      It is not that difficult to provide two backends for compositing. Enlightenment uses the same backends as any EFL application for compositing, which means both software and OpenGL (including GL ES) are provided. The software backend is faster than OpenGL in many scenarios and also more stable (as it puts less pressure on the driver). It is possible to use a 600 MHz Pentium to do software compositing on a 1024x800 screen without any speed issue. In fact, OpenGL is not a 2D API and is really not as efficient as a software implementation can be.
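
      A minimal sketch of what "two backends" can look like in practice (purely illustrative C; the names and structure are made up, not the actual EFL/Enlightenment code): the compositor talks to one small interface, and the software and OpenGL implementations live behind it, with software as the fallback when the GL path cannot initialize.

      Code:
      /* Illustrative backend switch for a compositor; not a real API. */
      #include <stdbool.h>
      #include <stddef.h>

      typedef struct {
          const char *name;
          bool (*init)(void);                                          /* set up the output        */
          void (*draw)(const void *argb, int x, int y, int w, int h);  /* paint one window buffer  */
          void (*present)(void);                                       /* push the frame to screen */
      } comp_backend;

      /* The GL backend may fail on a broken driver... */
      static bool gl_init(void) { return false; /* e.g. no usable EGL/GLX context */ }
      static void gl_draw(const void *argb, int x, int y, int w, int h) { /* texture upload + quad */ }
      static void gl_present(void) { /* swap buffers */ }

      /* ...the software backend always works. */
      static bool sw_init(void) { return true; }
      static void sw_draw(const void *argb, int x, int y, int w, int h) { /* blend into a CPU buffer */ }
      static void sw_present(void) { /* copy the damaged area to the front buffer */ }

      static const comp_backend backends[] = {
          { "opengl",   gl_init, gl_draw, gl_present },
          { "software", sw_init, sw_draw, sw_present },
      };

      const comp_backend *comp_pick_backend(void)
      {
          for (size_t i = 0; i < sizeof backends / sizeof backends[0]; i++)
              if (backends[i].init())
                  return &backends[i];
          return NULL;
      }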

      Comment


      • #13
        Originally posted by bleubugs View Post
        It is not that difficult to provide two backend for doing compositing.
        There is far more to modern graphics than compositing. And even then, yes, it is more difficult. Not only is it two codepaths to write, test, and maintain; there's also the matter of enabling and disabling entire swathes of features based on the compositing backend, as naive software compositing is barely capable of handling straight-up transparent "cutout", much less shadows, de-focus blur, transforms (including simple accessibility ones like magnification), and so on. So then there's another set of code that has to be written, tested, and maintained. If you want really fast compositing in software, you need a really fancy software graphics engine... like LLVMpipe. (Or WARP** for Windows/Direct3D... I'd like to see some benchmarks between WARP and LLVMpipe some time!)

        All to support a tiny minority of users who haven't upgraded their machines in many years, or who are running oddball setups that aren't standard fare for a desktop stack anyway.

        ** On the note of WARP, it's a very interesting design. It provides a very high-performance, fully D3D11-compliant software rasterizer. It was designed to mimic the performance of real hardware in such a way that you can test your app's performance characteristics on WARP and have them translate well to how it'll scale on real hardware (that is, find the bottlenecks of your app; obviously you'd expect the app to run faster overall on real hardware, but you shouldn't expect that a call that took 2% of the render time on WARP will suddenly take 10% of the render time on hardware). It's also being extended/reused in the upcoming enhanced PIX tools from Microsoft, giving you a way to step through and inspect shader code in your unmodified app, without needing specialized hardware setups (like NVIDIA's tools or the tools for the Xbox 360 ATI graphics chip) to do it. That's the kind of developer-oriented tooling the FOSS graphics stack should be focusing on ASAP; Linux's biggest draw on the desktop is as a developer machine, yet its developer tools seem to be lagging further and further behind. Vim and GDB aren't exactly the cutting edge of developer tools anymore.

        In fact, OpenGL is not a 2D API and is really not as efficient as a software implementation could be.
        That is not at all true. OpenGL has a lot of problems, but there's no reason it can't do standard 2D rendering faster than software. 2D rendering on the CPU is just (in simplified terms) a big loop over a 2D grid doing a lot of SIMD operations. That's _exactly_ what the GPU is designed to excel at, and OpenGL is just an API that exposes that hardware to software.
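
        To make the "big loop doing per-pixel math" point concrete, here is roughly the inner loop a CPU compositor runs for a premultiplied-alpha "over" blend (illustrative code, using the common shift-by-8 approximation of dividing by 255); a GPU fragment shader performs the same arithmetic, just across thousands of pixels in parallel.

        Code:
        #include <stdint.h>

        /* dst = src OVER dst, premultiplied alpha, packed 32-bit ARGB. */
        void blend_over(uint32_t *dst, const uint32_t *src, int npixels)
        {
            for (int i = 0; i < npixels; i++) {
                uint32_t s   = src[i];
                uint32_t d   = dst[i];
                uint32_t inv = 255 - (s >> 24);   /* 255 - source alpha */
                /* scale dst's R/B and A/G channel pairs by inv, two channels at a time */
                uint32_t rb = (((d & 0x00ff00ffu) * inv) >> 8) & 0x00ff00ffu;
                uint32_t ag = ((((d >> 8) & 0x00ff00ffu) * inv) >> 8) & 0x00ff00ffu;
                dst[i] = s + rb + (ag << 8);
            }
        }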

        That most of the FOSS drivers have major problems, and that precious few FOSS desktop developers have the first clue how to actually use OpenGL properly to do fast 2D, is possibly quite true. The vast majority of GUI toolkits and higher-level 2D rendering APIs are written like it's still the 1990s and don't actually have a render backend capable of working the way a GPU needs them to work. It's entirely 100% possible to make one, though, and the speed benefits can be _massive_, not to mention the battery life benefits for mobile devices (letting your GPU do a quick burst of work is more efficient than having your CPU chug through it for a longer period of time).

        The primary problem is a complete lack of batching (that is, many GUI toolkits do one or more draw calls per widget/control, whereas it is possible to do a handful or even just one for the entire window) and a lack of understanding of how to use shaders to maximum effect. Even a lot of game-oriented GUI frameworks (for editors and such, or even ones designed for in-game HUDs and menu systems like Scaleform) get this terribly wrong.
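
        As a sketch of the batching idea (assuming a GL ES 2.0 context, an already-created and bound vertex buffer, and a texture atlas so every widget can share one state set; the structs and names are hypothetical): each widget only appends its quad to a CPU-side array, and the whole window then goes out in one upload and one draw call.

        Code:
        #include <GLES2/gl2.h>

        typedef struct { float x, y, u, v; } Vertex;   /* position + atlas texcoord */

        #define MAX_VERTS (16 * 1024)
        static Vertex batch[MAX_VERTS];
        static int    batch_count;

        /* Called per widget: copy its two triangles into the array, no GL calls yet. */
        void batch_quad(const Vertex quad[6])
        {
            if (batch_count + 6 <= MAX_VERTS)
                for (int i = 0; i < 6; i++)
                    batch[batch_count++] = quad[i];
        }

        /* Called once per frame: one buffer upload, one draw call for the whole window. */
        void batch_flush(void)
        {
            glBufferSubData(GL_ARRAY_BUFFER, 0,
                            batch_count * (GLsizeiptr)sizeof(Vertex), batch);
            glDrawArrays(GL_TRIANGLES, 0, batch_count);
            batch_count = 0;
        }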

        You have to design your render backend for a 2D system around how modern hardware actually works. You can't just slap an OpenGL backend into a toolkit that assumes it's using a software renderer.

        Comment


        • #14
          Originally posted by elanthis View Post
          There is far more to modern graphics than compositing. And even then, yes, it is more difficult. Not only is it two codepaths to write, test, and maintain; there's also the matter of enabling and disabling entire swathes of features based on the compositing backend, as naive software compositing is barely capable of handling straight-up transparent "cutout", much less shadows, de-focus blur, transforms (including simple accessibility ones like magnification), and so on. So then there's another set of code that has to be written, tested, and maintained. If you want really fast compositing in software, you need a really fancy software graphics engine... like LLVMpipe. (Or WARP** for Windows/Direct3D... I'd like to see some benchmarks between WARP and LLVMpipe some time!)
          Any graphics toolkit today provides a software backend, and using it in a compositor is just a matter of putting the window content into an image and letting the toolkit do all the transformations it wants with it. I have no idea what KDE and GNOME are doing, but for Enlightenment we are able to do just that with a very small team. For much bigger projects like KDE and GNOME that have strong backing from companies, I would be surprised if they don't have the same infrastructure we have. As soon as you have a "scene graph" in your toolkit, any compositor can just use it. There is no special casing: cutout, shadow, and transforms are already part of what the toolkit is able to do. The compositor doesn't have any special code to maintain for that; the toolkit does the job for you.
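
          For what it is worth, here is a rough sketch of that "wrap the window in a toolkit image object" idea using core Evas calls (the pixel buffer, sizes, and the rotation effect are stand-ins for whatever a real compositor would do; treat it as an illustration, not Enlightenment's actual compositor code).

          Code:
          #include <Evas.h>

          Evas_Object *show_client(Evas *canvas, void *window_pixels, int w, int h)
          {
              /* Wrap the client's buffer in an ordinary scene-graph image object. */
              Evas_Object *img = evas_object_image_filled_add(canvas);
              evas_object_image_size_set(img, w, h);
              evas_object_image_data_set(img, window_pixels);
              evas_object_resize(img, w, h);
              evas_object_show(img);

              /* Any effect the toolkit already supports (here, a rotation via an
               * Evas map) now applies to the window on either backend. */
              Evas_Map *m = evas_map_new(4);
              evas_map_util_points_populate_from_object(m, img);
              evas_map_util_rotate(m, 10.0, w / 2, h / 2);
              evas_object_map_set(img, m);
              evas_object_map_enable_set(img, EINA_TRUE);
              evas_map_free(m);

              return img;
          }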

          Originally posted by elanthis View Post
          All to support a tiny minority of users who haven't upgraded their machines in many years, or who are running oddball setups that aren't standard fare for a desktop stack anyway.
          Seriously, go out there. AMD provides shitty drivers that are barely usable with a compositor. Nouveau is not in a good state. Intel has power management issues with some of their drivers on the i7 when used by a compositor. The least problematic one is the latest NVIDIA driver (yeah, really the latest one; before that you also had a lot of bugs). The main reason is that people tend to focus on games for performance and benchmarks, while providing a stable driver for compositing is more difficult to measure. You may have the money to buy hardware that works with your software, but sometimes it is nice to write software that people can use on their hardware.

          Originally posted by elanthis View Post
          That is not at all true. OpenGL has a lot of problems, but there's no reason it can't do standard 2D rendering faster than software. 2D rendering on the CPU is just (in simplified terms) a big loop over a 2D grid doing a lot of SIMD operations. That's _exactly_ what the GPU is designed to excel at, and OpenGL is just an API that exposes that hardware to software.
          Most of the time only a small part of the screen gets updated, just where your cursor is, and that's it. For that, in OpenGL there is currently no extension to make sure the backbuffer is preserved between two buffer swaps, no way to know how many buffers the driver is actually cycling through, and no way to do a partial update on a recycled buffer. This means that most of the time you do a full-screen redraw when you may only need to update a 30 by 30 pixel block. So you are doing a full-screen update using a lot of memory bandwidth (a direct impact when you have an integrated GPU) instead of just pushing a few pixels. Guess what: software is way faster in that case. This kind of extension would be easier to add on top of the Wayland protocol, but for now we don't have it, and this directly impacts performance. 2D rendering is not about just dumbly walking a grid. Even with OpenGL you don't do that. You walk a list of objects to render and decide what to render, what not to, and when. That logic holds for both backends and can in fact be completely shared.

          That's the biggest cost. There are other cases where OpenGL isn't as fast as a software implementation, but those just affect the size and complexity of the area you can update before the brute force of the GPU catches up.
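
          A back-of-the-envelope sketch of the cost difference described above (the software path is plain C over a 32-bit ARGB buffer with a per-row stride in pixels; the GL path is only summarized in a comment, since it depends on the windowing system):

          Code:
          #include <stdint.h>
          #include <string.h>

          typedef struct { int x, y, w, h; } Rect;

          /* Software path: touch only the damaged rectangle. A 30x30 cursor-sized
           * update moves about 3.6 KB of pixels. */
          void software_update(uint32_t *front, int front_stride,
                               const uint32_t *back, int back_stride, Rect damage)
          {
              for (int row = 0; row < damage.h; row++)
                  memcpy(front + (size_t)(damage.y + row) * front_stride + damage.x,
                         back  + (size_t)(damage.y + row) * back_stride  + damage.x,
                         (size_t)damage.w * sizeof(uint32_t));
          }

          /* GL path, with nothing guaranteeing that last frame's backbuffer survived
           * the swap:
           *     draw_entire_scene();        ... every object, every frame
           *     eglSwapBuffers(dpy, surf);  ... full-screen present
           * At 1920x1080 that is roughly 8 MB of pixels pushed through the GPU and
           * the memory bus for the same 30x30 visible change. */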

          Originally posted by elanthis View Post
          not to mention the battery life benefits for mobile devices (letting your GPU do a quick burst of work is more efficient than having your CPU chug through it for a longer period of time).
          That completely depends on the amount of work you need to do: the smaller the area, the less benefit you get. Powering up the GPU means another huge number of cores consuming power. There is a threshold for when to use it and when not to. If you really care about performance and power consumption, having a hybrid engine and being able to switch from GL to software makes sense in some cases.
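
          The threshold idea can be as simple as something like this (the 10% cut-off is an arbitrary placeholder, not a measured number):

          Code:
          typedef enum { BACKEND_SOFTWARE, BACKEND_OPENGL } Backend;

          /* Decide per frame: small damage is cheaper to blend on the CPU than it is
           * to wake the GPU; large damage lets the GPU finish fast and idle again. */
          Backend pick_backend_for_frame(long damaged_pixels, long screen_pixels)
          {
              return damaged_pixels < screen_pixels / 10 ? BACKEND_SOFTWARE
                                                         : BACKEND_OPENGL;
          }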

          Originally posted by elanthis View Post
          The primary problem is a complete lack of batching (that is, many GUI toolkits do one or more draw calls per widget/control, whereas it is possible to do a handful or even just one for the entire window) and a lack of understanding of how to use shaders to maximum effect. Even a lot of game-oriented GUI frameworks (for editors and such, or even ones designed for in-game HUDs and menu systems like Scaleform) get this terribly wrong.
          This is definitely not true of EFL, and I bet the same goes for Qt. That was maybe true 5 years ago, but not anymore. GUI toolkits that want to play nicely on embedded devices have been forced to do this as soon as possible, or they would simply not be usable at all. Due to the pressure to go into the embedded market, toolkits have put a lot of effort into power consumption and, as a side effect, into performance (the two are completely linked), and that space is changing and evolving rapidly. We are at a point today where the OpenGL API and the X API are clearly an issue.

          Comment


          • #15
            Originally posted by zxy_thf View Post
            Yes, and the good intel HD GPU is not a part of Xeon E5 nor E7.
            Intel® Xeon® Processor X5550 just says: integrated graphics

            which is in an HP ProLiant DL360 G7 with release date: 2011-06-21

            But who runs a Linux Desktop on that?

            Comment


            • #16
              Originally posted by elanthis View Post
              There is far more to modern graphics than compositing. And even then, yes, it is more difficult. Not only is it two codepaths to write, test, and maintain; there's also the matter of enabling and disabling entire swathes of features based on the compositing backend, as naive software compositing is barely capable of handling straight-up transparent "cutout", much less shadows, de-focus blur, transforms (including simple accessibility ones like magnification), and so on. So then there's another set of code that has to be written, tested, and maintained.
              Right, and this is exactly why the design choice of both GNOME Shell and Unity to be a plugin for a single window manager is absolutely retarded, and why KDE Plasma Desktop has the superior technology. KPD just sends hints to the WM: transparency here, blur there. You get it.
              If the WM does not understand those hints, they're simply not displayed. If a Plasma theme has not been developed with that in mind, it can look awkward at times, but it'll still work!
              KWin could remove all non-GL code right now and KPD would still work with OpenBox, TWM, etc.
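
              For illustration, the "hint" is literally just a property set on the client window: a compositing WM that understands the property applies the effect, and every other WM simply ignores it. (The opacity property below is a widely supported compositor hint and the blur one is KWin-specific; this is a sketch, not Plasma's actual code.)

              Code:
              #include <X11/Xlib.h>
              #include <X11/Xatom.h>

              void request_effects(Display *dpy, Window win)
              {
                  /* Roughly 70% opaque; honoured by compositing WMs, ignored by TWM and co. */
                  unsigned long opacity = (unsigned long)(0.7 * 0xffffffffUL);
                  XChangeProperty(dpy, win,
                                  XInternAtom(dpy, "_NET_WM_WINDOW_OPACITY", False),
                                  XA_CARDINAL, 32, PropModeReplace,
                                  (unsigned char *)&opacity, 1);

                  /* Ask KWin to blur what is behind the window (empty region = all of it). */
                  XChangeProperty(dpy, win,
                                  XInternAtom(dpy, "_KDE_NET_WM_BLUR_BEHIND_REGION", False),
                                  XA_CARDINAL, 32, PropModeReplace, NULL, 0);
              }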

              The most retarded party in this area is Canonical. They already had a Unity version (Unity2D) that was not a WM plugin. They could simply have said: "Look, we develop Unity[2D] with Compiz in mind. If Compiz doesn't work, it'll fall back to another WM. We don't support this, we won't write special fixes if Unity under Metacity looks strange at times, etc. It is just meant as a stopgap until you can install proper drivers."
              But no, they decided to throw away the superior technology to concentrate on the plugin-based version.

              Comment


              • #17
                1- ARM needs a decent open-source graphics driver no matter what; this is a requirement, and not just for DEs, but for Linux to succeed in ARM territory... I fail to see how Chrome OS (Ubuntu 10) and Android can be hardware accelerated on ARM while there is no driver, open or binary, for Linux distros to use... this is pure stupidity.


                2- The Linux desktop environments are A JOKE. Compiz is a joke; 3D effects on a DE are a JOKE. Wobbly windows and cubes: lube them and stick them up your ass.

                The Linux desktop should be 1. clean, 2. functional, 3. professional looking. What you have now is a bunch of DEs that either look like they were drawn by retards with crayons (KDE) or are having an existential crisis and don't know what they are supposed to be (GNOME).

                I was running LXDE when I had that piece-of-shit HP with unsupported ATI graphics, AND guess what: now I have Intel HD graphics that are really nicely supported, AND I'M STILL running LXDE. Look at Lubuntu 12.10, now improve it, and THAT'S WHAT A LINUX DESKTOP SHOULD LOOK LIKE.


                3- This LLVMpipe etc. is a joke... trust me, I have a bunch of old laptops here with old-ass mobile Radeons. YOU THINK UNITY 2D OR GNOME FALLBACK WERE A GOOD EXPERIENCE?? Trust me when I say this: if you happen to have old-ass Radeons or any unsupported graphics card, LLVMpipe or no LLVMpipe, YOU AIN'T GONNA BE RUNNING ANY MODERN LINUX DISTRO

                unless you waste hours upon hours editing and messing with xorg.conf and ShadowFB and NoAccel and all that shit.


                This LLVMpipe is sand in the eyes, a bone thrown to see if they shut up, but the reality remains:

                old hardware + modern Linux distros = forget about it


                and I know more about this than all of yous

                Comment


                • #18
                  Originally posted by mark45 View Post
                  I didn't quite get it from the article: do modern servers ship with (very) old graphics cards like Matrox?
                  Not necessarily old Matrox cards, but certainly some oddballs.
                  Most of the server boards I go for have things like the ASPEED AST2050. These are very weak chips when it comes to graphics, but they have iKVM. Think of it as hardware VNC. Basically, the machines can be unbootably fucked, and yet I can manipulate them remotely as if I were actually sitting right at them... including, but not limited to, accessing the BIOS setup program.

                  There are no 3D drivers for the AST2050, nor would I want any kind of complex composited graphics environment running on servers I have to access over iKVM. The graphics load over the network would be a very, VERY bad thing. Older non-composited UIs are fine because they do the blink-on-off thing, which keeps screen updates light. Fading in and out and all the various animations are far too heavy to run over the network.

                  Comment


                  • #19
                    Originally posted by zxy_thf View Post
                    Yes, and the good intel HD GPU is not a part of Xeon E5 nor E7.
                    Also, AMD has server CPUs without integrated GPUs.

                    Actually, a lot of server boards still use very basic IGPs and won't use one of those 'Fusion-bridge' cores just to show a BIOS screen that is maybe seen once in its lifetime. I bet Google and its 1 million servers don't care about the integrated GPUs.

                    Don't even say GPGPU; someone on this forum even said that the IGPs are simply not powerful enough to do anything meaningful.

                    Comment


                    • #20
                      Originally posted by oliver View Post
                      Actually, a lot of server boards still use very basic IGPs and won't use one of those 'Fusion-bridge' cores just to show a BIOS screen that is maybe seen once in its lifetime. I bet Google and its 1 million servers don't care about the integrated GPUs.
                      Google cares about it enough to spend money on Coreboot (and not only because of Chromebooks, IIRC, but because of the possibility of having one remote interface for all their boards/servers)...

                      Comment
