Not All Linux Users Want To Toke On LLVMpipe


  • #16
    Originally posted by elanthis
    There is far more to modern graphics than compositing. And even then, yes, it is more difficult. Not only is it two codepaths to write, test, and maintain; there's also a matter of enabling and disabling entire swathes of features based on the compositing backend, as naive software compositing is barely capable of handling straight up transparent "cutout", much less shadows, de-focus blur, transforms (including simple accessibility ones like magnification), and so on. So then there's another set of code that has to be written, tested, and maintained.
    Right, and this is exactly why the design choice of both GNOME Shell and Unity to be a plugin for a single window manager is so braindead, and why KDE Plasma Desktop has the superior technology. KPD just sends hints to the WM: transparency here, blur there. You get it. (A minimal sketch of that hint mechanism follows at the end of this post.)
    If the WM does not understand those hints, they simply won't be displayed. If a Plasma theme has not been developed with that in mind, it can look awkward at times, but it'll still work!
    KWin could remove all non-GL code right now and KPD would still work with Openbox, twm, etc.

    The most braindead party in that area is Canonical. They already had a Unity version (Unity 2D) that was not a WM plugin. They could simply have said: “Look, we develop Unity [2D] with Compiz in mind. If Compiz doesn't work, it'll fall back to another WM. We don't support this, we won't write special fixes if Unity under Metacity looks strange at times, etc. It is just meant as a stopgap until you can install proper drivers.”
    But no, they decided to throw away the superior technology and concentrate on the plugin-based version.
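The hint model described in this post can be illustrated with a small, hypothetical fragment. It is only a sketch of the general X11 mechanism, not Plasma's actual code: the client sets a window property (here the de-facto _NET_WM_WINDOW_OPACITY translucency hint; the function name and the existing Display/Window are assumed), a compositing WM reads and honours it, and a plain WM simply never looks at it.

```c
/* Sketch only: a client-side "hint" in the X11 style the post describes.
 * A compositing WM that understands _NET_WM_WINDOW_OPACITY applies the
 * translucency; twm, plain Openbox, etc. just ignore the property.
 * Assumes dpy/win come from the usual XOpenDisplay/XCreateWindow calls.
 * Build: cc hint.c -lX11 */
#include <X11/Xlib.h>
#include <X11/Xatom.h>

static void set_opacity_hint(Display *dpy, Window win, double opacity)
{
    /* Encoded as a 32-bit CARDINAL: 0 = fully transparent, 0xffffffff = opaque. */
    unsigned long value = (unsigned long)(opacity * 0xffffffffUL);
    Atom prop = XInternAtom(dpy, "_NET_WM_WINDOW_OPACITY", False);

    XChangeProperty(dpy, win, prop, XA_CARDINAL, 32, PropModeReplace,
                    (unsigned char *)&value, 1);
    XFlush(dpy);
}
```

The point being made is that the theme degrades instead of failing: if no compositor is running, the property sits unread and the window is simply drawn opaque.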



    • #17
      1- ARM needs a decent open-source graphics driver no matter what. This is a requirement, and not just for DEs, but for Linux to succeed in ARM territory. I fail to see how Chrome OS (Ubuntu 10) and Android can be hardware accelerated on ARM while there is no driver, open or binary, to use on Linux distros... this is pure stupidity.


      2- The Linux desktop environments are A JOKE. Compiz is a joke, 3D effects on a DE are a JOKE. Wobbly windows and cubes? Lube them and stick them up your ass.

      The Linux desktop should be 1. clean, 2. functional, 3. professional-looking. What you have now is a bunch of DEs that either look like they were drawn with crayons (KDE) or that are having an existential crisis and don't know what they are supposed to be (GNOME).

      I was running LXDE when I had that piece-of-shit HP with unsupported ATI graphics, AND guess what: now I have Intel HD graphics that are really nicely supported, AND I'M STILL running LXDE. Look at Lubuntu 12.10, improve it, and THAT'S WHAT A LINUX DESKTOP SHOULD LOOK LIKE.


      3- This LLVMpipe stuff is a joke... trust me, I have a bunch of old laptops here with old-ass mobile Radeons. YOU THINK UNITY 2D OR GNOME FALLBACK WERE A GOOD EXPERIENCE?? Trust me when I say this: if you happen to have old-ass Radeons or any unsupported graphics card, LLVMpipe or no LLVMpipe, YOU AIN'T GONNA BE RUNNING ANY MODERN LINUX DISTRO unless you waste hours upon hours editing and messing with xorg.conf and ShadowFB and NoAccel and all that (see the snippet below).
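For context, the xorg.conf fiddling being complained about here usually amounts to something like the fragment below. This is only an illustrative sketch: option names and support vary per DDX driver (check the driver's man page), and "OldRadeon" is just a placeholder identifier.

```
# /etc/X11/xorg.conf fragment: disable acceleration on an unsupported GPU and
# fall back to a software shadow framebuffer. Whether "NoAccel"/"ShadowFB" are
# honoured depends on the specific driver (radeon, mga, vesa, ...).
Section "Device"
    Identifier "OldRadeon"
    Driver     "radeon"
    Option     "NoAccel"  "true"
    Option     "ShadowFB" "true"
EndSection
```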


      This LLVMpipe business is sand in the eyes, a bone thrown to see if people shut up, but the reality remains:

      old hardware + modern Linux distros = forget about it


      and I know more about this than all of yous



      • #18
        Originally posted by mark45
        I didn't quite get it from the article: do modern servers ship with (very) old graphics cards like Matrox?
        Not necessarily old Matrox cards, but certainly some oddballs.
        Most of the server boards I go for have things like the Aspeed AST2050. These are very weak chips when it comes to graphics, but they have iKVM. Think of it as hardware VNC: the machines can be unbootably fucked, and I can still manipulate them remotely as if I were sitting right at them, including, but not limited to, accessing the BIOS setup program.

        There are no 3D drivers for the AST2050, nor would I want any kind of complex composited graphics environment running on servers I have to access over iKVM. The graphics load over the network would be a very, VERY bad thing. Older non-composited UIs are fine, because they do the blink-on-off thing, which keeps screen updates light. Fading in and out and all the various animations are far too heavy to run over the network.



        • #19
          Originally posted by zxy_thf
          Yes, and the good Intel HD GPU is not part of the Xeon E5 or E7.
          AMD also has server CPUs without integrated GPUs.

          Actually, a lot of server boards still use very basic IGPs and won't use one of those 'Fusion-bridge' cores just so they can show a BIOS that is maybe seen once in its lifetime. I bet Google and its million-plus servers won't care about integrated GPUs.

          Don't even bring up GPGPU; someone on this forum said that these IGPs are simply not powerful enough to do anything meaningful.



          • #20
            Originally posted by oliver
            Actually, a lot of server boards still use very basic IGPs and won't use one of those 'Fusion-bridge' cores just so they can show a BIOS that is maybe seen once in its lifetime. I bet Google and its million-plus servers won't care about integrated GPUs.
            Google cares about it enough to spend money on Coreboot (and not only because of the Chromebook, IIRC, but because of the possibility of having one remote interface for all their boards/servers)...



            • #21
              Originally posted by elanthis
              There is far more to modern graphics than compositing. And even then, yes, it is more difficult. Not only is it two codepaths to write, test, and maintain; there's also a matter of enabling and disabling entire swathes of features based on the compositing backend, as naive software compositing is barely capable of handling straight up transparent "cutout", much less shadows, de-focus blur, transforms (including simple accessibility ones like magnification), and so on. So then there's another set of code that has to be written, tested, and maintained. If you want really fast compositing in software, you need a really fancy software graphics engine... like LLVMpipe. (Or WARP** for Windows/Direct3D... I'd like to see some benchmarks between WARP and LLVMpipe some time!)

              All to support a tiny minority of users who haven't upgraded their machines in many years, or who are running oddball setups that aren't standard fare for a desktop stack anyway.

              ** On the note of WARP, it's a very interesting design. It provides a very high performance, fully D3D11-compliant software rasterizer. It was designed to mimic the performance of real hardware in such a way that you can test your app's performance characteristics on WARP and have it translate well to how it'll scale on real hardware (that is, find the bottlenecks of your app; obviously you'd expect the app to run faster overall on real hardware, but you shouldn't expect that a call that took 2% of the render time on WARP will suddenly take 10% of the render time on hardware). It's also being extended/reused in the upcoming enhanced PIX tools from Microsoft, giving you a way to step through and inspect shader code in your unmodified app, without needing specialized hardware setups (like NVIDIA's tools or the tools for the Xbox 360 ATI graphics chip) to do it. That's the kind of developer-oriented tooling that the FOSS graphics stack should be focusing on ASAP; Linux's biggest draw on the desktop is as a developer machine, yet it seems its developer tools are lagging further and further behind. Vim and GDB aren't exactly the cutting edge of developer tools anymore.
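For anyone curious about the WARP-vs-LLVMpipe benchmark the quote wishes for, this hedged sketch shows the starting point on the Windows side: asking D3D11CreateDevice for the WARP driver type instead of a hardware adapter. It assumes the Windows SDK headers and d3d11.lib; the rough counterpart on the Mesa side would be running the GL application with LIBGL_ALWAYS_SOFTWARE=1 so LLVMpipe is used.

```c
/* Sketch: create a Direct3D 11 device on the WARP software rasterizer,
 * e.g. as the starting point for a WARP-vs-LLVMpipe comparison.
 * Build with the Windows SDK and link against d3d11.lib. */
#define COBJMACROS
#include <d3d11.h>
#include <stdio.h>

int main(void)
{
    ID3D11Device        *device  = NULL;
    ID3D11DeviceContext *context = NULL;
    D3D_FEATURE_LEVEL    level;

    HRESULT hr = D3D11CreateDevice(
        NULL,                    /* default adapter (ignored for WARP)     */
        D3D_DRIVER_TYPE_WARP,    /* software rasterizer instead of the GPU */
        NULL, 0,                 /* no software module, no flags           */
        NULL, 0,                 /* default feature levels                 */
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (FAILED(hr)) {
        fprintf(stderr, "WARP device creation failed: 0x%08lx\n", (unsigned long)hr);
        return 1;
    }
    printf("WARP device created, feature level 0x%x\n", (unsigned)level);

    ID3D11DeviceContext_Release(context);
    ID3D11Device_Release(device);
    return 0;
}
```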



              That is not at all true. OpenGL has a lot of problems, but there's no reason it can't do standard 2D rendering faster than software. 2D rendering on the CPU is just (in simplified terms) a big loop over a 2D grid doing a lot of SIMD operations. That's _exactly_ what the GPU is designed to excel at, and OpenGL is just an API that exposes that hardware to software.

              That most of the FOSS drivers have major problems, and that precious few FOSS desktop developers have the first clue how to actually use OpenGL properly to do fast 2D, is possibly quite true. The vast majority of GUI toolkits and higher-level 2D rendering APIs are written like it's still the 1990s and don't actually have a render backend that's capable of working the way a GPU needs them to work. It's entirely 100% possible to make one, though, and the speed benefits can be _massive_, not to mention the battery life benefits for mobile devices (letting your GPU do a quick burst of work is more efficient than having your CPU chug through it for a longer period of time).

              The primary problem is a complete lack of batching (that is, many GUI toolkits do one or more draw calls per widget/control, whereas it is possible to do a handful, or even just one, for the entire window) and a lack of understanding of how to use shaders to maximum effect. Even a lot of game-oriented GUI frameworks (for editors and such, or even ones designed for in-game HUDs and menu systems like Scaleform) get this terribly wrong.

              You have to design your render backend for a 2D system around how modern hardware actually works. You can't just slap an OpenGL backend into a toolkit that assumes it's using a software renderer.
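To make the batching point concrete, here is a minimal sketch of the idea, under some loud assumptions: a current GL context already exists, the Widget struct and function name are invented for the example, and old client-side vertex arrays are used purely to keep it short (a real backend would stream into VBOs and use shaders, as the quote says). Every widget's rectangle is appended to one array and the whole window is drawn with a single glDrawArrays call.

```c
/* Sketch of batched 2D drawing: one draw call for all widgets, not one each.
 * Assumes a current GL context (e.g. created with GLFW/SDL). */
#include <GL/gl.h>

#define MAX_QUADS 1024

typedef struct { float x, y, w, h, r, g, b; } Widget;

static float verts[MAX_QUADS * 6 * 2];   /* 2 triangles * 3 vertices * (x,y) */
static float colors[MAX_QUADS * 6 * 3];  /* one (r,g,b) per vertex           */

static void draw_widgets_batched(const Widget *w, int count)
{
    int v = 0, c = 0;
    for (int i = 0; i < count && i < MAX_QUADS; i++) {
        float x0 = w[i].x, y0 = w[i].y;
        float x1 = w[i].x + w[i].w, y1 = w[i].y + w[i].h;
        /* Two triangles per widget, appended to one big array. */
        float quad[12] = { x0,y0, x1,y0, x1,y1,   x0,y0, x1,y1, x0,y1 };
        for (int j = 0; j < 12; j++) verts[v++] = quad[j];
        for (int j = 0; j < 6; j++) {
            colors[c++] = w[i].r; colors[c++] = w[i].g; colors[c++] = w[i].b;
        }
    }

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glColorPointer(3, GL_FLOAT, 0, colors);

    /* One draw call for the whole window instead of one per widget. */
    glDrawArrays(GL_TRIANGLES, 0, v / 2);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```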
              From what I've read, OpenGL actually isn't a good fit for efficient 2D rendering of anything that isn't a polygon. Apparently Apple developed a QuartzExtreme 2D backend (now called QuartzGL) a while ago but never got it working well enough to make it the default. So they use the CPU, like everyone else who isn't Windows or doesn't happen to have a blitter.
              http://www.cocoawithlove.com/2011/03...-graphics.html
              So, unless you want to extend your complaints of FOSS cluelessness about graphics hardware to Apple, you might want to say, rather, that Windows created an accelerated API that seems to be uniquely able to handle 2D.



              • #22
                For 2D drawing, would an API like OpenVG perhaps make more sense? The whole idea of Gallium3D was that you had different state trackers like GL/VG/D3D/XRender...

                Could Mesa not just make Direct2D, or something like it, a state tracker? Wayland is essentially EGL under the hood, which was designed for sharing buffers between APIs (EGLImage), so why not use the desired API instead of OpenGL? I'm guessing it is because Intel/NVIDIA/AMD's drivers don't support anything but GL...

                About updating the whole screen with GL: I thought glScissor, the stencil buffer, or the Z-buffer could be used to reject updating of pixels/fragments, so can't LLVMpipe use this to prevent overdraw? Does LLVMpipe share pointers around via EGLImage, or does it do texture_from_pixmap, i.e. expensive copies?

                -Alex
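On the glScissor part of the question above, a hedged sketch of how a renderer can restrict a repaint to a damaged rectangle follows. It assumes a current GL context, and redraw_scene() is an invented placeholder for whatever the toolkit or compositor would normally draw; as the next post explains, the harder part is knowing what the previous buffer still contains.

```c
/* Sketch: restrict a redraw to a damaged rectangle with the scissor test.
 * glScissor uses window coordinates with the origin at the bottom-left. */
#include <GL/gl.h>

void redraw_scene(void);  /* assumed to exist elsewhere */

void repaint_damaged(int x, int y, int width, int height)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, width, height);   /* only pixels inside this box are written */
    glClear(GL_COLOR_BUFFER_BIT);     /* the clear is scissored too              */
    redraw_scene();
    glDisable(GL_SCISSOR_TEST);
}
```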



                • #23
                  Originally posted by MistaED
                  About updating the whole screen with GL: I thought glScissor, the stencil buffer, or the Z-buffer could be used to reject updating of pixels/fragments, so can't LLVMpipe use this to prevent overdraw?
                  glScissor is not the issue here (it is part of the solution). The problem is that you have no way to guarantee that the previous buffer survived, and you don't know what pixels it contains. If you are cycling between n buffers, you need to know what n is. You also need to force the driver not to discard buffers (some drivers, like Intel's, like to do that if you don't push an update for some time). Once you have that, you just need to be able to push a partial update in a generic way, so every app can push its partial updates, even if it uses OpenGL, and they get propagated up to the framebuffer just fine.

                  Another optimisation that is possible when you handle buffers the way Wayland does is automatic video-layer handling: depending on whether the application window is on top of everything or not, its buffer is put directly into a video overlay, removing the need to turn on any GPU at all in the compositor. With all of that you can outperform everyone. The only problem is that Wayland is still only a toy; it needs to be integrated into real-life compositors to become usable as a desktop replacement. That's not going to happen any time soon. If we are lucky, in one year some distro will start to pick it up, but there is still a lot to do.
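The buffer-tracking problem described here is exactly what the EGL_EXT_buffer_age and EGL_KHR_swap_buffers_with_damage extensions address. Below is a hedged sketch of how a compositor might use them, assuming both extensions are actually exposed by the driver (far from universal at the time) and with repaint() as an invented placeholder for the real drawing code.

```c
/* Sketch: partial updates using buffer age plus swap-with-damage.
 * Check eglQueryString(dpy, EGL_EXTENSIONS) for EGL_EXT_buffer_age and
 * EGL_KHR_swap_buffers_with_damage before relying on either. */
#include <EGL/egl.h>
#include <EGL/eglext.h>

void repaint(int full_frame);  /* assumed to exist elsewhere */

void swap_with_damage(EGLDisplay dpy, EGLSurface surf,
                      int dx, int dy, int dw, int dh)
{
    PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC swap_damage =
        (PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC)
            eglGetProcAddress("eglSwapBuffersWithDamageKHR");

    /* Age 0 means the buffer contents are undefined; age n means the buffer
     * holds what was drawn n swaps ago. */
    EGLint age = 0;
    eglQuerySurface(dpy, surf, EGL_BUFFER_AGE_EXT, &age);

    if (age == 0) {
        repaint(1);                       /* contents unknown: full redraw */
    } else {
        /* A real compositor would accumulate the damage of the last `age`
         * frames; this sketch just repaints the current damage rectangle. */
        repaint(0);
    }

    EGLint rect[4] = { dx, dy, dw, dh };
    if (swap_damage)
        swap_damage(dpy, surf, rect, 1);  /* tell the driver what changed */
    else
        eglSwapBuffers(dpy, surf);        /* fallback: plain swap         */
}
```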



                  • #24
                    Business

                    The situation is coming to a head.

                    The only way you get users is when the other guys don't support them.

                    Jesus did this a lot. He hung out with the riff-raff.

                    If Linux wants to be the second coming it better start humbling itself.

                    Personally, I'm going Apple. New computer cost me $1,200 back in 1986. Tandy.

                    I don't mind paying 1986 prices for something I can navigate around smoothly.

                    Linux has become a chunky monkey.

                    You get one chance to impress the world. Rough starts are embarrassing.... Ask Bill Gates



                    • #25
                      Originally posted by Xake
                      Google cares about it enough to spend money on Coreboot (and not only because of the Chromebook, IIRC, but because of the possibility of having one remote interface for all their boards/servers)...
                      Something like Integrated Lights-Out (iLO)? No need for Coreboot there...



                      • #26
                        Originally posted by MistaED
                        Wayland is essentially EGL under the hood, which was designed for sharing buffers between APIs (EGLImage), so why not use the desired API instead of OpenGL? I'm guessing it is because Intel/NVIDIA/AMD's drivers don't support anything but GL...
                        IIRC this is correct. Dedicated 2D hardware was dropped because 3D hardware got fast enough to supersede it (why take die area away from the 3D part of the chip for something slower than it?). But don't take my word as guaranteed; I'm sure Bridgman knows best.



                        • #27
                          Originally posted by squirrl
                          If Linux wants to be the second coming it better start humbling itself.
                          Personally, I'm going Apple.
                          Are you missing sarcasm tags?

                          Originally posted by squirrl
                          Personally, I'm going Apple. New computer cost me $1,200 back in 1986. Tandy.
                          I don't mind paying 1986 prices for something I can navigate around smoothly.
                          I suggest you learn about inflation. Plus, comparing a modern technology to a 25-year-old one, instead of its modern peers, is, err... dumb.



                          • #28
                            Originally posted by TAXI
                            IIRC this is correct. Dedicated 2D hardware was dropped because 3D hardware got fast enough to supersede it (why take die area away from the 3D part of the chip for something slower than it?). But don't take my word as guaranteed; I'm sure Bridgman knows best.
                            Yep, it's certainly fast enough. It does lose in other respects if you're doing 2D, though, such as being a way bigger chip that draws two or three orders of magnitude more watts than a dedicated 2D chip.

                            Video decoding is also better on dedicated hardware than on a general-purpose 3D chip.

