Not All Linux Users Want To Toke On LLVMpipe


  • #21
    Originally posted by elanthis View Post
    There is far more to modern graphics than compositing. And even then, yes, it is more difficult. Not only are there two codepaths to write, test, and maintain; there's also the matter of enabling and disabling entire swathes of features based on the compositing backend, as naive software compositing is barely capable of handling straight-up transparent "cutout", much less shadows, de-focus blur, transforms (including simple accessibility ones like magnification), and so on. So then there's another set of code that has to be written, tested, and maintained. If you want really fast compositing in software, you need a really fancy software graphics engine... like LLVMpipe. (Or WARP** for Windows/Direct3D... I'd like to see some benchmarks between WARP and LLVMpipe some time!)

    All to support a tiny minority of users who haven't upgraded their machines in many years, or who are running oddball setups that aren't standard fare for a desktop stack anyway.

    ** On the note of WARP, it's a very interesting design. It provides a very high-performance, fully D3D11-compliant software rasterizer. It was designed to mimic the performance of real hardware in such a way that you can test your app's performance characteristics on WARP and have it translate well to how it'll scale on real hardware (that is, find the bottlenecks of your app; obviously you'd expect the app to run faster overall on real hardware, but you shouldn't expect that a call that took 2% of the render time on WARP will suddenly take 10% of the render time on hardware). It's also being extended/reused in the upcoming enhanced PIX tools from Microsoft, giving you a way to step through and inspect shader code in your unmodified app, without needing specialized hardware setups (like NVIDIA's tools or the tools for the Xbox 360 ATI graphics chip) to do it. That's the kind of developer-oriented tooling the FOSS graphics stack should be focusing on ASAP; Linux's biggest draw on the desktop is as a developer machine, yet its developer tools seem to be lagging further and further behind. Vim and GDB aren't exactly the cutting edge of developer tools anymore.



    That is not at all true. OpenGL has a lot of problems, but there's no reason it can't do standard 2D rendering faster than software. 2D rendering on the CPU is just (in simplified terms) a big loop over a 2D grid doing a lot of SIMD operations. That's _exactly_ what the GPU is designed to excel at, and OpenGL is just an API that exposes that hardware to software.

    That most of the FOSS drivers have major problems, and that precious few FOSS desktop developers have the first clue how to actually use OpenGL properly to do fast 2D, is quite possibly true. The vast majority of GUI toolkits and higher-level 2D rendering APIs are written like it's still the 1990s and don't actually have a render backend that's capable of working the way a GPU needs them to work. It's entirely 100% possible to make one, though, and the speed benefits can be _massive_, not to mention the battery life benefits for mobile devices (letting your GPU do a quick burst of work is more efficient than having your CPU chug through it for a longer period of time).

    The primary problem is a complete lack of batching (that is, many GUI toolkits do one or more draw calls per widget/control, whereas it is possible to do a handful or even just one for the entire window) and a lack of understanding of how to use shaders to maximum effect. Even a lot of game-oriented GUI frameworks (for editors and such, or even ones designed for in-game HUDs and menu systems like Scaleform) get this terribly wrong.

    You have to design your render backend for a 2D system around how modern hardware actually works. You can't just slap an OpenGL backend into a toolkit that assumes it's using a software renderer.
    From what I've read, OpenGL actually isn't a good fit for efficient 2D rendering that isn't polygon-based. Apparently Apple developed a Quartz 2D Extreme backend (now called QuartzGL) a while ago but never got it working well enough to make it the default. So they use the CPU like everyone else who isn't on Windows or doesn't happen to have a blitter.
    QuartzGL was introduced as an official feature in Mac OS X 10.5 Leopard (although it was a developer-only feature in Mac OS X 10.4 as Quartz 2D Extreme). However, it is off by default and is largely ignored by most developers. In this post, I look at how to enable QuartzGL, the performance impact it has on different kinds of drawing and whether you should use it in your Mac programs.

    So, unless you want to extend your complaints of FOSS cluelessness about graphics hardware to Apple, you might want to say, rather, that Windows created an accelerated API that seems to be uniquely able to handle 2D.
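
    For reference, here is a minimal sketch of the batching approach the quoted post argues for, assuming a plain OpenGL context with client-side vertex arrays; the ui_vertex type and the push_quad/flush_batch helpers are made up for illustration. Instead of one draw call per widget, every widget appends its rectangle to a CPU-side array and the whole window is flushed with a single glDrawArrays.

    /* Hypothetical batched 2D renderer: widgets append coloured quads to one
     * vertex array and the whole window is drawn with a single call. */
    #include <GL/gl.h>

    typedef struct { float x, y; unsigned char rgba[4]; } ui_vertex;

    #define MAX_VERTS 65536
    static ui_vertex verts[MAX_VERTS];
    static int nverts;

    static void push_quad(float x, float y, float w, float h,
                          const unsigned char rgba[4])
    {
        /* Two triangles per widget rectangle. */
        const float px[6] = { x, x + w, x + w,   x, x + w, x     };
        const float py[6] = { y, y,     y + h,   y, y + h, y + h };
        for (int i = 0; i < 6 && nverts < MAX_VERTS; i++) {
            verts[nverts].x = px[i];
            verts[nverts].y = py[i];
            for (int c = 0; c < 4; c++)
                verts[nverts].rgba[c] = rgba[c];
            nverts++;
        }
    }

    static void flush_batch(void)
    {
        /* One draw call for every widget pushed since the last flush. */
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glVertexPointer(2, GL_FLOAT, sizeof(ui_vertex), &verts[0].x);
        glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(ui_vertex), verts[0].rgba);
        glDrawArrays(GL_TRIANGLES, 0, nverts);
        glDisableClientState(GL_COLOR_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
        nverts = 0;
    }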

    Comment


    • #22
      For 2D drawing, would an API like OpenVG perhaps make more sense? The whole idea of Gallium3D was that you had different state trackers like GL/VG/D3D/XRender...

      Could Mesa not just implement Direct2D or something like it as a state tracker? Wayland is essentially EGL under the hood, which was designed for sharing buffers between APIs (EGLImage), so why not use whatever API is desired instead of OpenGL? I'm guessing it is because Intel/NVIDIA/AMD's drivers don't support anything but GL...

      About updating the whole screen with GL, I thought glScissor, the stencil buffer, or the Z-buffer could be used to reject updating of pixels/fragments; can't llvmpipe use this to prevent overdraw? Does llvmpipe share pointers around via EGLImage, or does it do texture_from_pixmap, i.e. expensive copies?

      -Alex

      Comment


      • #23
        Originally posted by MistaED View Post
        About updating the whole screen with GL, I thought glScissor, the stencil buffer, or the Z-buffer could be used to reject updating of pixels/fragments; can't llvmpipe use this to prevent overdraw?
        glScissor is not the issue here (it is part of the solution). The problem is that you have no way to guarantee that the previous buffer survived, and you don't know what pixels are in it. You need to know whether you are cycling between n buffers and, if so, what the value of n is. You also need to force the driver not to discard the buffer (some drivers, like Intel's, like to do that if you don't push an update for some time). Once you have that, you just need a generic way to push a partial update, so every app pushes its partial updates even when it uses OpenGL, and they get propagated up to the framebuffer just fine.
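
        As a rough sketch of that bookkeeping, here is a partial repaint using glScissor together with a buffer-age query, assuming an already-created EGL surface on a driver that advertises the EGL_EXT_buffer_age extension (exactly the kind of guarantee the post says is missing in the general case); the draw_everything helper and the single damage rectangle are hypothetical simplifications.

        /* Hedged sketch: partial repaint via buffer age + scissoring.
         * Assumes the driver exposes EGL_EXT_buffer_age; the damage tracking
         * and drawing helpers are hypothetical. */
        #include <EGL/egl.h>
        #include <EGL/eglext.h>
        #include <GLES2/gl2.h>

        extern void draw_everything(void);   /* hypothetical scene redraw */

        void repaint(EGLDisplay dpy, EGLSurface surf,
                     int dmg_x, int dmg_y, int dmg_w, int dmg_h)
        {
            EGLint age = 0;

            /* How many swaps ago was this back buffer last drawn? 0 means
             * the contents are undefined, so fall back to a full redraw. */
            if (!eglQuerySurface(dpy, surf, EGL_BUFFER_AGE_EXT, &age) || age == 0) {
                glDisable(GL_SCISSOR_TEST);
                draw_everything();
            } else {
                /* The buffer from 'age' frames ago survived: repaint only the
                 * damage accumulated since then (one rectangle here; a real
                 * compositor would union the last 'age' frames of damage). */
                glEnable(GL_SCISSOR_TEST);
                glScissor(dmg_x, dmg_y, dmg_w, dmg_h);
                draw_everything();
                glDisable(GL_SCISSOR_TEST);
            }

            eglSwapBuffers(dpy, surf);
        }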

        Another optimisation that becomes possible when you manage buffers the way Wayland does is automatic video-layer handling: depending on whether the application window is on top of everything or not, its buffer is put directly into a video overlay, removing the need to turn on the GPU at all in the compositor. With all of that you are in a good position to outperform everyone. The only problem is that Wayland is still only a toy; it needs to be integrated into a real-life compositor to become usable as a desktop replacement. That's not going to happen any time soon. If we are lucky, in a year some distro will start to pick it up, but there is still a lot to do.
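
        A hedged sketch of that overlay decision follows, with entirely hypothetical compositor types and helpers; the only point is the policy: if the topmost client buffer covers the output and is opaque, hand it straight to a hardware overlay/scanout plane and skip GPU compositing, otherwise fall back to normal composition.

        /* Hypothetical compositor policy: bypass GPU compositing when the top
         * window's buffer can be scanned out directly from a video overlay. */
        struct surface {
            int x, y, w, h;
            int opaque;          /* no alpha blending needed */
            void *buffer;        /* client-provided buffer handle */
        };

        struct output {
            int w, h;
        };

        extern void scanout_from_overlay(void *buffer);  /* hypothetical plane helper */
        extern void composite_with_gpu(void);            /* hypothetical GPU path */

        /* Returns 1 if 'top' was put on the overlay plane, 0 if the GPU
         * compositing path has to run instead. */
        int assign_planes(const struct output *out, const struct surface *top)
        {
            int fullscreen = top->x == 0 && top->y == 0 &&
                             top->w == out->w && top->h == out->h;

            if (fullscreen && top->opaque) {
                scanout_from_overlay(top->buffer);
                return 1;
            }
            composite_with_gpu();
            return 0;
        }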

        Comment


        • #24
          Business

          The situation is coming to a head.

          The only way you get users is when the other guys don't support them.

          Jesus did this a lot. He hung out with the riff-raff.

          If Linux wants to be the second coming it better start humbling itself.

          Personally, I'm going Apple. New computer cost me $1,200 back in 1986. Tandy.

          I don't mind paying 1986 prices for something I can navigate around smoothly.

          Linux has become a chunky monkey.

          You get one chance to impress the world. Rough starts are embarrassing... ask Bill Gates.

          Comment


          • #25
            Originally posted by Xake View Post
            Google cares about it enough to spend money on Coreboot (and not only because of Chromebooks IIRC, but because of the possibility of having one remote interface for all their boards/servers)...
            Something like Integrated Lights-Out? No need for Coreboot there...

            Comment


            • #26
              Originally posted by MistaED View Post
              Wayland is essentially EGL under the hood, which was designed for sharing buffers between APIs (EGLImage), so why not use whatever API is desired instead of OpenGL? I'm guessing it is because Intel/NVIDIA/AMD's drivers don't support anything but GL...
              IIRC this is correct. 2D-capable hardware was dropped because 3D hardware got fast enough to supersede it (why castrate the 3D part of the chip for something slower than it?). But don't take my word as guaranteed. I'm sure Bridgman knows best.

              Comment


              • #27
                Originally posted by squirrl View Post
                If Linux wants to be the second coming it better start humbling itself.
                Personally, I'm going Apple.
                Are you missing sarcasm tags?

                Originally posted by squirrl View Post
                Personally, I'm going Apple. New computer cost me $1,200 back in 1986. Tandy.
                I don't mind paying 1986 prices for something I can navigate around smoothly.
                I suggest you learn about inflation. Plus, comparing a modern technology to a 25-year-old one, instead of its modern peers, is, err... dumb.

                Comment


                • #28
                  Originally posted by TAXI View Post
                  IIRC this is correct. 2D-capable hardware was dropped because 3D hardware got fast enough to supersede it (why castrate the 3D part of the chip for something slower than it?). But don't take my word as guaranteed. I'm sure Bridgman knows best.
                  Yep, it's certainly fast enough. It does lose in other respects if you're doing 2D, such as being a much bigger chip that draws two or three orders of magnitude more watts than a 2D chip.

                  Video decoding is also better on dedicated hardware than on a general-purpose 3D chip.

                  Comment
