
LLVMpipe: OpenGL With Gallium3D on Your CPU


  • LLVMpipe: OpenGL With Gallium3D on Your CPU

    Phoronix: LLVMpipe: OpenGL With Gallium3D on Your CPU

    The software rasterizer in Mesa, which renders OpenGL on the CPU without any assistance from the graphics processor, has largely been useless. Even with a modern, multi-core processor, its performance has been abysmal. Mesa's classic DRI drivers have traditionally performed poorly anyway compared to the high-performance, proprietary NVIDIA/ATI graphics drivers, but with just the software rasterizer there really aren't any games or applications that run well. Fortunately, software rendering on Gallium3D is very much a different story, thanks to LLVM.

    http://www.phoronix.com/vr.php?view=14871
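A quick way to try llvmpipe yourself: Mesa honors the LIBGL_ALWAYS_SOFTWARE and GALLIUM_DRIVER environment variables, so you can launch a test program under them and check which renderer comes up. A minimal Python sketch (assumes glxinfo is installed; exact env-var behavior may vary between Mesa builds):

```python
import os
import subprocess

def llvmpipe_env():
    """Environment that asks Mesa to use the llvmpipe Gallium driver
    instead of any hardware driver."""
    env = dict(os.environ)
    env["LIBGL_ALWAYS_SOFTWARE"] = "1"   # never touch the GPU
    env["GALLIUM_DRIVER"] = "llvmpipe"   # pick llvmpipe specifically
    return env

def renderer_string():
    """Run glxinfo under that environment and return the renderer line,
    or None if glxinfo is not installed or prints nothing useful."""
    try:
        out = subprocess.run(["glxinfo"], env=llvmpipe_env(),
                             capture_output=True, text=True).stdout
    except OSError:
        return None
    for line in out.splitlines():
        if "OpenGL renderer string" in line:
            return line.strip()
    return None

print(renderer_string())
```

If llvmpipe is active, the renderer string reported by glxinfo mentions llvmpipe rather than the hardware driver.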

  • #2
    r300g much faster than one month ago

    What I find most interesting about these benchmark results is that the r300g driver seems to be A LOT faster than just a month ago. Check the OpenArena graph in this article: it's about twice as fast with the same graphics card, while the classic driver shows the same numbers. And as you can see in this new article, the numbers are not at all CPU-limited, so the weaker CPU in the older test should not be a factor.

    Looks really promising.



    • #3
      Now this is cool!

      Perhaps even cooler would be to use llvmpipe to extend a graphics card's capabilities (e.g. extensions the video card doesn't come with), or to work together with the card's GPU to balance the workload.

      Would that work? Could you, say, include some of llvm-pipe inside r300 where it is needed most?



      • #4
        That may work, but it would be slower than pure software rendering, because the GPU and the CPU each have their own memory. Copying buffers between the two is too expensive to be worthwhile.

        You could have a multi-screen setup where one screen is software-rendered, then join both via Xinerama. But it'll likely be slower than a fully GPU-accelerated solution.


        There's been some discussion about mixed hardware and software rendering in here.
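To put a rough number on that copying cost (my own back-of-the-envelope figures, not from the thread): a full-screen RGBA framebuffer moved between CPU and GPU memory every frame already eats a sizable chunk of bus bandwidth before any actual rendering happens.

```python
def framebuffer_traffic(width, height, fps, bytes_per_pixel=4):
    """Bytes per second needed just to move one rendered frame
    between CPU and GPU memory every frame (one direction only)."""
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes * fps

# 1920x1080 RGBA at 60 fps: roughly half a gigabyte per second,
# in one direction alone, before any rendering work is counted.
traffic = framebuffer_traffic(1920, 1080, 60)
print(f"{traffic / 1e9:.2f} GB/s")
```

That steady copy stream, plus the synchronization stalls it implies, is why mixing a software renderer with a discrete GPU rarely pays off.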



        • #5
          Originally posted by rohcQaH View Post
          That may work, but it would be slower than pure software rendering, because the GPU and the CPU each have their own memory. Copying buffers between the two is too expensive to be worthwhile.

          You could have a multi-screen setup where one screen is software-rendered, then join both via Xinerama. But it'll likely be slower than a fully GPU-accelerated solution.


          There's been some discussion about mixed hardware and software rendering in here.
          How about motherboard video that shares system RAM? It would be a major hack, but maybe the software renderer could jump into the motherboard video RAM and operate on it directly.



          • #6
            Originally posted by frantaylor View Post
            How about motherboard video that shares system RAM? It would be a major hack, but maybe the software renderer could jump into the motherboard video RAM and operate on it directly.
            Shared video memory is uncached, so reading from it is SLOW (unless the MOVNTDQA instruction is used, but that is an SSE4.1 instruction which is not available on all CPUs).
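Whether your CPU has that instruction is easy to check from userspace: on Linux, SSE4.1 support shows up as the sse4_1 flag in /proc/cpuinfo. A small, Linux-only Python sketch:

```python
def has_cpu_flag(flag, cpuinfo_path="/proc/cpuinfo"):
    """Return True if the given CPU feature flag (e.g. 'sse4_1')
    appears in /proc/cpuinfo. Linux/x86-only; returns False elsewhere."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return flag in line.split()
    except OSError:
        pass
    return False

print("SSE4.1 (movntdqa):", has_cpu_flag("sse4_1"))
```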



            • #7
              Simple question: is it good enough to run KDE 4.4 desktop effects on a Phenom 9950 X4 (not OC'd)?

              That is all I would like to have for now... And while I of course now do have Mesa, how come KWin does not want to run desktop effects? Weird...



              • #8
                Originally posted by V!NCENT View Post
                Simple question: is it good enough to run KDE 4.4 desktop effects on a Phenom 9950 X4 (not OC'd)?

                That is all I would like to have for now... And while I of course now do have Mesa, how come KWin does not want to run desktop effects? Weird...
                Ah, I see: OpenGL + fallback and disable functionality checks... But hell, with the standard Mesa rasterizer it's 0.01 fps or something.



                • #9
                  Mmmh, I'd like to see what llvmpipe will do with future Fusion and Sandy Bridge architectures. Those will offer many more execution units for such jobs.



                  • #10
                    Originally posted by V!NCENT View Post
                    Simple question: Good enough to run KDE 4.4 desktop effects with it on a Phenom 9950 X4 (not OC'd)?
                    Probably, but the simpler XRender-based compositing mode may be better suited to software rendering.

                    Are you planning to run a headless machine with a VNC server or something? Otherwise, anything that has a DVI port is capable of faster compositing... they stopped selling those framebuffer-on-a-stick devices 20 years ago.

                    Don't forget: with GPU acceleration, you can have cool 3D-effects AND run an application, too!



                    • #11
                      More test programs: Super Maryo Chronicles

                      My kid loves playing SMC (Super Maryo Chronicles) on my computer, so he'll use 'switch user' to get his own login on there. Unfortunately, the performance is so sucky that he's given up on it for now.

                      And I'm not willing to let him login to my account either!

                      So, my question is, how well does the r300g driver work with an X1650, AMD Athlon X2 5200, 4 GB RAM, 1280x1024 display? I'm also running Ubuntu 10.04 with the xorg-edgers repository for my Mesa and other drivers.

                      I'd love to try the r300g driver as well and see how the performance is.

                      Would it be possible to include games like SMC into your mix as well? It's OpenGL, but a different sort of stress compared to the others.

                      Thanks,
                      John



                      • #12
                        Originally posted by Sacha View Post
                        Would that work? Could you, say, include some of llvm-pipe inside r300 where it is needed most?
                        Most of the performance achieved comes from minimizing the number of synchronizations between the GPU and the CPU (kudos to airlied). What you propose is the exact opposite, so the outcome is obvious...

                        Originally posted by l8gravely View Post
                        So, my question is, how well does the r300g driver work with an X1650, AMD Athlon X2 5200, 4 GB RAM, 1280x1024 display?
                        Not sure, it depends on the game/app you want to run. Some apps work, some others don't. We usually try to fix bugs as we come across them or get told about them. The worst bug is the one we don't know about...
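The point about synchronizations can be illustrated with a toy model (all numbers below are made up for illustration): if every draw call forces the CPU to wait for the GPU, the fixed stall per sync quickly dominates the actual rendering work.

```python
def frame_time_ms(draw_calls, work_per_call_ms, sync_cost_ms, syncs):
    """Toy model: frame time is the rendering work plus a fixed
    stall for every CPU<->GPU synchronization point."""
    return draw_calls * work_per_call_ms + syncs * sync_cost_ms

# 500 draw calls at 0.01 ms each, 0.5 ms stall per sync (invented figures):
per_draw_sync = frame_time_ms(500, 0.01, 0.5, syncs=500)  # sync every call
batched = frame_time_ms(500, 0.01, 0.5, syncs=1)          # sync once a frame
print(f"sync-per-call: {per_draw_sync:.1f} ms, batched: {batched:.1f} ms")
```

With these invented figures the per-call-sync frame is dozens of times slower than the batched one, which is why splitting a pipeline between llvmpipe and a hardware driver tends to backfire.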



                        • #13
                          Can this be used to accelerate graphics when the GPU doesn't have Hardware T&L, like, for example, my Radeon Xpress 1100?

                          I've been wondering for years why graphics performance on here has been sucky, until I found out that little bit. <rant>If it doesn't do T&L, it's not a Radeon!</rant>



                          • #14
                            Originally posted by aaaantoine View Post
                            Can this be used to accelerate graphics when the GPU doesn't have Hardware T&L, like, for example, my Radeon Xpress 1100?
                            Yes, that is even the plan for r300g, i.e. using LLVM for T&L (the code is already wired up; we just need to fix a few bugs and then the whole thing can take off) and the rest (i.e. fragment processing). I think this approach might still beat llvmpipe in performance.



                            • #15
                              @roc:
                              Well... let's say... my Evergreen is now a 150 framebuffer with the FLOSS driver stack x'D

                              And KDE 4.4.x looks so... shiny with desktop effects, and my Phenom 9950 X4 should be able to handle it...

