
Gallium3D's LLVMpipe Is Much Faster With Mesa 9.2


  • Gallium3D's LLVMpipe Is Much Faster With Mesa 9.2

    Phoronix: Gallium3D's LLVMpipe Is Much Faster With Mesa 9.2

    This morning I posted new Radeon Gallium3D - Mesa 9.1 vs. Mesa 9.2 benchmarks, which showed the upcoming Mesa release performing nicely for AMD APU graphics. However, what is the performance like with the software-based LLVMpipe driver that is commonly used in fallback situations where no GPU hardware driver is available? It's generally a lot faster now for handling OpenGL...

    http://www.phoronix.com/vr.php?view=MTQwNzY
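For anyone wanting to reproduce the fallback situation on a machine that does have a working GPU driver, Mesa can be told to use its software rasterizer explicitly. A quick sketch using Mesa's environment variables (glxinfo comes from your distro's mesa-utils or similar package):

```shell
# Force software rendering even when a hardware driver is available.
export LIBGL_ALWAYS_SOFTWARE=1
# Pick llvmpipe explicitly among the Gallium software rasterizers.
export GALLIUM_DRIVER=llvmpipe
# Confirm which renderer OpenGL apps will actually get.
glxinfo | grep "OpenGL renderer"
```

Applications started from that shell should then report llvmpipe as the renderer.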

  • #2
    Very nice work! Those are some impressive speedups.
    Free Software Developer .:. Mesa and Xorg
    Opinions expressed in these forum posts are my own.



    • #3
      Now if only Michael would do a decent job of looking into why this speed improved rather than just posting some meaningless benchmarks, that'd be great.

      Then again, that would be expecting him to do work, which must be a scary thought.



      • #4
        Originally posted by Jonimus View Post
        Now if only Michael would do a decent job of looking into why this speed improved rather than just posting some meaningless benchmarks, that'd be great.

        Then again, that would be expecting him to do work, which must be a scary thought.
        Anyone can simply reproduce the tests I have done and investigate further. Everything is automated, right down to the performance Git bisecting. But I don't have the time to run all of that when I have other things to benchmark and other work to do.
        Michael Larabel
        http://www.michaellarabel.com/



        • #5
          It would be nice if this also meant a noticeable performance boost when using KVM virtualization, since 3D support in SPICE is still a long way down the line.



          • #6
            Originally posted by Jonimus View Post
            Now if only Michael would do a decent job of looking into why this speed improved rather than just posting some meaningless benchmarks, that'd be great.

            Then again, that would be expecting him to do work, which must be a scary thought.
            I'd start by studying the LLVM Pipeline and the R600 Target branch. You'll find your answers in the quality of the architecture throughout LLVM/Clang/LLDB versus the legacy architecture of GCC.



            • #7
              Will the new Kaveri APUs have the same numbers for LLVMpipe as for OpenGL then? Or is that still too far away, or actually not wanted? (Thinking of the hUMA memory architecture.)
              Does anyone know anything?



              • #8
                Any room left for more improvements?

                Nice improvements, with many benchmarks now twice as fast!

                Is there any room left for further improvements to LLVMpipe? What is LLVMpipe missing, and what could be better?



                • #9
                  Originally posted by jakubo View Post
                  Will the new Kaveri APUs have the same numbers for LLVMpipe as for OpenGL then? Or is that still too far away, or actually not wanted? (Thinking of the hUMA memory architecture.)
                  Does anyone know anything?
                  Guessing you mean "OpenGL on llvmpipe (effectively a graphics driver for the CPU) on the Kaveri CPU vs OpenGL on radeonsi (the graphics driver for the GPU) on the Kaveri GPU"? If so, then the GPU path would still be much faster because of the inherent performance difference between GPU and CPU (think 50:1 when doing work that is a good fit for highly parallel hardware).

                  The big deal with HUMA is the ability for CPU and GPU to share virtual memory so that applications which *don't* fit cleanly onto highly parallel hardware can make use of both CPU and GPU without the usual overheads.

                  One obvious question is "what if llvmpipe were ported to run on the GPU?". It still wouldn't be as fast as using the graphics pipes, because a GPU still has a number of highly optimized fixed-function blocks (texture engines, depth/colour operations, rasterizers, etc.) and because llvmpipe probably wouldn't scale well to the hundreds or thousands of threads a GPU thrives on.
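On the thread-scaling point: llvmpipe does parallelize rasterization across CPU cores, and the number of rasterizer threads can be tuned with the real LP_NUM_THREADS environment variable. A small sketch (glxgears is just a stand-in for any GL application):

```shell
# Run a GL app on llvmpipe with an explicit rasterizer thread count.
export GALLIUM_DRIVER=llvmpipe
export LP_NUM_THREADS=4   # cap at four rasterizer threads; 0 disables threading
glxgears
```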



                  • #10
                    Originally posted by bridgman View Post
                    Guessing you mean "OpenGL on llvmpipe (effectively a graphics driver for the CPU) on the Kaveri CPU vs OpenGL on radeonsi (the graphics driver for the GPU) on the Kaveri GPU"? If so, then the GPU path would still be much faster because of the inherent performance difference between GPU and CPU (think 50:1 when doing work that is a good fit for highly parallel hardware).

                    The big deal with HUMA is the ability for CPU and GPU to share virtual memory so that applications which *don't* fit cleanly onto highly parallel hardware can make use of both CPU and GPU without the usual overheads.

                    One obvious question is "what if llvmpipe were ported to run on the GPU?". It still wouldn't be as fast as using the graphics pipes, because a GPU still has a number of highly optimized fixed-function blocks (texture engines, depth/colour operations, rasterizers, etc.) and because llvmpipe probably wouldn't scale well to the hundreds or thousands of threads a GPU thrives on.
                    So if llvmpipe were mature enough to take advantage of those highly optimized fixed-function blocks, would there be a difference? Wouldn't OpenCL be obsolete?

