Intel Core i3 LLVMpipe Performance

  • Intel Core i3 LLVMpipe Performance

    Phoronix: Intel Core i3 LLVMpipe Performance

    Last week I put out new numbers showing the LLVMpipe performance with the latest Gallium3D code found in Mesa 7.9-devel. This Gallium3D driver accelerates all operations on the CPU rather than a GPU as a better software rasterizer than what is currently available for Linux, but even with a hefty Intel Core i7 CPU the OpenGL acceleration was still quite slow. In this article using an Intel Core i3 mobile CPU we are looking at the LLVMpipe performance again, but this time comparing it to the Intel graphics performance and also looking at the impact that the clock frequency and Hyper Threading have on this Gallium3D driver that heavily utilizes the Low-Level Virtual Machine for its CPU optimizations.


  • #2
    Hello. Please help me understand (I'm new to Linux graphics drivers).
    What is the difference between Mesa, Gallium, and LLVMpipe? I don't understand how they relate to each other.

    Thanks in advance!



    • #3
      If I understand it correctly, it's like this:

      Mesa is the old classic Linux driver stack (lacking performance optimisations in many cases).

      Gallium3D is a layer that is pushed in between the kernel and the driver itself, providing some kind of environment for the special Gallium drivers. It's an attempt to generalise graphics drivers (somehow).

      LLVMpipe is a kind of software rasteriser (no hardware acceleration by the GPU) that is being developed on top of the Gallium3D infrastructure using LLVM (the Low Level Virtual Machine), which apparently brings some advantages in linking optimisations while compiling the driver (or even recompiling for optimisations at runtime???).

      Well, correct me if I'm wrong...

      How can it be that the framerate doesn't jump to twice what it was without HT when HT is re-enabled? (@2.60GHz)
      Is it some kind of bottleneck or overhead, or is the driver not as finished as it seemed to me?
      I guess the game itself isn't using multiple cores, but does LLVMpipe really do so?



      • #4
        Originally posted by jakubo
        How can it be that the framerate doesn't jump to twice what it was without HT when HT is re-enabled? (@2.60GHz)
        Is it some kind of bottleneck or overhead, or is the driver not as finished as it seemed to me?
        I guess the game itself isn't using multiple cores, but does LLVMpipe really do so?
        Why should it? It's not like you get four more physical cores. Even in Intel's PR material, best-case scenarios for HT are only something like a 30% boost.
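
        A rough back-of-the-envelope with Amdahl's law makes the same point; the fractions and per-core gain below are made-up illustrative numbers, not measurements from this article:

        Code:
        # Rough Amdahl's-law estimate of why Hyper-Threading cannot double the
        # framerate.  The numbers below are illustrative assumptions, not benchmarks.

        def amdahl_speedup(parallel_fraction, parallel_gain):
            """Overall speedup when only part of the work gets faster."""
            serial_fraction = 1.0 - parallel_fraction
            return 1.0 / (serial_fraction + parallel_fraction / parallel_gain)

        # Assume ~80% of a frame's CPU time is rasterization that llvmpipe can
        # spread across threads, and HT speeds up that part by only ~1.3x per
        # physical core (logical cores share execution units and caches).
        print(round(amdahl_speedup(parallel_fraction=0.8, parallel_gain=1.3), 2))  # ~1.23

        And LLVMpipe does rasterize with multiple threads regardless of what the game does; the thread count is usually controlled by an environment variable (LP_NUM_THREADS is the commonly cited name, but treat that as an assumption to check against your Mesa version).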



        • #5
          Originally posted by phoronix
          Phoronix: Intel Core i3 LLVMpipe Performance

          <snip>This Gallium3D driver accelerates all operations on the CPU rather than a GPU as a better software rasterizer than what is currently available for Linux, but even with a hefty Intel Core i7 CPU the OpenGL acceleration was still quite slow. In this article using an Intel Core i3 mobile CPU we are looking at the LLVMpipe performance again, but this time comparing it to the Intel graphics performance and also looking at the impact that the clock frequency and Hyper Threading have on this Gallium3D driver that heavily utilizes the Low-Level Virtual Machine for its CPU optimizations.
          I think it's inappropriate to call the software-rendering LLVMpipe "accelerated". Acceleration in a graphics context means hardware acceleration, that is, the use of hardware dedicated to increasing graphics performance. If you render using the CPU, you're using the non-accelerated way of drawing graphics. That is not to say that non-accelerated couldn't be faster than "accelerated"; the first "3D accelerators" were notorious for being slower than drawing things with just the CPU. :-) To get a real feel for the performance of LLVMpipe, it should be compared with the classic Mesa software renderer.
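
          For anyone who wants to try that comparison themselves, here is a minimal sketch of a wrapper that runs one OpenGL test under the default driver, LLVMpipe, and the Gallium softpipe rasterizer by setting Mesa environment variables. The variable names and values (LIBGL_ALWAYS_SOFTWARE, GALLIUM_DRIVER) are assumptions that differ between Mesa releases, and the old non-Gallium swrast may need a differently built libGL, so check your own Mesa documentation first:

          Code:
          # Minimal sketch: run one OpenGL test under several renderer configurations
          # by tweaking Mesa environment variables.  The variable names and accepted
          # values are assumptions and vary between Mesa releases.
          import os
          import subprocess

          CONFIGS = {
              "default":  {},                              # whatever driver Mesa normally picks
              "llvmpipe": {"LIBGL_ALWAYS_SOFTWARE": "1",   # force software rendering...
                           "GALLIUM_DRIVER": "llvmpipe"},  # ...and pick the LLVM-based rasterizer
              "softpipe": {"LIBGL_ALWAYS_SOFTWARE": "1",   # reference Gallium software rasterizer
                           "GALLIUM_DRIVER": "softpipe"},
          }

          def run_under(name, command):
              env = dict(os.environ, **CONFIGS[name])
              print("=== %s: %s ===" % (name, " ".join(command)))
              return subprocess.run(command, env=env, check=False)

          if __name__ == "__main__":
              for name in CONFIGS:
                  run_under(name, ["glxgears"])            # substitute the real benchmark binary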

          PS. For neophytes wondering why anyone bothers with software rendering at all: LLVMpipe's real importance is that it's a prototype for GPU acceleration. LLVM can be adapted to compile for GPUs and thus get the most out of GPU-driven architectures (after we first get LLVMpipe working, and then get LLVM to compile for GPUs). Also, Brian Paul's Mesa has been a software reference for proper OpenGL, so you can verify that your hardware driver works correctly by checking that it produces the same output as software Mesa.



          • #6
            Originally posted by AlexZaim
            Hello. Please help me understand (I'm new to Linux graphics drivers).
            What is the difference between Mesa, Gallium, and LLVMpipe? I don't understand how they relate to each other.

            Thanks in advance!
            Mesa is a software library that is an unofficial implementation of OpenGL. Mesa also contains drivers for accelerating 3D rendering with graphics cards.

            There are two types of driver inside Mesa:
            -Classic
            -Gallium3D

            Gallium3D is a new kind of architecture for writing drivers that take advantage of modern graphics card architectures.

            Gallium3D is a bit of a mindfuck, but brilliant at the same time:
            All Gallium3D drivers have one purpose: Expose an API. All graphics cards out there with Gallium3D drivers expose the same API.

            Features are then written on top of this API. So when somebody implements OpenGL on top of this API, suddenly all of those graphics cards support it. And if ATI/AMD were to implement OpenGL or DirectX on top of this API, then nVidia cards would also get it, and vice versa.
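
            As a mental model only (this is not real Mesa or Gallium code; every class and method name below is invented for illustration), that "one driver API, everything else written on top of it" idea looks roughly like this:

            Code:
            # Toy model of the Gallium3D idea: every driver exposes one common interface,
            # and each graphics API is written once against that interface, so it works
            # on every backend.  All names here are invented for illustration.

            class GalliumDriver:
                """The single API every Gallium3D driver exposes (hugely simplified)."""
                def create_buffer(self, data): raise NotImplementedError
                def draw(self, buffer): raise NotImplementedError

            class HardwareDriver(GalliumDriver):      # stand-in for a GPU backend
                def create_buffer(self, data): return ("vram", data)
                def draw(self, buffer): print("GPU draws", buffer)

            class LlvmpipeDriver(GalliumDriver):      # stand-in for the CPU backend
                def create_buffer(self, data): return ("system-ram", data)
                def draw(self, buffer): print("CPU rasterizes", buffer)

            class OpenGLFrontend:
                """Written once against GalliumDriver, so it runs on every backend."""
                def __init__(self, driver):
                    self.driver = driver
                def draw_triangles(self, vertices):
                    self.driver.draw(self.driver.create_buffer(vertices))

            # The same front-end code runs unchanged on either backend:
            for backend in (HardwareDriver(), LlvmpipeDriver()):
                OpenGLFrontend(backend).draw_triangles([(0, 0), (1, 0), (0, 1)])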

            Now, Mesa without any hardware drivers can also render 3D, but painfully slowly; this is the Mesa softpipe renderer. Enter LLVMpipe. LLVMpipe is a much faster software-only driver that uses the LLVM compiler infrastructure. Part of LLVM is a JIT compiler, an on-the-fly compiler that optimises everything just before it is rendered, so the performance of software-only rendering ends up much, much faster.
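
            To picture what the JIT buys you, here is a tiny Python analogy. It is not how LLVMpipe works internally, just the general idea of generating a specialized fast path once for the current rendering state instead of re-checking every option for every pixel:

            Code:
            # Toy analogy for JIT specialization: instead of checking every rendering
            # option per pixel (the "interpreted" path), build a function once for the
            # current state and run it in the hot loop.  Purely illustrative.

            def shade_interpreted(pixels, state):
                out = []
                for p in pixels:
                    c = p
                    if state["texture"]:      # these branches repeat for every pixel
                        c = c * 0.5
                    if state["fog"]:
                        c = c + 0.1
                    out.append(c)
                return out

            def compile_shader(state):
                """Build a specialized shading function for one fixed state combination."""
                steps = []
                if state["texture"]:
                    steps.append(lambda c: c * 0.5)
                if state["fog"]:
                    steps.append(lambda c: c + 0.1)
                def shade(pixels):
                    out = []
                    for p in pixels:          # hot loop: no per-pixel state checks left
                        c = p
                        for step in steps:
                            c = step(c)
                        out.append(c)
                    return out
                return shade

            state = {"texture": True, "fog": False}
            pixels = [0.2, 0.4, 0.6]
            assert shade_interpreted(pixels, state) == compile_shader(state)(pixels)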

            For the rest, Wikipedia has the details.



            • #7
              Hey, that was actually a test on Phoronix that was quite good.
              No endless rows of charts, and more effort put into interpreting things.

              Also, I don't know why Phoronix feels the urge to release so many tests. Most good test sites keep the numbers down and the quality up.
              You could keep the volume down a bit and take your time making good tests.

              But one thing I want to know is: why do the memory usage numbers differ so much between the different clock rates?



              • #8
                Don't understand

                I don't understand why you're comparing a software rasterizer to a driver with hardware acceleration. It's not an apples-to-apples comparison. There shouldn't be any relation between the two.

                LLVMpipe's progress over time is what's most interesting, along with comparisons against classic Mesa software rendering.



                • #9
                  I think it's just to give a meaningful point of reference...

                  ... and maybe a response to all the people saying that software rasterizers will stomp low end GPUs into the dirt



                  • #10
                    Originally posted by bridgman
                    ... and maybe a response to all the people saying that software rasterizers will stomp low end GPUs into the dirt
                    Yeah, you really like that, don't you?

