Intel Core i3 LLVMpipe Performance
Phoronix: Intel Core i3 LLVMpipe Performance
Last week I put out new numbers showing the LLVMpipe performance with the latest Gallium3D code found in Mesa 7.9-devel. This Gallium3D driver runs all operations on the CPU rather than a GPU, making it a better software rasterizer than what was previously available for Linux, but even with a hefty Intel Core i7 CPU the OpenGL rendering was still quite slow. In this article, using an Intel Core i3 mobile CPU, we are looking at the LLVMpipe performance again, but this time comparing it to the Intel graphics performance and also looking at the impact that clock frequency and Hyper-Threading have on this Gallium3D driver, which heavily utilizes the Low-Level Virtual Machine for its CPU optimizations.
Hello. Please help me understand this (I'm new to Linux graphics drivers)...
What is the difference between Mesa, Gallium3D, and LLVMpipe? I don't understand how they relate to each other.
Thanks in advance!
If I understand it correctly, it's like this:
Mesa is the classic Linux OpenGL implementation and driver stack (lacking performance optimisations in many cases).
Gallium3D is a layer that sits between the kernel interface and the driver itself, providing a common environment for the special Gallium drivers. It's an attempt to generalise graphics drivers (somehow).
LLVMpipe is a kind of software rasteriser (no hardware acceleration by the GPU) which is being developed on the Gallium3D infrastructure using LLVM (Low Level Virtual Machine), which lets it generate and optimise the rasterisation code at runtime for the host CPU, rather than only when the driver is compiled.
Well, correct me if I'm wrong...
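For what it's worth, you can poke at these pieces yourself through Mesa's environment variables (a sketch, not a definitive recipe; exactly which variables and drivers are available depends on how your Mesa was built):

```shell
# Tell Mesa to skip the hardware driver and use a software rasterizer
export LIBGL_ALWAYS_SOFTWARE=1

# Pick which Gallium3D software rasterizer to use:
export GALLIUM_DRIVER=llvmpipe    # the LLVM-based rasterizer discussed here
#export GALLIUM_DRIVER=softpipe   # the reference (non-LLVM) Gallium rasterizer

# The renderer string should now mention llvmpipe instead of your GPU
glxinfo | grep "OpenGL renderer"
```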
How can it actually be that the framerate doesn't jump to twice what it is without HT when Hyper-Threading is re-enabled (@2.60GHz)?
Is it some kind of bottleneck or overhead, or is the state of the driver not as finished as it seemed to me?
The game itself isn't using multiple cores, I guess, but does LLVMpipe really?
Why should it? It's not like you get four more physical cores. Even in the best-case scenarios of Intel's PR material, HT is only something like a 30% boost.
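A quick back-of-the-envelope sketch of that point (the numbers below are illustrative assumptions, not measurements from the article): if llvmpipe is CPU-bound, framerate scales roughly with raster throughput, and HT adds a little throughput, not extra cores.

```python
# Rough model: the framerate of a CPU-bound software rasterizer scales with
# raster throughput. All numbers here are illustrative assumptions.

def fps(base_fps, throughput_factor):
    """Projected framerate given a relative change in raster throughput."""
    return base_fps * throughput_factor

base = 10.0                # hypothetical FPS with Hyper-Threading off
print(fps(base, 2.0))      # what doubling the physical cores would give: 20.0
print(fps(base, 1.3))      # Intel's oft-quoted ~30% HT gain: 13.0, not 20.0
```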
Originally Posted by jakubo
I think it's inappropriate to call software-rendering LLVMpipe "accelerated". Acceleration in a graphics context means hardware acceleration, that is, use of hardware dedicated to increasing graphics performance. If you render using the CPU, you're using the non-accelerated way of drawing graphics. That is not to say that non-accelerated couldn't be faster than "accelerated": the first "3D accelerators" were notorious for being slower than drawing things with just the CPU. :-) To get a real feeling for the performance of LLVMpipe, it should be compared with the classic Mesa software renderer.
Originally Posted by phoronix
PS. For neophytes wondering why bother with software rendering at all: LLVMpipe's real importance is that it's a prototype for GPU acceleration. LLVM can be adapted to compile for GPUs and thus get the most out of GPU-driven architectures (after we first get LLVMpipe working, and get LLVM to compile for GPUs). Also, Brian Paul's Mesa has been a software reference implementation for proper OpenGL, so you can verify that your hardware driver works correctly by checking that it produces the same output as software Mesa.
Hey, that was actually a test on Phoronix which was quite good.
No endless rows of charts, and more effort put into interpreting things.
Also, I don't know why Phoronix feels the urge to release so many tests. Most good test sites keep the numbers down and the quality up.
You guys could keep it down a bit and take your time making good tests.
But one thing I want to know is: why do the memory usage numbers differ so much between the different clock rates?
I don't understand why you're comparing a software rasterizer to a hardware-accelerated one. It's not an apples-to-apples comparison; there shouldn't be any relation between the two.
LLVMpipe's progress over time is what's most interesting. That, and comparisons with the classic Mesa software renderer.
I think it's just to give a meaningful point of reference...
... and maybe a response to all the people saying that software rasterizers will stomp low-end GPUs into the dirt