Phoronix: Gallium3D LLVMpipe Starts To Smoke
While it's rare for a few days to pass at Phoronix without pulling the latest development code for Mesa / Gallium3D and the Linux kernel DRM in order to run updated Radeon, Intel, and Nouveau Linux graphics benchmarks, LLVMpipe isn't benchmarked as often. LLVMpipe is the new CPU-based software rendering driver for Gallium3D that leverages the Low-Level Virtual Machine (LLVM) to provide better performance than the classic Mesa software rasterizer or Softpipe. Fortunately, upon running a brand new set of tests, the results show a bit more promise, but there is still much work ahead.
It's nice to see the state of llvmpipe, and it's very impressive for a purely software renderer. But how on Earth did you manage to get only 20 fps with an HD 5450 in OpenArena?
I get close to 60 fps at 1080p with an HD 4550 (which is roughly equal in performance), and this has been the case for about a year. 25 fps at 1024x768 is so low I can't reproduce it even with the low power profile and anti-performance tweaks.
Since all tests but one are capped at exactly 60 fps, I'm guessing it's the vsync issue again.
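If vsync is indeed the culprit, Mesa's DRI drivers honor the `vblank_mode` environment variable, which can be used to lift the 60 fps swap-interval cap before benchmarking. A minimal sketch (the benchmark command line is illustrative, not from the article):

```shell
# Tell Mesa's DRI drivers never to synchronize buffer swaps to the
# display's vertical refresh, so frame rates aren't capped at 60 fps.
export vblank_mode=0

# Then launch the benchmark in the same shell, e.g.:
# openarena +exec benchmark.cfg
```

This only affects Mesa-based drivers; the proprietary drivers have their own vsync controls.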
It would be interesting to test this on the six-core i7 again and compare against the last run to see the improvements.
Why compare LLVMpipe (which is a software driver) to RadeonHD and Nouveau (which are hardware-accelerated drivers)?
Why not compare LLVMpipe to the classic Mesa software rasterizer, to see the real improvement between the two solutions?
These are genuine questions; I might have misunderstood some subtlety about the rendering pipeline...
Originally Posted by FireBurn
Compare Mesa and LLVMpipe on CPUs with more and more cores, to see whether LLVM handles the extra cores better than GCC-compiled code does. (I'm making a lot of assumptions here; correct me if I'm wrong.)
I agree, dedicated hardware should always win
Originally Posted by Creak
That first OpenArena result looks a bit suspicious to me. Not the testing methodology itself, but it may be an opportunity to identify a CPU bottleneck in the Gallium/LLVM/Radeon architecture.
Originally Posted by FireBurn
One of these days I'm hoping to start profiling the r600 driver to see if there's any way I can help. Otherwise, I'll probably start fleshing out the GSoC Clover implementation a bit (implementing built-in math functions and such, conformance testing, etc.).
BTW... I'd like to buy a new graphics card. Is it a good time to go with ATI now? I prefer their "open" policy, but I don't know if the open source driver is good enough.
For instance, I have an nVidia card at the moment, and if I use Nouveau, the temperature climbs higher and higher, way above 100°C.
It took me some time to understand why my PC was freezing for no apparent reason...
Since the RadeonHD drivers are based on open specs and aren't reverse-engineered, are they more stable?
I don't play huge games on my Linux box, but I do 3D programming.
I'd be glad to help you in this as soon as I've got an ATI card (see previous message).
Originally Posted by Veerappan
I continue to wonder if this will ever breathe any life into the non-T&L Radeon Xpress 1100/200M.
I suppose the desired behavior would be to fall back to LLVMpipe instead of the current Softpipe for operations not supported by the GPU.
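A per-operation fallback would be a driver-side change, but Mesa already lets you select a software rasterizer wholesale via environment variables. A hedged sketch for trying llvmpipe on such hardware (assuming your Mesa build includes the llvmpipe Gallium driver):

```shell
# Force software rendering for this process only, and pick the
# llvmpipe Gallium driver rather than the classic swrast/Softpipe.
# Checking glxinfo's renderer string confirms which driver is active.
LIBGL_ALWAYS_SOFTWARE=1 GALLIUM_DRIVER=llvmpipe glxinfo | grep -i renderer
```

This is all-or-nothing, of course; it replaces the hardware driver entirely rather than falling back only for the unsupported operations.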