Intel Aims To Hit On Performance, Plans LLVM Compiler
Phoronix: Intel Aims To Hit On Performance, Plans LLVM Compiler
Eric Anholt of Intel's Open-Source Technology Center had a few things to say yesterday at the 2012 GStreamer Conference about their open-source Linux graphics driver stack. Intel is aiming to hit hard on performance improvements and one of the interesting statements made is that they're now looking at moving to an LLVM-based shader compiler for a big performance win. Left 4 Dead 2 running on Mesa was also shown...
Nice! Right on the heels of the Windows vs. Linux comparison. Glad to find out Intel is aware of the performance deficiencies and is working to correct them.
Out of curiosity.
Will this jump to LLVM help G3D in any way??
a tech question
Can some expert please explain to me, in simple words, what a shader compiler and an IR are, and why LLVM would be an improvement in speed?
I smell a Valve console with an Intel CPU and graphics coming.
There's not a video of that presentation available somewhere, is there? I'd love to actually watch it.
OpenGL applications perform all kinds of operations with GLSL programs these days. These programs are compiled by a shader compiler, which translates and optimizes the GLSL code into a form that the GPU can execute efficiently.
Originally Posted by TeoLinuX
IR means "intermediate representation" and is (in a few different forms) what the shader program is translated into inside the compiler. It's a form that allows the compiler to more easily transform and optimize it, before finally emitting assembly code for the GPU.
LLVM should provide a good improvement in performance because our current compiler is lacking a lot of useful optimizations (and a lot of the infrastructure needed to implement them well!). LLVM provides both a good infrastructure and many of these needed optimization passes.
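To make the IR and "optimization pass" ideas concrete, here is a toy sketch in Python. The tuple-based IR and the constant-folding pass are invented for illustration; they are not Mesa's or LLVM's actual representations, just the general shape of what a compiler pass does.

```python
# Toy illustration of an IR and an optimization pass (constant folding).
# This is NOT Mesa's or LLVM's real IR -- just a minimal sketch of the idea.

def constant_fold(ir):
    """Replace ('add', dst, const, const) instructions with a folded constant."""
    out = []
    for op, dst, a, b in ir:
        if op == "add" and isinstance(a, (int, float)) and isinstance(b, (int, float)):
            # Both operands are known at compile time: compute the sum now,
            # so the GPU never has to execute the addition at runtime.
            out.append(("const", dst, a + b, None))
        else:
            out.append((op, dst, a, b))
    return out

# A shader statement like `color = 2.0 + 3.0;` might lower to:
ir = [("add", "color", 2.0, 3.0)]
print(constant_fold(ir))  # [('const', 'color', 5.0, None)]
```

A real compiler runs dozens of passes like this (dead-code elimination, common-subexpression elimination, and so on) over the IR before emitting GPU instructions, which is the infrastructure LLVM already provides.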
Graphical applications running on modern GPUs make use of shaders: small programs which are executed for every vertex ("vertex shaders"), for every pixel ("fragment" or "pixel" shaders), and so on.
Originally Posted by TeoLinuX
The shader programs typically run on specialized GPU hardware, basically a number of small processors capable of running many copies of the same program in parallel on different data (e.g. different vertices or different fragments). That model is usually referred to as SIMD (single instruction, multiple data).
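The SIMD model described above can be sketched in a few lines: one "shader" function, many data elements, the same program applied to each. The fragment format and the brightness-halving "program" here are made up for illustration; a real GPU runs this in hardware lanes, not a Python loop.

```python
# Toy sketch of the SIMD execution model: one shader program applied to
# many data elements (fragments) in lockstep. Real GPUs do this across
# hardware lanes; this loop only illustrates the single-instruction,
# multiple-data idea.

def fragment_shader(frag):
    # Hypothetical per-fragment program: halve the brightness of an RGB color.
    r, g, b = frag
    return (r * 0.5, g * 0.5, b * 0.5)

fragments = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
shaded = [fragment_shader(f) for f in fragments]  # same program, different data
print(shaded)  # [(0.5, 0.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.5)]
```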
The shader programs may be written as part of the application program, or may be generated by the driver stack in order to emulate older fixed function graphics hardware on a modern shader-based GPU. Since different GPUs have different hardware instruction sets for the SIMD processors, shader programs are written in a portable high level language (typically GLSL for OpenGL, HLSL for DX) and JIT-compiled down to GPU-specific hardware instructions at runtime.
In principle each driver could include a big compiler stack that goes directly from GLSL to hardware instructions. In practice, the most common approach is to split the compiler code into two parts -- one going from the high-level language (e.g. GLSL) to an intermediate representation (IR), and another going from the IR to hardware instructions. This approach significantly reduces the amount of GPU-specific code.
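The two-part split can be sketched like this: a shared frontend lowers source code to an IR, and a small per-GPU backend lowers the IR to "hardware" instructions. The mini source language, the IR tuples, and the instruction mnemonics below are all invented for illustration; only the structure (shared frontend, swappable backends) reflects the real design.

```python
# Toy sketch of the frontend/backend split described above. Everything here
# (the "mul"-only language, the IR, the mnemonics) is hypothetical.

def frontend(source):
    # Shared, GPU-independent part: parse "dst = a * b" into one IR instruction.
    dst, expr = source.split(" = ")
    a, b = expr.split(" * ")
    return [("mul", dst, a, b)]

def backend_gpu_a(ir):
    # Hypothetical backend for GPU family A.
    return ["MUL {}, {}, {}".format(dst, a, b) for op, dst, a, b in ir]

def backend_gpu_b(ir):
    # Hypothetical backend for GPU family B, with a different instruction set.
    return ["fmul {} {} {}".format(dst, a, b) for op, dst, a, b in ir]

ir = frontend("out = x * y")  # written once, shared by every driver
print(backend_gpu_a(ir))      # ['MUL out, x, y']
print(backend_gpu_b(ir))      # ['fmul out x y']
```

Only the small backend functions are GPU-specific, which is exactly why the split saves so much duplicated work across drivers.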
The first part of the compiler stack in Mesa is generally referred to as the GLSL compiler these days, while the second part is referred to as the shader compiler. Strictly speaking, you could call the whole stack a shader compiler, since it compiles shaders written in GLSL down to hardware instructions.
In principle, existing compiler frameworks can't do anything that couldn't also be done in a purpose-written shader compiler, but in practice writing a good optimizing shader compiler is a *lot* of work. The developers are hoping that using LLVM will let them produce a shader compiler that generates better-performing code than they could get by spending a similar amount of time on a custom shader compiler, which seems reasonable.
Thanks to both of you.
Now the picture is clearer.
Speaking of threading, my recollection is that Marek added a degree of multithreading to the r300g driver (using a helper thread to perform the command-submission calls into DRM) a year or two ago, and I thought he made the same change to r600g as well. I had been under the impression that the other Mesa drivers did something similar.
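The helper-thread pattern mentioned above is just a producer/consumer queue: the driver thread hands off finished command buffers and returns immediately, while a worker thread performs the (slow) submission calls. This is a generic sketch of that pattern, not the actual r300g code; the `submissions` list stands in for the real ioctl into the kernel DRM layer.

```python
# Generic sketch of the helper-thread submission pattern (not r300g itself).
import queue
import threading

submissions = []          # stand-in for work actually handed to the kernel
q = queue.Queue()

def submit_worker():
    while True:
        cmd = q.get()
        if cmd is None:   # sentinel value: time to shut down
            break
        submissions.append(cmd)  # stand-in for the real submission ioctl

worker = threading.Thread(target=submit_worker)
worker.start()

# The "driver" thread queues command buffers and returns quickly, instead
# of blocking on each submission call itself.
for i in range(3):
    q.put("cmdbuf-{}".format(i))

q.put(None)               # tell the worker to exit
worker.join()
print(submissions)        # ['cmdbuf-0', 'cmdbuf-1', 'cmdbuf-2']
```

Because a single worker drains the queue in FIFO order, the command buffers are still submitted in the order the driver produced them, which matters for GPU command streams.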