Embedding A Code Compiler With GPU Drivers

Written by Michael Larabel in Display Drivers on 14 February 2009 at 08:38 AM EST. Page 1 of 2. 2 Comments.

During the X.Org meetings at FOSDEM, Stephane Marchesin discussed what he and other open-source developers are doing by integrating a code compiler (LLVM) with the Gallium3D driver architecture. By strapping the Low-Level Virtual Machine to Gallium3D, developers are hoping to use the power of this relatively new compiler infrastructure to provide advanced GPU shader optimizations. This is not an easy task, but they believe it can be accomplished with beneficial results, and they are making progress.

The idea of embedding LLVM into a graphics framework to provide shader compilation and optimization capabilities is not original to Linux developers; in fact, it is already being done by Apple. In Mac OS X, the Low-Level Virtual Machine is used in a similar way, as a shader optimization/generation engine targeting both the CPU and GPU. Right now, though, the open-source 3D drivers on Linux are running at a snail's pace, as many of you will agree, so any effort to improve performance will certainly be welcomed.

With this work, Stephane and others are targeting advanced optimizations where the intermediate language will support all architectures and all optimizations. As much code as possible will be shared between the Gallium3D drivers in order to keep the optimizations driver-independent, which is also important considering there is no surplus of X.Org / Mesa developers. Stephane's design also targets multiple CPU architectures, such as being able to use SSE optimizations on x86.

LLVM was picked over other compiler infrastructures, such as GCC, largely because of its modular design. Once this design is implemented, Gallium3D's intermediate representation (IR) language will no longer be TGSI (Tungsten Graphics Shader Infrastructure); instead, shaders will be translated into LLVM IR, which each back-end then consumes, after the necessary Gallium3D modifications are made. This work affects only shaders in Gallium3D and nothing else. An LLVM back-end must be written for each Gallium3D hardware driver, and an LLVM back-end can even be provided for fixed-pipeline GPU drivers.
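To picture what such a translation step does, here is a toy sketch in Python. It is purely illustrative and assumes invented instruction tuples and register names — it is not the actual Gallium3D or Mesa code — but the opcode names (MUL, ADD, MAD) mirror TGSI, and the output mimics LLVM IR's textual form:

```python
# Toy illustration of translating TGSI-style shader instructions into
# LLVM-IR-like text. The instruction format and emitter are hypothetical;
# only the opcode names and the fmul/fadd IR syntax follow real conventions.
def translate(tgsi_instrs):
    """Map (opcode, dst, srcs) tuples to lines of LLVM-style IR text."""
    ir = []
    for op, dst, srcs in tgsi_instrs:
        if op == "MUL":
            ir.append(f"%{dst} = fmul float %{srcs[0]}, %{srcs[1]}")
        elif op == "ADD":
            ir.append(f"%{dst} = fadd float %{srcs[0]}, %{srcs[1]}")
        elif op == "MAD":  # multiply-add: dst = src0 * src1 + src2
            ir.append(f"%{dst}_t = fmul float %{srcs[0]}, %{srcs[1]}")
            ir.append(f"%{dst} = fadd float %{dst}_t, %{srcs[2]}")
        else:
            raise NotImplementedError(op)
    return ir

# A MAD lowers to a multiply followed by an add in the generic IR;
# a hardware back-end could later re-fuse the pair where the GPU
# has a native multiply-add instruction.
print("\n".join(translate([("MAD", "r0", ["in0", "in1", "in2"])])))
```

The point of lowering everything into one common IR is that optimization passes written once (dead-code elimination, constant folding, and so on) then apply to every driver, which is exactly the code-sharing goal described above.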

Today they have a TGSI to LLVM IR translator written, along with partial LLVM code generation on the Nouveau side for NVIDIA hardware. There are partial LLVM shader code generators for the NV40, NV50, NV04, and the ATI R300 series. The ATI LLVM + Gallium3D work is being done by Corbin Simpson, who got his first successful build in late December. GPU code generation for vertex shaders is also working.

What remains is to finish the LLVM back-ends, iron out code-generation bugs, add new LLVM optimization passes, and make changes in LLVM itself (more intermediate-level instructions and straightforward support for VLIW). As for a time-frame for when this work should be in good standing, Stephane believes it could be working well in about one year.

