For many months there has been a "shader optimization" branch of Mesa/R600g that seeks to noticeably boost the performance of the AMD R600 Gallium3D driver. While this work by Vadim Girlin once looked unlikely to be merged, after being revived and cleaned up it may now reach mainline Mesa/Gallium3D as a new performance-boosting option.
Vadim Girlin had been working on shader optimizations for some time to generate shader code more efficiently, and the back-end has evolved quite a bit in recent months. What had diminished this code's prospects was that it doesn't use the R600 LLVM GPU back-end, which will eventually become the default for AMD's Gallium3D driver since it's needed for OpenCL/GPGPU support. With this custom back-end not using LLVM, it looked like the work wouldn't be merged, but now the story is different.
Some time has been spent cleaning up the code and better isolating this shader optimization back-end from the rest of the R600g driver. The back-end is now in a rather standalone state that hooks into the R600g driver in only a few spots, with the shader code passed to/from the "r600-sb" component being hardware bytecode. With this lower maintenance burden, it stands a better chance of being merged, at least as a temporary and experimental Mesa build option.
Vadim writes, "this branch already works and provides good results in many cases. That's why I think it makes sense to merge this branch as a non-default backend at least as a temporary solution for shader performance problems."
Aside from faster performance than the open-source Radeon driver's current default back-end and the LLVM back-end, there are also debugging benefits and other advantages to toying with this "r600-sb" implementation.
Some initial benchmark results and other information on the proposal to merge r600-sb into mainline Mesa can be found in this mailing list message. So far the feedback is fairly positive about merging this as at least an experimental option that can be turned on by interested users/developers.
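For those wanting to experiment once the branch lands, the backend is expected to be opt-in rather than default. A minimal sketch of how such a toggle is typically flipped in Mesa is below; the exact variable name (`R600_DEBUG=sb`) reflects how r600-sb was later exposed in mainline Mesa and should be treated as an assumption at this proposal stage:

```shell
# Opt in to the experimental r600-sb shader backend for GL apps
# launched from this shell. The variable name R600_DEBUG=sb is how
# the backend was later exposed in mainline Mesa; treat it as an
# assumption relative to this merge proposal.
export R600_DEBUG=sb

# Then run any OpenGL application as usual, e.g.:
#   glxgears
```

Since it is just an environment variable, the setting can also be applied per-invocation (e.g. `R600_DEBUG=sb glxgears`) without affecting other applications.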