Intel's Investing In Some Mesa Optimizations
Earlier this month an Intel employee began asking on the Mesa-dev list about optimizing Mesa's shader compiler. This Intel employee was not one of the usual Open-Source Technology Center developers working on Intel's Linux graphics stack as part of Keith Packard's team, but an unfamiliar name: Benjamin Segovia. Ben is from Intel's Advanced Graphics Lab, where he previously worked on ray-tracing techniques, but lately he seems to be dedicating at least some of his Intel effort towards optimizing Mesa...
Does this work also benefit Gallium, or just classic Mesa?
I'm guessing that this is mostly for classic Mesa and not Gallium, but I don't have anything concrete to go on.
The best reason I can come up with for that hunch is that there's a separate Mesa state tracker used when Gallium3D is in play, and nothing mentioned these optimizations being developed for the Mesa state tracker. It would also make some sense since Intel isn't currently focusing on Gallium for its integrated graphics chips, but on classic Mesa (from all I've read).
Ho hum, I seem to recall that the GLSL compiler is shared between Gallium3D and classic Mesa, but I don't have anything to back that up.
Maybe somebody with a better understanding of things can chime in?
Now, what kind of speed difference would 40% fewer GPU instructions make?
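As a rough, hypothetical back-of-envelope answer (the 40% figure is from the thread; everything else here is an assumption, not a measurement): the best case is a shader that is entirely instruction-throughput bound, and anything less than that shrinks the gain per Amdahl's law. A quick sketch:

```python
# Hypothetical sketch: upper bound on speedup from a 40% instruction-count
# reduction, assuming the shader is fully instruction-throughput bound.
instruction_reduction = 0.40
max_speedup = 1 / (1 - instruction_reduction)
print(f"Max speedup if fully instruction-bound: {max_speedup:.2f}x")  # ~1.67x

def overall_speedup(bound_fraction, local_speedup):
    # Amdahl's law: only the instruction-bound fraction of the frame time
    # benefits; the rest (texture fetches, memory stalls, etc.) does not.
    return 1 / ((1 - bound_fraction) + bound_fraction / local_speedup)

# Assumed example: a shader that spends half its time instruction-bound.
print(f"If 50% instruction-bound: {overall_speedup(0.5, max_speedup):.2f}x")  # 1.25x
```

So the real-world gain depends heavily on where each shader's bottleneck actually is, which is probably why nobody in the thread can give a single number.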