LLVMpipe/Lavapipe Patches Land To Allow For Greater CPU Saturation
LLVMpipe, the software OpenGL implementation that runs atop the CPU, and Lavapipe, its Vulkan equivalent, have seen a recent uptick in activity.
The work talked about last week for faster vertex and fragment shader processing has been merged into Mesa 22.1. It is work by David Airlie that he originally started some two years ago while tackling tessellation shaders and then recently revisited. More recently, developers have noticed this work can boost performance of the ParaView data visualization software by 13~67% depending upon the test scene.
The merge request has landed and, long story short, this overlapping of vertex and fragment processing should allow for more complete CPU saturation by the LLVMpipe OpenGL and Lavapipe Vulkan software drivers.
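To illustrate the general idea of overlapping pipeline stages (this is a hypothetical toy sketch, not Mesa's actual implementation): fragment work for one batch can run on worker threads while the next batch's vertex work proceeds, so neither stage alone leaves CPU cores idle. The `vertex_stage` and `fragment_stage` functions below are placeholder stand-ins.

```python
# Toy sketch of overlapped vertex/fragment processing (NOT Mesa code):
# fragment shading for batch N runs in a thread pool while the main
# thread already does vertex shading for batch N+1.
from concurrent.futures import ThreadPoolExecutor

def vertex_stage(batch):
    # Hypothetical stand-in for vertex shading: transform each vertex.
    return [v * 2 for v in batch]

def fragment_stage(verts):
    # Hypothetical stand-in for fragment shading: reduce to a result.
    return sum(verts)

def render(batches):
    results = []
    with ThreadPoolExecutor() as pool:
        pending = None
        for batch in batches:
            verts = vertex_stage(batch)           # vertex work for batch N
            if pending is not None:
                results.append(pending.result())  # collect batch N-1 fragments
            # Fragment work overlaps the next loop iteration's vertex work.
            pending = pool.submit(fragment_stage, verts)
        if pending is not None:
            results.append(pending.result())
    return results
```

The same principle, applied to real shader threads rather than this toy pipeline, is what lets the drivers keep more cores busy at once.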
Mesa 22.1 will be out around late May or early June as the Q2 feature update to this open-source Linux graphics driver stack. There is still roughly another one to two months until the Mesa 22.1 feature freeze, so we'll see what more work lands in time for LLVMpipe/Lavapipe and the open-source drivers at large.