LLVMpipe's Geometry Processing Pipeline Kicks


  • phoronix
    started a topic LLVMpipe's Geometry Processing Pipeline Kicks


    Phoronix: LLVMpipe's Geometry Processing Pipeline Kicks

    A month ago we talked about Gallium3D's LLVMpipe performing well and providing a much better software rasterizer than what is available with classic Mesa. Using LLVMpipe and a modest CPU for acceleration, OpenArena was just about playable without any GPU assistance. Now, a month later, LLVMpipe is becoming an even more serious performer. LLVMpipe is now able to tap into the new geometry processing pipeline, and it's causing some major performance gains...

    http://www.phoronix.com/vr.php?view=ODE5Mw

  • Michael
    replied
    Initial LLVMpipe benchmarks will be published in the morning (this work is the direct reason for getting the benchmarks out this week).



  • marek
    replied
    Originally posted by rohcQaH
    I haven't found much information about geometry shaders on the web except for dry technical specs. If you got any good links, please share.
    Well, the dry technical GL_ARB_geometry_shader4 specification is as good as it gets. The most widespread misconception about geometry shaders is that they are a good match for tessellation - they really aren't and never have been; there are specialized shader stages for that in GL4.

    The geometry shader simply consumes one primitive of some type (points, lines, triangles) and emits one or more primitives of another type. It allows for converting point sprites and wide lines to triangles (pretty useless in GL), or generating lines for cel shading. The really important feature is that for each emitted primitive, you can choose the render target it should go to. This allows for rendering a scene to several textures, each time from a different position and orientation in space, using *one* draw call, making it possible to render to a whole cubemap or 3D texture in a single pass (a sketch follows below). You also have read-only access to the neighboring primitives, but that doesn't seem to be very useful (you cannot even compute smooth normals with it). There are many applications, but most of them are rather non-obvious, and in general geometry shaders aren't as useful as they have been claimed to be. It's certainly the most useless shader stage, and I think it's of little use overall; ask any professional game engine developer and they'll tell you the same....
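    (Not from marek's post - a minimal sketch of the layered-rendering trick he describes, using GL_ARB_geometry_shader4. It assumes a current OpenGL context exposing the extension and GLEW for the entry points; the uniform name face_mvp and the helper build_cubemap_program are made up for illustration, and error checking is omitted.)

    /* Re-emit each incoming triangle once per cubemap face; writing
     * gl_Layer routes the primitive to the chosen face, so a single
     * draw call fills the whole cubemap. */
    #include <GL/glew.h>

    static const char *gs_src =
        "#version 120\n"
        "#extension GL_ARB_geometry_shader4 : enable\n"
        "uniform mat4 face_mvp[6]; /* one view-projection per cube face */\n"
        "void main() {\n"
        "    for (int face = 0; face < 6; face++) {\n"
        "        for (int i = 0; i < gl_VerticesIn; i++) {\n"
        "            gl_Layer = face; /* pick the render target */\n"
        "            gl_Position = face_mvp[face] * gl_PositionIn[i];\n"
        "            EmitVertex();\n"
        "        }\n"
        "        EndPrimitive();\n"
        "    }\n"
        "}\n";

    GLuint build_cubemap_program(GLuint vs, GLuint fs)
    {
        GLuint gs = glCreateShader(GL_GEOMETRY_SHADER_ARB);
        glShaderSource(gs, 1, &gs_src, NULL);
        glCompileShader(gs);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, gs);
        glAttachShader(prog, fs);

        /* ARB_geometry_shader4 configures the GS via program parameters. */
        glProgramParameteriARB(prog, GL_GEOMETRY_INPUT_TYPE_ARB, GL_TRIANGLES);
        glProgramParameteriARB(prog, GL_GEOMETRY_OUTPUT_TYPE_ARB, GL_TRIANGLE_STRIP);
        glProgramParameteriARB(prog, GL_GEOMETRY_VERTICES_OUT_ARB, 18); /* 6 faces x 3 */

        glLinkProgram(prog);
        return prog;
    }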

    Originally posted by wswartzendruber
    So a Gallium3D driver like r300g can straight-up disallow any software fallback?
    Currently it's impossible for a Gallium driver to fall back to software entirely, so there is nothing to disallow. The GL state tracker does have some fallbacks, but it's unlikely you would ever hit them. The meta-driver called failover was originally designed for switching between a hardware and a software driver on the fly, but it has been unmaintained and rotting for a couple of years now.
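    (To illustrate the "nothing to disallow" point: a self-contained sketch with mocked-up types - not the real Gallium headers - of how a driver reports its capabilities up front. The cap names and r300_get_param are simplified stand-ins for pipe_screen's capability query; the state tracker simply never advertises a feature the driver says no to, so there is no per-feature fallback path to switch off.)

    #include <stdio.h>

    /* Mock of the Gallium capability query (real code uses enum pipe_cap
     * and struct pipe_screen from the Gallium headers). */
    enum pipe_cap { PIPE_CAP_GEOMETRY_SHADERS, PIPE_CAP_NPOT_TEXTURES };

    struct pipe_screen {
        int (*get_param)(struct pipe_screen *screen, enum pipe_cap cap);
    };

    /* Hypothetical r300g answer: no geometry shaders on this hardware. */
    static int r300_get_param(struct pipe_screen *screen, enum pipe_cap cap)
    {
        (void)screen;
        switch (cap) {
        case PIPE_CAP_GEOMETRY_SHADERS: return 0; /* never supported */
        case PIPE_CAP_NPOT_TEXTURES:    return 1;
        default:                        return 0;
        }
    }

    int main(void)
    {
        struct pipe_screen r300g = { r300_get_param };

        /* The state tracker simply never exposes the GL extension;
         * there is no software fallback to "disallow". */
        if (!r300g.get_param(&r300g, PIPE_CAP_GEOMETRY_SHADERS))
            printf("GL_ARB_geometry_shader4 not advertised\n");
        return 0;
    }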



  • agd5f
    replied
    The idea with Gallium is all or nothing. As previously noted, fallbacks are usually slower than just running the whole pipeline on the CPU directly, so if the GPU can't handle something, just do the whole thing on the CPU.



  • wswartzendruber
    replied
    So a Gallium3D driver like r300g can straight-up disallow any software fallback?



  • rohcQaH
    replied
    Originally posted by marek
    This is wrong; the geometry shader comes after the vertex shader.
    Thanks for the correction.

    I haven't found much information about geometry shaders on the web except for dry technical specs. If you got any good links, please share.



  • marek
    replied
    Originally posted by wswartzendruber
    Won't r300g utilize this for the parts of OpenGL 3 that require unimplemented functionality?
    r300g won't support OpenGL 3. We try as much as possible not to use any kind of software fallback: a dumb app may suddenly decide to use more features, and then the driver would pretty much become a software rasterizer. Nobody wants that. Moreover, this article is only about vertex processing using LLVM, which cannot be used for GL3 fragment processing. Anyway, it appears to be a lot slower than old r500 hardware, but still faster than swrast.

    Originally posted by rohcQaH
    openGL-call -> geometry shaders -> vertex shaders -> pixel shaders -> final image
    This is wrong; the geometry shader comes after the vertex shader.
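    (For reference, the standard GL order - not a quote from the thread:)

    OpenGL call -> vertex shader -> geometry shader -> rasterizer -> fragment (pixel) shader -> final image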

    Originally posted by curaga
    Are there plans to make llvmpipe the default software rasterizer?
    Well, it's logical, isn't it?



  • curaga
    replied
    Are there plans to make llvmpipe the default software rasterizer?



  • smitty3268
    replied
    There has been some talk about changing the current

    Gallium IR -> GPU compiled code

    to

    Gallium IR -> LLVM -> Gallium IR -> GPU compiled code

    which would avoid the need to modify LLVM to work with VLIW architectures while still allowing the general optimizations to be done. That would also instantly work for all hardware, instead of requiring new LLVM code for every new card.
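    (Not from smitty3268's post - a minimal, self-contained sketch of the middle step using the LLVM C API, assuming the legacy pass-manager interface from llvm-c. The Gallium IR/TGSI translators are the hypothetical part; here a trivial function stands in for the translated shader, and instcombine stands in for "the general optimizations".)

    #include <llvm-c/Core.h>
    #include <llvm-c/Transforms/Scalar.h> /* LLVMAddInstructionCombiningPass */

    int main(void)
    {
        /* Stand-in for "Gallium IR -> LLVM": build f(x) = (x + 1) + 1. */
        LLVMModuleRef mod = LLVMModuleCreateWithName("shader");
        LLVMTypeRef i32 = LLVMInt32Type();
        LLVMTypeRef fnty = LLVMFunctionType(i32, &i32, 1, 0);
        LLVMValueRef fn = LLVMAddFunction(mod, "f", fnty);
        LLVMBuilderRef b = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
        LLVMValueRef one = LLVMConstInt(i32, 1, 0);
        LLVMValueRef t = LLVMBuildAdd(b, LLVMGetParam(fn, 0), one, "t");
        LLVMBuildRet(b, LLVMBuildAdd(b, t, one, "r"));

        /* The generic, hardware-independent optimizations. */
        LLVMPassManagerRef pm = LLVMCreatePassManager();
        LLVMAddInstructionCombiningPass(pm);
        LLVMRunPassManager(pm, mod); /* folds the two adds into x + 2 */

        /* Stand-in for "LLVM -> Gallium IR": hand the optimized module
         * back; a real implementation would translate it to TGSI here. */
        LLVMDumpModule(mod);

        LLVMDisposePassManager(pm);
        LLVMDisposeBuilder(b);
        LLVMDisposeModule(mod);
        return 0;
    }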



  • bridgman
    replied
    I'm not aware of anyone using LLVM to generate shader code for GPUs right now, VLIW or scalar.

    LLVM is being used to generate optimized graphics IR, and is also being used to convert that IR into x86 code, but that's it AFAIK.

