LLVMpipe's Geometry Processing Pipeline Kicks


  • #21
    Yes, LLVM was used this way by Apple in their OpenGL stack.




    • #22
      Originally posted by not.sure View Post
      Wasn't LLVM also planned to be used to compile/optimize/generate shader code for specific GPUs? Or is that something entirely different?
      I'd like this answered as well, because, aside from the general answer, I'm most curious how this would work for VLIW designs, given that LLVM has no support whatsoever for them.



      • #23
        I'm not aware of anyone using LLVM to generate shader code for GPUs right now, VLIW or scalar.

        LLVM is being used to generate optimized graphics IR, and is also being used to convert that IR into x86 code, but that's it AFAIK.
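        For the curious, the "convert that IR into x86 code" part is just LLVM's ordinary JIT machinery. Below is a minimal, self-contained sketch using the public LLVM-C API; it is not the gallivm/llvmpipe code, only an illustration of building a trivial function in LLVM IR and JIT-compiling it to native code (exact entry points vary a bit between LLVM releases).

        /* Minimal LLVM-C sketch (2.x/3.x-era API): build "float add(float, float)"
         * in LLVM IR and JIT it to native code.  Purely illustrative -- this is
         * not the gallivm/llvmpipe code path. */
        #include <stdio.h>
        #include <llvm-c/Core.h>
        #include <llvm-c/ExecutionEngine.h>
        #include <llvm-c/Target.h>

        int main(void)
        {
            LLVMLinkInJIT();                  /* newer LLVM releases: LLVMLinkInMCJIT() */
            LLVMInitializeNativeTarget();

            LLVMModuleRef mod = LLVMModuleCreateWithName("demo");

            /* declare: float add(float a, float b) */
            LLVMTypeRef args[2] = { LLVMFloatType(), LLVMFloatType() };
            LLVMValueRef fn = LLVMAddFunction(mod, "add",
                                  LLVMFunctionType(LLVMFloatType(), args, 2, 0));

            /* body: return a + b; */
            LLVMBuilderRef b = LLVMCreateBuilder();
            LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
            LLVMBuildRet(b, LLVMBuildFAdd(b, LLVMGetParam(fn, 0),
                                             LLVMGetParam(fn, 1), "sum"));

            /* JIT-compile the module and call the generated native code. */
            LLVMExecutionEngineRef ee;
            char *err = NULL;
            if (LLVMCreateJITCompilerForModule(&ee, mod, 2, &err)) {
                fprintf(stderr, "JIT error: %s\n", err);
                return 1;
            }
            float (*add)(float, float) =
                (float (*)(float, float)) LLVMGetPointerToGlobal(ee, fn);
            printf("1.5 + 2.25 = %f\n", add(1.5f, 2.25f));

            LLVMDisposeBuilder(b);
            LLVMDisposeExecutionEngine(ee);
            return 0;
        }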



        • #24
          There has been some talk about changing the current

          Gallium IR -> GPU compiled code

          to

          Gallium IR -> LLVM -> Gallium IR -> GPU compiled code

          which would avoid the need to modify LLVM to work with VLIW architectures while still allowing the general optimizations to be done. That would also work immediately for all hardware, instead of requiring new LLVM code for every new card.
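          For what it's worth, the "general optimizations" step of that round trip would mostly be LLVM's existing target-independent passes. Here is a rough sketch of what running them over a module looks like with the LLVM-C API; the hard part, translating Gallium IR (TGSI) to LLVM IR and back, is omitted, and optimize_shader_module is a made-up name used only for illustration.

          /* Hypothetical sketch: the "general optimizations" step of the proposed
           * Gallium IR -> LLVM -> Gallium IR round trip.  The translation to and
           * from TGSI is not shown; only LLVM's target-independent passes are. */
          #include <llvm-c/Core.h>
          #include <llvm-c/Transforms/Scalar.h>

          static void optimize_shader_module(LLVMModuleRef mod)
          {
              LLVMPassManagerRef pm = LLVMCreatePassManager();

              LLVMAddConstantPropagationPass(pm);   /* fold constants */
              LLVMAddInstructionCombiningPass(pm);  /* peephole/algebraic simplification */
              LLVMAddGVNPass(pm);                   /* redundancy elimination */
              LLVMAddCFGSimplificationPass(pm);     /* clean up control flow */

              LLVMRunPassManager(pm, mod);
              LLVMDisposePassManager(pm);
          }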



          • #25
            Are there plans to make llvmpipe the default software rasterizer?



            • #26
              Originally posted by wswartzendruber View Post
              Won't r300g utilize this for the parts of OpenGL 3 that require unimplemented functionality?
              r300g won't support OpenGL 3. We try as much as possible not to use any kind of software fallback: a dumb app might suddenly decide to use more features, and then the driver would pretty much become a software rasterizer. Nobody wants that. Moreover, this article is only about vertex processing using LLVM, which cannot be used for GL3 fragment processing. Anyway, it appears to be a lot slower than old r500 hardware, but still faster than swrast.

              Originally posted by rohcQaH View Post
              openGL-call -> geometry shaders -> vertex shaders -> pixel shaders -> final image
              This is wrong, the geometry shader comes after the vertex shader.

              Originally posted by curaga View Post
              Are there plans to make llvmpipe the default software rasterizer?
              Well, it's logical, isn't it?



              • #27
                Originally posted by marek View Post
                This is wrong, the geometry shader comes after the vertex shader.
                thanks for the correction.

                I haven't found much information about geometry shaders on the web except for dry technical specs. If you have any good links, please share.



                • #28
                  So a Gallium3D driver like r300g can straight-up disallow any software fallback?



                  • #29
                    The idea with Gallium is all or nothing. As previously noted, fallbacks are usually slower than just rendering the whole pipeline on the CPU directly, so if the GPU can't handle something, just do the whole thing on the CPU.



                    • #30
                      Originally posted by rohcQaH View Post
                      I haven't found much information about geometry shaders on the web except for dry technical specs. If you got any good links, please share.
                      Well, the dry technical GL_ARB_geometry_shader4 specification is as good as it gets. The most widespread misconception about geometry shaders is that they are a good match for tessellation. They really aren't, and never have been; there are specialized shader stages for that in GL4.

                      The geometry shader simply consumes one primitive of some type (points, lines, triangles) and emits one or more primitives of another type. It allows converting point sprites and wide lines to triangles (pretty useless in GL), or generating lines for cel shading.

                      The really important feature is that for each emitted primitive you can choose the render target it should go to. This allows rendering a scene to several textures, each time from a different position and orientation in space, using *one* draw call, which makes it possible to render to a whole cubemap or 3D texture in a single pass. You also get read-only access to a couple of surrounding primitives, but that doesn't seem to be very useful (you cannot even compute smooth normals with it).

                      There are many applications, but most of them are rather non-obvious, and in general geometry shaders aren't as useful as they have been claimed to be. It's certainly the most useless shader stage, and I think it's useless in general; ask any professional game engine developer and he will tell you the same....
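                      To make the layered-rendering point concrete, here is a rough GL_ARB_geometry_shader4-style shader, written as a C string literal, that re-emits every incoming triangle once per cubemap face and routes each copy to its face via gl_Layer. The face_matrix uniform and the surrounding setup are made up for illustration; this is an untested sketch, not code from any driver.

                      /* Sketch: single-pass cubemap rendering with a geometry shader
                       * (GL_ARB_geometry_shader4 style).  Input/output primitive types would be
                       * set from the API, e.g.:
                       *   glProgramParameteriARB(prog, GL_GEOMETRY_INPUT_TYPE_ARB,  GL_TRIANGLES);
                       *   glProgramParameteriARB(prog, GL_GEOMETRY_OUTPUT_TYPE_ARB, GL_TRIANGLE_STRIP);
                       *   glProgramParameteriARB(prog, GL_GEOMETRY_VERTICES_OUT_ARB, 18);
                       * "face_matrix" is a made-up uniform holding one view-projection per face. */
                      static const char *cubemap_gs_source =
                          "#version 120\n"
                          "#extension GL_ARB_geometry_shader4 : enable\n"
                          "uniform mat4 face_matrix[6];\n"
                          "void main()\n"
                          "{\n"
                          "    for (int face = 0; face < 6; face++) {\n"
                          "        for (int v = 0; v < 3; v++) {\n"
                          "            gl_Layer = face;   /* route this copy to one cubemap face */\n"
                          "            gl_Position = face_matrix[face] * gl_PositionIn[v];\n"
                          "            EmitVertex();\n"
                          "        }\n"
                          "        EndPrimitive();\n"
                          "    }\n"
                          "}\n";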

                      Originally posted by wswartzendruber View Post
                      So a Gallium3D driver like r300g can straight-up disallow any software fallback?
                      Currently it's impossible for a Gallium driver to fall back to software entirely, so there is nothing to disallow. The GL state tracker does have some fallbacks, but it's unlikely you would ever hit them. The meta-driver called failover was originally designed to switch between a hardware and a software driver on the fly, but it has been unmaintained and rotting for a couple of years now.

