The Mesa On-Disk Shader Cache Has Been Revised Again (V5)


  • The Mesa On-Disk Shader Cache Has Been Revised Again (V5)

    Phoronix: The Mesa On-Disk Shader Cache Has Been Revised Again (V5)

    Timothy Arceri of Collabora has revised his massive patch-set that implements an on-disk shader cache for the Intel open-source driver...


  • #2
    Originally posted by atomsymbol
    I do not mean to sound like I am disputing the work on shader cache, but the primary reason for the patch-set being massive is that C/C++ has very limited meta-programming/reflection capabilities. It isn't a fault of the C/C++ language per se, but rather it can be interpreted as being the fault of the compiler.

    See also the original paper on Lisp by McCarthy (year 1960).
    I think I'm speaking for most of the people here when I say I don't understand what you mean.



    • #3
      Originally posted by atomsymbol
      the primary reason for the patch-set being massive is that C/C++ has very limited meta-programming/reflection capabilities. It isn't a fault of the C/C++ language
      There is no "C/C++" language. The code in question is written in C; the C++ language has wonderful meta-programming capabilities.



      • #4
        Originally posted by atomsymbol
        I do not mean to sound like I am disputing the work on shader cache, but the primary reason for the patch-set being massive is that C/C++ has very limited meta-programming/reflection capabilities. It isn't a fault of the C/C++ language per se, but rather it can be interpreted as being the fault of the compiler.
        Hogwash. We've had a shader cache above GL (using the shader binary read-back extension) for years, and the code involved is tiny, in pure C. It has nothing to do with C or C++ or "meta-programming" of any sort. Loading the stored/cached binary shader is one line of C (four lines if you include the "zero-copy isn't working, so we'll do a copy" fallback, plus one line for opening the cache archive). It would be no smaller in any other language. The rest of loading a shader binary stored on disk into a program is a bunch of telling GL things, and it would be the same amount of code regardless of language: you need to look up symbols and map them to enums. Saving the shader binary is also one line of code; it gets stuffed into an archive file and compressed.
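
        As a rough illustration of the approach described above, here is a minimal sketch of a GL-level binary cache built on glGetProgramBinary/glProgramBinary (GL_ARB_get_program_binary / OpenGL 4.1). The helper names, the single-file layout, and the omission of error checking are assumptions made for this sketch; it is not the code being referred to.

        Code:
        /* Sketch of an above-GL shader cache: store and reload are each one GL call,
         * everything else is plain file I/O.  Error checking omitted for brevity. */
        #include <epoxy/gl.h>   /* or any loader exposing GL 4.1 / ARB_get_program_binary */
        #include <stdio.h>
        #include <stdlib.h>

        /* Save: one call to glGetProgramBinary, then write format + blob to disk. */
        static void save_program(GLuint prog, const char *path)
        {
            GLint len = 0;
            glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &len);
            if (len <= 0)
                return;                         /* driver exposes no binary for us */

            void *blob = malloc(len);
            GLenum fmt = 0;
            glGetProgramBinary(prog, len, NULL, &fmt, blob);

            FILE *f = fopen(path, "wb");
            fwrite(&fmt, sizeof fmt, 1, f);
            fwrite(blob, 1, len, f);
            fclose(f);
            free(blob);
        }

        /* Load: one call to glProgramBinary with the stored blob and format. */
        static GLuint load_program(const char *path)
        {
            FILE *f = fopen(path, "rb");
            if (!f)
                return 0;

            GLenum fmt;
            fread(&fmt, sizeof fmt, 1, f);
            fseek(f, 0, SEEK_END);
            long size = ftell(f) - (long)sizeof fmt;
            fseek(f, sizeof fmt, SEEK_SET);

            void *blob = malloc(size);
            fread(blob, 1, size, f);
            fclose(f);

            GLuint prog = glCreateProgram();
            glProgramBinary(prog, fmt, blob, (GLsizei)size);
            free(blob);

            GLint ok = 0;
            glGetProgramiv(prog, GL_LINK_STATUS, &ok);
            if (!ok) {                          /* stale/foreign binary: recompile instead */
                glDeleteProgram(prog);
                return 0;
            }
            return prog;
        }

        The rest of a real cache (archive file, compression, mapping uniform/attribute names to locations after the reload) is bookkeeping on top of these two calls, whatever the language.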



        • #5
          Originally posted by raster

          Hogwash. We've had a shader cache above GL (using the shader binary read-back extension) for years, and the code involved is tiny, in pure C. It has nothing to do with C or C++ or "meta-programming" of any sort. Loading the stored/cached binary shader is one line of C (four lines if you include the "zero-copy isn't working, so we'll do a copy" fallback, plus one line for opening the cache archive). It would be no smaller in any other language. The rest of loading a shader binary stored on disk into a program is a bunch of telling GL things, and it would be the same amount of code regardless of language: you need to look up symbols and map them to enums. Saving the shader binary is also one line of code; it gets stuffed into an archive file and compressed.
          Sure, but such an implementation would unfortunately be useless with Mesa, which doesn't implement glProgramBinary in any of its drivers. From a cursory look at the patches, the majority seems to deal with proper (de)serialization, not bookkeeping.



          • #6
            Originally posted by Ancurio

            Sure, but such an implementation would unfortunately be useless with Mesa, which doesn't implement glProgramBinary in any of its drivers. From a cursory look at the patches, the majority seems to deal with proper (de)serialization, not bookkeeping.
            I'm probably missing something, but according to mesamatrix, glProgramBinary was implemented in all drivers two years ago.



            • #7
              Originally posted by trek

              I'm probably missing something, but according to mesamatrix, glProgramBinary was implemented in all drivers two years ago.
              It is a no-op. Mesa's GL_NUM_SHADER_BINARY_FORMATS returns 0, AFAIK.
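
              For completeness, this is easy to check at run time. A minimal sketch (GL_NUM_PROGRAM_BINARY_FORMATS is the query tied to glProgramBinary/glGetProgramBinary; the helper name is made up for illustration):

              Code:
              #include <epoxy/gl.h>   /* any loader exposing GL 4.1 / ARB_get_program_binary */
              #include <stdio.h>

              /* Returns non-zero only if the driver exposes at least one binary format.
               * A driver can advertise the glProgramBinary entry points yet report 0
               * formats here, which makes the "support" effectively a no-op for caching. */
              static int program_binary_usable(void)
              {
                  GLint num_formats = 0;
                  glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &num_formats);
                  printf("program binary formats: %d\n", num_formats);
                  return num_formats > 0;
              }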



              • #8
                Originally posted by log0

                It is a no-op. Mesa's GL_NUM_SHADER_BINARY_FORMATS returns 0, AFAIK.
                Exactly.

                But a bigger issue (at least for AMD) is that there is no binary shader at all: the data needed to actually execute something on the hardware is only present just before the framebuffer is written to.

                That's why the AMD team wants to work on splitting their "shaders" into parts that are constant and parts that depend on the OpenGL state, something Vulkan made explicit. The shader cache would then only contain that constant core and would be reusable regardless of the additional settings a game/app provides.
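
                A purely hypothetical sketch of that split (none of these types exist in Mesa or the AMD driver; they only illustrate the idea): the cache holds the state-independent core once, and each combination of GL state hangs off it as a variant.

                Code:
                typedef struct { unsigned char sha1[20]; } core_key;   /* hash of the constant shader core */

                typedef struct {                       /* GL state that still affects final codegen    */
                    unsigned blend_mode;               /* example fields, purely illustrative          */
                    unsigned framebuffer_format;
                } state_key;

                typedef struct variant {
                    state_key state;                   /* specialization key for this variant          */
                    void *machine_code;                /* final hardware binary for that state         */
                    struct variant *next;
                } variant;

                typedef struct cache_entry {
                    core_key core;                     /* cached once, reused for every variant        */
                    variant *variants;                 /* list of state-dependent specializations      */
                    struct cache_entry *next;
                } cache_entry;

                On a lookup, missing the state key but hitting the core key would still save the expensive front-end compile; only the state-dependent back-end work has to be redone.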



                • #9
                  Originally posted by atomsymbol

                  I do not mean to sound like I am disputing the work on shader cache, but the primary reason for the patch-set being massive is that C/C++ has very limited meta-programming/reflection capabilities. It isn't a fault of the C/C++ language per se, but rather it can be interpreted as being the fault of the compiler.

                  See also the original paper on Lisp by McCarthy (year 1960).
                  Are you serious? Yeah, go program everything in Lisp and watch your performance tank.



                  • #10
                    Originally posted by atomsymbol

                    I do not mean to sound like I am disputing the work on shader cache, but the primary reason for the patch-set being massive is that C/C++ has very limited meta-programming/reflection capabilities. It isn't a fault of the C/C++ language per se, but rather it can be interpreted as being the fault of the compiler.

                    See also the original paper on Lisp by McCarthy (year 1960).
                    You sound like you're sure you know precisely what you're doing, so why not include SBCL with Mesa and write your own shader cache? You'll see just how productive you are when your GC is blocking a frame in Shadow of Mordor. To me, it seems like your plan is more likely to introduce jitter than to reduce it, achieving the exact opposite of the shader cache's goal.

