Mesa's Shader Cache Is In The Process Of Landing


  • #21
    Originally posted by pal666 View Post
    does not matter. it will be kept in page cache
    when it's read off the disk for the first time?
    no

    anyway
    let's say an older drive with... 5ms latency
    at 60 fps the game has ~11.5ms to complete the rendering (or, without vsync, whatever)

    Comment


    • #22
      Originally posted by gens View Post
      when it's read off the disk for the first time?
      no

      anyway
      let's say an older drive with... 5ms latency
      at 60 fps the game has ~11.5ms to complete the rendering (or, without vsync, whatever)
      Not to mention that latency for disks is always higher than the label rating. And it's variable; it's never the same from one seek to the next. But I think using the page cache should hide most of that. That's probably the best way to handle it, honestly.
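      The arithmetic behind that frame budget can be sketched as follows (illustrative only: a 60 fps frame is ~16.7 ms, and the ~11.5 ms figure above presumably subtracts the seek latency plus some extra overhead):

      ```python
      # Illustrative arithmetic for the frame budget discussed above.
      frame_budget_ms = 1000.0 / 60.0   # one frame at 60 fps, ~16.7 ms
      seek_latency_ms = 5.0             # assumed older-drive seek latency
      remaining_ms = frame_budget_ms - seek_latency_ms
      print(round(frame_budget_ms, 2))  # 16.67
      print(round(remaining_ms, 2))     # 11.67
      ```

      Losing almost a third of the frame budget to a single seek is why hiding disk latency behind the page cache matters here.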

      Comment


      • #23
        Originally posted by smitty3268 View Post
        I think the initial implementations were 100% fully shared, with no custom driver code needed at all. If there is some, I'd expect it to be pretty minimal.
        Most likely the only new code needed in the drivers is the code to reference the cache. Wouldn't the benefit gained be better performance from utilizing the cache rather than recompiling often-used GL code?
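        A hash-keyed on-disk cache of that sort can be sketched in a few lines. This is a hypothetical Python sketch, not Mesa's actual code (which is C and keys on more than the bare source text); `compile_fn` stands in for the driver's real compiler:

        ```python
        import hashlib
        import os

        def get_or_compile(source, cache_dir, compile_fn):
            """Return the compiled shader, compiling only on a cache miss.

            Hypothetical sketch: the cache key is the SHA-1 of the shader
            source, and the cached binary lives in one file per shader.
            """
            os.makedirs(cache_dir, exist_ok=True)
            key = hashlib.sha1(source.encode()).hexdigest()
            path = os.path.join(cache_dir, key)
            if os.path.exists(path):             # cache hit: skip the compile
                with open(path, "rb") as f:
                    return f.read()
            binary = compile_fn(source)          # cache miss: expensive compile
            with open(path, "wb") as f:          # store for the next run
                f.write(binary)
            return binary
        ```

        On a second run `compile_fn` is never called, which is where the load-time win would come from.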

        Comment


        • #24
          OpenGL 5 should fix most of this. Since the shaders will be pre-compiled into bytecode, there won't be delays/hitches because of the drivers having to compile shaders all the time.

          Comment


          • #25
            Originally posted by ua=42 View Post
            OpenGL 5 should fix most of this. Since the shaders will be pre-compiled into bytecode, there won't be delays/hitches because of the drivers having to compile shaders all the time.
            Khronos (in my opinion) does a pretty good job of getting decent specs out. Maybe not in a timely manner, but when it happens it's good stuff. nVidia will probably be the first to support it with their proprietary driver. I'm sure it will be a few years from now before we see it in the OSS driver.

            In the meantime, though, this work seems good.

            Comment


            • #26
              Originally posted by ua=42 View Post
              OpenGL 5 should fix most of this. Since the shaders will be pre-compiled into bytecode, there won't be delays/hitches because of the drivers having to compile shaders all the time.
              You wish. If Khronos picks a bytecode, the driver will simply pass it to the compiler: the slow, slow LLVM* in radeonsi's case. Little to no savings in time.

              * LLVM is fast compared to GCC, but horribly slow for realtime use.

              Comment


              • #27
                Originally posted by curaga View Post
                You wish. If Khronos picks a bytecode, the driver will simply pass it to the compiler: the slow, slow LLVM* in radeonsi's case. Little to no savings in time.

                * LLVM is fast compared to GCC, but horribly slow for realtime use.

                Shaders will probably be distributed in an IR at the same level as LLVM IR (or a future version of LLVM IR itself), with half the optimizations already done.

                Comment


                • #28
                  Not to mention I've had individual shaders take up to 30 seconds each to compile (Natural Selection 2 on Linux is sh$t, don't buy it). I'm pretty sure the shader cache will help, and the OpenGL 5 shader IR will be a serious improvement.

                  Comment


                  • #29
                    Originally posted by ua=42 View Post
                    Not to mention I've had individual shaders take up to 30 seconds each to compile (Natural Selection 2 on Linux is sh$t, don't buy it). I'm pretty sure the shader cache will help, and the OpenGL 5 shader IR will be a serious improvement.
                    If the shaders were overly complex or badly written, then yes, there'd be a long compile time. Maybe if shader writers took the time to write their shader code carefully, there might not be a big problem.

                    Comment


                    • #30
                      Originally posted by smitty3268 View Post
                      I think the initial implementations were 100% fully shared, with no custom driver code needed at all. If there is some, I'd expect it to be pretty minimal.
                      Yes, I was one of the guys testing it on AMD hardware.

                      Originally posted by GreatEmerald View Post
                      Nice, this should help with keeping stutters down.
                      No.

                      Originally posted by curaga View Post
                      ...or it will increase them if you're on an HDD and it is busy.
                      Also no. Why not? Well, there's already an in-memory shader cache handling that. The on-disk cache will only speed up the first load of a shader (read: game and/or level loading), and even there the speedup isn't as high as you would expect it to be.

                      Originally posted by Adarion View Post
                      Where is that shader cache located?
                      ~/.cache/mesa. It's a directory with one file per cached shader; each file uses the shader's SHA-1 hash as its filename.
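                      The naming scheme described above can be illustrated with a short sketch. The inputs here are hypothetical (the real cache key is computed over more than the bare source text), but the shape of the filename follows:

                      ```python
                      import hashlib
                      import os

                      # Derive a cache filename the way the description above suggests:
                      # SHA-1 hex digest, one file per shader under ~/.cache/mesa.
                      shader_source = b"void main() { gl_FragColor = vec4(1.0); }"
                      digest = hashlib.sha1(shader_source).hexdigest()
                      cache_path = os.path.join(os.path.expanduser("~/.cache/mesa"), digest)
                      print(len(digest))   # 40: SHA-1 is 160 bits, i.e. 40 hex characters
                      ```

                      Content-addressed filenames like this make lookups trivial: hash the shader, stat the file, and a hit means the compile can be skipped.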

                      Comment
