AMD AI Compiler Engineer Lands A Generic MLIR To SPIR-V Pass In LLVM 19


  • AMD AI Compiler Engineer Lands A Generic MLIR To SPIR-V Pass In LLVM 19

    Phoronix: AMD AI Compiler Engineer Lands A Generic MLIR To SPIR-V Pass In LLVM 19

    Merged on Friday to LLVM 19 Git is a generic MLIR to SPIR-V pass for lowering the Multi-Level Intermediate Representation down into SPIR-V as the intermediate representation consumed by OpenGL / OpenCL / Vulkan drivers...
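    As a rough illustration (the exact flag and dialects here are my assumption, not taken from the article), the generic conversion is exposed through mlir-opt, so simple IR in the upstream dialects can be lowered to the spirv dialect in one step:

    ```mlir
    // Hypothetical input (add.mlir): a function in the func/arith dialects.
    // Something like `mlir-opt --convert-to-spirv add.mlir` would rewrite
    // these ops into their spirv-dialect equivalents.
    func.func @add(%a: i32, %b: i32) -> i32 {
      %0 = arith.addi %a, %b : i32
      func.return %0 : i32
    }
    ```

    Previously each dialect carried its own dedicated SPIR-V conversion pass; a single generic pass makes SPIR-V a more uniform target for the whole MLIR stack.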


  • #2
    One small step for mlir, one giant leap for compute-kind!

    However, what does this mean overall for using AMD's SHARK image generation with non-AMD GPUs? And for using MLIR directly, not only for Stable Diffusion workloads but for others as well?
    Last edited by Eirikr1848; 22 June 2024, 05:04 PM.

    Comment


    • #3
      Vulkan Compute go go go defeat the CUDA monopoly...
      Phantom circuit Sequence Reducer Dyslexia

      Comment


      • #4
        Originally posted by qarium View Post
        Vulkan Compute go go go defeat the CUDA monopoly...
        You think SPIR-V (= Vulkan + OpenCL + GL compute shaders + MLIR) will be a CUDA competitor?

        I'm very non-knowledgeable here, but it seems like ZLUDA is better positioned for all of this.

        Also: SPIR-V is not "on the fly" compute-shader disassembly and rebuilds; it seems to be very much pre-compiled shaders. (Or am I thinking of the spirv-cross project?)

        Regardless, MLIR seems to add a very versatile and important component to this lower level IR stack, but without... More... It's just not CUDA.

        Comment


        • #5
          Originally posted by Eirikr1848 View Post
          You think SPIR-V (= Vulkan + OpenCL + GL compute shaders + MLIR) will be a CUDA competitor?
          I'm very non-knowledgeable here, but it seems like ZLUDA is better positioned for all of this.
          Also: SPIR-V is not "on the fly" compute-shader disassembly and rebuilds; it seems to be very much pre-compiled shaders. (Or am I thinking of the spirv-cross project?)
          Regardless, MLIR seems to add a very versatile and important component to this lower level IR stack, but without... More... It's just not CUDA.
          ZLUDA is not a CUDA competitor because ZLUDA is, in fact, CUDA: it's a CUDA copy...

          Vulkan Compute SPIR-V is a CUDA competitor because it is NOT CUDA...

          Comment


          • #6
            This is pretty exciting. Right now you can do ML tasks on any GPU with WebGPU (at least using the Burn framework). I'm not sure how that contrasts with SPIR-V but I'm assuming SPIR-V is lower-level and would provide more control over kernels (plus I think right now WebGPU doesn't even support certain float types common in some ML models, although that's being remedied).
            I don't think anything is ever going to challenge Nvidia's dominance; Nvidia GPUs have basically become the Windows of the business world's ML side, and the enterprise market will probably be using them for the next couple of decades. But at least we finally get *any* way to run ML models on other GPUs, which has been unreasonably impossible for far too long.

            That's especially going to be critical on consumer devices, which is still a market ripe for the taking and relatively untouched by Nvidia (I'm counting phones and single-board computers in this category, not just desktops), although they are gearing up to make their own ARM chips there as well, so it's a bit of a race for dominance.

            Still, open standards will probably win, because developers are the ones making these models, and developers always win. That's the very reason Microsoft was so successful early on: MS always knew developers were where the money was. The difference is that back then developers didn't care about open ecosystems because they hadn't yet been bitten by proprietary offerings; today, they very much know better.

            Comment


            • #7
              Originally posted by Ironmask View Post
              This is pretty exciting. Right now you can do ML tasks on any GPU with WebGPU (at least using the Burn framework). I'm not sure how that contrasts with SPIR-V but I'm assuming SPIR-V is lower-level and would provide more control over kernels (plus I think right now WebGPU doesn't even support certain float types common in some ML models, although that's being remedied).
              SPIR-V is kind of a compile target for different shading languages. The WebGPU Shading Language (WGSL) will probably at some point be compiled to SPIR-V for use with its Vulkan backend.
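              As an illustration (the shader and tool invocation are my sketch, not from the post): a minimal WGSL compute shader, which a translator such as naga can lower to SPIR-V for consumption by a Vulkan driver:

              ```wgsl
              // Doubles each element of a storage buffer, one invocation per element.
              @group(0) @binding(0) var<storage, read_write> data: array<f32>;

              @compute @workgroup_size(64)
              fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
                  data[gid.x] = data[gid.x] * 2.0;
              }
              ```

              With the naga CLI this would be something like `naga shader.wgsl shader.spv`; Chrome's Tint compiler does the equivalent translation inside the browser.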


              Comment


              • #8
                Originally posted by qarium View Post
                Vulkan Compute go go go defeat the CUDA monopoly...
                Then RADV should just drop ACO and use LLVM to exploit this. :-D

                Comment


                • #9
                  Originally posted by zboszor View Post
                  Then RADV should just drop ACO and use LLVM to exploit this. :-D
                  not sure this is a sane comment. ACO beats LLVM in most benchmarks.

                  LLVM is only there because it takes a more general approach; this means that if performance is not the focus, LLVM is fine and will give you results with less effort.

                  they could port it to ACO and make it even faster.

                  Comment
