ZLUDA Has Been Seeing New Activity For CUDA On AMD GPUs

Phoronix: ZLUDA Has Been Seeing New Activity For CUDA On AMD GPUs

Back in February I wrote about AMD having quietly funded the effort for a drop-in CUDA implementation for AMD GPUs built atop the ROCm library. This was an incarnation of ZLUDA that originally began as a CUDA implementation for Intel GPUs using oneAPI Level Zero. While AMD discontinued funding ZLUDA development earlier this year, this CUDA implementation for AMD GPUs is continuing to see some new code activity...

• #2
  I'm also pretty excited to see where this goes. This is the sort of thing that could really put iGPUs from either Intel or AMD to much greater use. Some of the best use cases for CUDA in the desktop space only need the power of an iGPU.

• #3
  Any library that breaks lock-in is to be commended!

• #4
  How about a benchmark of ROCm vs ZLUDA in Blender?

• #5
  So AMD has its own specs, yet wants to implement NVIDIA's specs and do everything according to NVIDIA's development wishes? Money flows differently.

• #6
  Originally posted by elbar:
  So AMD has its own specs, yet wants to implement NVIDIA's specs and do everything according to NVIDIA's development wishes? Money flows differently.

  Breaking vendor lock-in is a pretty good way to kickstart adoption.

• #7
  I wonder if it's possible to funnel an NVIDIA card through ZLUDA to see how it compares to CUDA.

• #8
  Originally posted by schmidtbag:
  I'm also pretty excited to see where this goes. This is the sort of thing that could really put iGPUs from either Intel or AMD to much greater use. Some of the best use cases for CUDA in the desktop space only need the power of an iGPU.

  Not CUDA-compatible*, but another honorable mention is the Burn framework, written in Rust, which uses WGPU to run inference on any GPU. ZLUDA is good for existing CUDA codebases, but I feel like new models should look into targeting WGPU instead.

  (* It does have a CUDA backend alongside the WGPU and CPU ones; I just mean it's not a drop-in replacement for CUDA, it's its own ML framework.)
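  For anyone curious what "targeting WGPU" looks like in practice, here's a minimal sketch along the lines of Burn's own getting-started example. It assumes a recent Burn release with the wgpu feature enabled in Cargo.toml; exact module paths and method names can shift between Burn versions, so treat it as illustrative rather than definitive.

  use burn::backend::Wgpu;
  use burn::tensor::Tensor;

  // Type alias for the backend; swapping `Wgpu` for another Burn backend is
  // all it takes to retarget the same tensor code, which is the appeal over
  // a CUDA-only code path.
  type Backend = Wgpu;

  fn main() {
      let device = Default::default();

      // Two small tensors: one with explicit values, one filled with ones of
      // the same shape.
      let a = Tensor::<Backend, 2>::from_data([[2.0, 3.0], [4.0, 5.0]], &device);
      let b = Tensor::<Backend, 2>::ones_like(&a);

      // The element-wise addition is dispatched through WGPU, so it runs on
      // whatever GPU (and driver stack) wgpu can find on the machine.
      println!("{}", a + b);
  }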
  Last edited by Ironmask; 15 May 2024, 04:28 PM.

• #9
  Originally posted by gufide:
  How about a benchmark of ROCm vs ZLUDA in Blender?

  ZLUDA wins in Blender. The Blender devs are hardcore NVIDIA fanboys; ROCm is a second-class citizen to them.

  EDIT: Even though ZLUDA is itself built on ROCm, Blender's use of ROCm performs worse than its use of CUDA. So Blender's CUDA code path is faster, which means ZLUDA ends up faster too.
  Last edited by duby229; 15 May 2024, 04:58 PM.

• #10
  Originally posted by Mitch:
  I wonder if it's possible to funnel an NVIDIA card through ZLUDA to see how it compares to CUDA.

  ZLUDA is a ROCm implementation. Not unless NVIDIA develops a ROCm library.
