
AMD ROCm 6.2 Release Appears Imminent For Advancing Open-Source GPU Compute


    Phoronix: AMD ROCm 6.2 Release Appears Imminent For Advancing Open-Source GPU Compute

    We appear to be on the heels of the AMD ROCm 6.2 software release for advancing the open-source AMD Radeon/Instinct GPU compute stack with new features...


  • #2
    We were soft-promised (unofficially) support for my trusty old Navi1 GPU. I don't care what the wiki says, as long as it ships the gfx1010 Tensile library (that is the last piece needed to get PyTorch etc. running).



    • #3
      I wish to see Ubuntu 24.04 support and 7800 XT support: gfx1101 works for LLM inference but not for running TensorFlow and PyTorch. When the 7800 XT is finally there, I would also like to see all Navi2 and all Navi1 cards supported as well. And since I am a bad and demanding person, I would like all of this pretty much now, because if more months pass and Navi4 gets released, I know what argument to expect: "go buy our latest flagship". Oh no no, I'm not spending another dime on AMD until I first see CUDA-class support on a truly open-source stack.



      • #4
        Running text-generation-webui on a recent AMD processor is all I need from the AI world. Nvidia Linux drivers are pure evil.



        • #5
          Expand the range to at least all RDNA3 models onwards. Optimizations later.



          • #6
            Any way to install this with the Xanmod kernel?



            • #7
              Originally posted by djart
              I wish to see Ubuntu 24.04 support and 7800 XT support: gfx1101 works for LLM inference but not for running TensorFlow and PyTorch. When the 7800 XT is finally there, I would also like to see all Navi2 and all Navi1 cards supported as well. And since I am a bad and demanding person, I would like all of this pretty much now, because if more months pass and Navi4 gets released, I know what argument to expect: "go buy our latest flagship". Oh no no, I'm not spending another dime on AMD until I first see CUDA-class support on a truly open-source stack.
              Unlike Navi3x, all Navi2x GPUs can run the same code, and Navi 21 is a fully supported and very well tested ROCm target, so all Navi2 GPUs already work fine.
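              In practice, the way to make the other Navi2x cards use the Navi 21 (gfx1030) kernel binaries is the HSA_OVERRIDE_GFX_VERSION environment variable. A minimal sketch, assuming a ROCm build of PyTorch is already installed; this is a well-known workaround, not an officially supported configuration:

```shell
# Tell the ROCm runtime to report the GPU as gfx1030 (Navi 21), so libraries
# load the Navi 21 kernel binaries, which run across the RDNA2 family.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Verify that PyTorch now sees the device (requires the ROCm PyTorch build):
python3 -c "import torch; print(torch.cuda.is_available())"
```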



              • #8
                Originally posted by Mathias
                We were soft-promised (unofficially) support for my trusty old Navi1 GPU. I don't care what the wiki says, as long as it ships the gfx1010 Tensile library (that is the last piece needed to get PyTorch etc. running).
                The rocBLAS/Tensile git tree has supported gfx101x for a while now, but not all of the other ROCm libraries support gfx101x yet.
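                A quick way to check what a given ROCm install actually covers: rocminfo lists the ISA the runtime detects, and the rocBLAS library directory shows which gfx targets it ships Tensile kernels for. The /opt/rocm path below is the default install location and may differ on your distro:

```shell
# ISA target(s) the ROCm runtime detects for the installed GPU(s):
rocminfo | grep -i gfx

# gfx targets the installed rocBLAS build ships kernel libraries for
# (default install path; adjust if ROCm lives elsewhere):
ls /opt/rocm/lib/rocblas/library/ | grep -o 'gfx[0-9a-f]*' | sort -u
```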



                • #9
                  Originally posted by aerospace
                  Expand the range to at least all RDNA3 models onwards. Optimizations later.
                  All of RDNA3 has been supported since version 6.0; I think partial support was added in 5.6. So if you have an RDNA3 card (I do, and it works wonderfully), then something else is broken on your system, as this should just work out of the box. For reference, I can use Stable Diffusion and LLMs (ollama) just fine with my 7900 XT without installing any special repo forks at all.
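                  A sanity check for an RDNA3 setup like this, assuming PyTorch was installed from the ROCm wheel index rather than the default CUDA/CPU one (a common cause of "broken" setups):

```shell
# torch.version.hip is set only in ROCm builds of PyTorch; if this prints
# None, a CUDA/CPU wheel is installed and the GPU will never be picked up.
python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```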



                  • #10
                    I would like to see Folding@home support on modern Radeon graphics cards.
