AMD ROCm + PyTorch Now Supported With The Radeon RX 7900 XTX


  • AMD ROCm + PyTorch Now Supported With The Radeon RX 7900 XTX

    Phoronix: AMD ROCm + PyTorch Now Supported With The Radeon RX 7900 XTX

    While Friday's release of ROCm 5.7.1 hadn't mentioned any Radeon family GPU support besides the aging Radeon VII, it turns out AMD's newest open-source GPU compute stack is ready to go now with the Radeon RX 7900 XTX and is complete with working PyTorch support...
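For anyone wanting to verify their own setup after installing the ROCm build of PyTorch, a minimal sanity check (a sketch; it assumes a ROCm wheel, where `torch.cuda` is backed by HIP and `torch.version.hip` is a version string rather than None):

```python
import importlib.util

def rocm_pytorch_available():
    """Return True if a ROCm PyTorch build can see a GPU, False if it
    cannot, or None when torch isn't installed at all."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    # On ROCm builds torch.version.hip is set (it is None on CUDA builds),
    # and the HIP backend is exposed through the torch.cuda interface.
    return torch.version.hip is not None and torch.cuda.is_available()

print(rocm_pytorch_available())
```

On a working 7900 XTX setup this should print True; on a machine without a ROCm-enabled torch it prints False or None.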


  • #2
    Cool!
    But what about the other RDNA 3 GPUs, can compute tasks be performed on them?

    • #3
      I do hope that they meant Navi 31. If they seriously make it run on an XTX but not my XT, I'm gonna be mad.
      Fat chance that it wouldn't work on both though.

      • #4
        Originally posted by Danny3 View Post
        Cool!
        But what about the other RDNA 3 GPUs, can compute tasks be performed on them?
        Short answer: if you're lucky.

        Long answer: they do enable them to an extent, but you may have to set some flags, do some research, tinker with some things, and some workloads may work while others may not.
        The clear goal for AMD here is to enable people like George Hotz to run ML on RDNA 3 hardware's best compute option, and that's the XTX by a lot. The real problem with AI, by the way, is that it's less about your chip's compute performance than about your memory bandwidth. The XTX has a massive 960 GB/s of throughput, while the comparatively much cheaper and still good 7800 XT is at 624 GB/s. Still great, but AMD probably doesn't want people rushing to the midrange GPUs to get their AI running; they prefer their margins, and to be fair, a 250 W 7800 XT isn't quite as interesting as a 350 W XTX, I think.

        Edit: FYI, the monstrous 4090 has a throughput of "only" 1 TB/s, 1008 GB/s to be exact.
        Amazing of course, but also offering absolutely no valid argument for buying it at $1600 when you can get an XTX for $1000, if your goal is to run ML models that fit in a 24 GB VRAM buffer. Which, with LoRA, is already a lot of models...

        Edit 2: Also you can always run a Vulkan stack and run compute on any AMD or Nvidia GPU, of course. Nod.AI has been running their SHARK stuff on Vulkan for years. Works all the way back to Navi 10 I believe.
        Last edited by Mahboi; 16 October 2023, 02:48 PM.

        • #5
          Oh my god, yes! Finally! Now to figure out how to make everything work on Gentoo 😅

          • #6
            Originally posted by Mahboi View Post

            Short answer: if you're lucky.

            Long answer: they do enable them to an extent, but you may have to set some flags, do some research, tinker with some things, and some workloads may work while others may not.
            The clear goal for AMD here is to enable people like George Hotz to run ML on RDNA 3 hardware's best compute options. That's the XTX by a lot.
            Didn't geohot wanna write his own driver bypassing ROCm or something like that?

            • #7
              Originally posted by LtdJorge View Post
              Oh my god, yes! Finally! Now to figure out how to make everything work on Gentoo 😅
              That won't be the hard part fortunately.

              I wonder if OpenCL is working on RDNA3 APUs like Phoenix.
              ## VGA ##
              AMD: X1950XTX, HD3870, HD5870
              Intel: GMA45, HD3000 (Core i5 2500K)

              • #8
                Not news to me. It has been working with my 7900 XTX since ROCm 5.6.

                • #9
                   Is there an up-to-date installation guide for Debian? There are a few rocm-* packages in the official repository, but I never found an explanation of how to properly set them up.

                  • #10
                    Originally posted by LtdJorge View Post

                    Didn't geohot wanna write his own driver bypassing ROCm or something like that?
                     I'm pretty sure yes, but he also threw a big old childish tantrum about the ROCm spec being bad/buggy, so I expect his solution is some sort of hybrid.
