AMD Extends PyTorch + ROCm Support To The Radeon RX 7900 XT
My Vega64 runs perfectly fine with ROCm/HIP in Fedora 39 with PyTorch.
And you can still enable the Polaris cards too. Even though Vega went into legacy mode at AMD, they did not remove the support in ROCm or PyTorch.
Also remember it is open source: if something does not work right, you can fix it.
Most Polaris cards are out of range of any usefulness for ML/AI workloads because they only have 4GB of VRAM, but there are some 16GB models and of course the 8GB models.
If you do not use the AMD.com ROCm and instead use your distro's ROCm, there are three big groups: Debian, Arch, and Fedora. They do insane stuff in their versions of ROCm; some of them even enable Polaris by default.
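For anyone wondering whether their card is actually being picked up, here is a minimal sketch of how you might check PyTorch's ROCm/HIP visibility from Python. It assumes only that a ROCm build of PyTorch may or may not be installed (the `rocm_status` helper name is my own, not from any library); on ROCm builds, PyTorch exposes the GPU through the same `torch.cuda` API surface it uses for CUDA.

```python
# Hedged sketch: report whether PyTorch can see a ROCm (HIP) device.
# Falls back gracefully if torch is not installed at all.
try:
    import torch
except ImportError:
    torch = None

def rocm_status():
    """Return a short string describing PyTorch's ROCm/HIP visibility."""
    if torch is None:
        return "torch not installed"
    hip = getattr(torch.version, "hip", None)  # non-None only on ROCm builds
    if hip is None:
        return "torch installed, but not a ROCm (HIP) build"
    if torch.cuda.is_available():  # ROCm reuses the torch.cuda API surface
        return "ROCm %s: %s" % (hip, torch.cuda.get_device_name(0))
    return "ROCm build (%s) but no usable GPU detected" % hip

if __name__ == "__main__":
    print(rocm_status())
```

On officially unsupported consumer cards, people commonly export `HSA_OVERRIDE_GFX_VERSION` (for example `10.3.0` on RDNA2 parts) before launching Python to make the ROCm runtime treat the GPU as a supported target; whether that works for a given card and ROCm version is very much an unsupported, try-it-and-see situation.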
I don't buy an AMD graphics card and just hope everything works. And open source is no excuse for me having to fix AMD's problems.
I knew the moment I saw "Ubuntu 22.04.3" that Linux geeks would pull out pitchforks in the comments with the usual "bla bla bla Snaps bla bla bla my distro of choice" . *sigh* Talk about missing the point.
These are good steps by AMD to not be left behind in the AI race. However, it's not enough, but my guess is they are saving the big news for their AI event rather than a random blog post.
Good steps in the right direction, but more is needed. I own a 5700 XT and spent six months trying to get PyTorch to run on it (I did get it running, barely, with crashes, only with one specific version combo, and only on Ubuntu, not on Debian). Now I've finally given up, as ROCm is too much hassle, and rented a server with an NVidia card. No problems so far.
Honestly, I will still run and keep buying AMD GPUs, but GPU compute on the AMD side, especially on consumer cards today, is a shitshow. Yes, there is potential; yes, maybe specific models or enterprise cards are supported; but it's just so much easier and more hassle-free to go NVidia it's not funny...
Honest answer: Because today's hobby programmers experimenting with OpenCL/MP/CUDA are tomorrow's scientists writing software for HPC clusters and compute workstations. This is what Nvidia recognized and turned into the GPGPU juggernaut they are today and why CUDA has a near unassailable stranglehold on the GPGPU market. It's the same realization that Intel is using to target lower end GPGPUs at a value point hoping to win over that very same hobby and enthusiast crowd... and leverage them into the same horde of programmers that destroyed Big Iron and proprietary Unix. Will they be successful? *shrugs* We'll see. They still have to release a high end GPGPU to match the Nvidia H series to go with their DPUs if they want to ultimately be successful in the data centers.
If you look at the support matrix for ROCm on AMD's own website, they only mention the most expensive GPUs they produce as supported (not even the RX 7800*), and only a narrow few distributions or Windows. That is NOT how you win programmers. It's how you maintain existing enterprise customers, what few they have for GPGPU, but it's not how you encourage notoriously broke college students, who are tomorrow's front-line hackers, to take you seriously. Nvidia's and Intel's solutions both support their entire range of GPU hardware, with Intel's scaling across both their GPGPU and CPU lines. With Nvidia that's always been true of CUDA: the curious could always play with writing GPGPU code, whether they had a low-end card or the top-of-the-line latest-gen data center card. They're getting it right. AMD is bungling it, as they usually have, with half-assed GPGPU support officially only for their most expensive cards and without a broad range of OS support.
To have reliable cross generational GPGPU support for AMD the only real option right now is RustiCL and that's not even from AMD.
*nod* I jumped on a Cyber Monday deal for an RTX 3060 because the things that prompted me to upgrade off "GTX750 from 2014 on my Linux box and a hand-me-down high-end Radeon from 2009 in my gaming box" were CUDA-related. I'd have stayed on my 2014 GPU indefinitely otherwise.
At this rate, I'm likely to run an AMD CPU and an Intel GPU in my next build.