AMD Releases ROCm 5.7 GPU Compute Stack


  • vegabook
    replied
    Originally posted by yump View Post
    Don't judge. AMD's GPU compute customer is probably quite happy with this release.
    lol I see what you did there.

    Leave a comment:


  • yump
    replied
    Don't judge. AMD's GPU compute customer is probably quite happy with this release.

    Leave a comment:


  • geerge
    replied
Dropping the MI50? Are they off their rocker? I guess that means all of Vega is on the way out, and I take it RDNA is still in piss-poor shape. So the remaining cards that are well supported would be somewhere between bugger all and a trick of the light? I officially no longer care; they can't do a damn thing right.

    Leave a comment:


  • vegabook
    replied
Funny thing is, the best alternative to CUDA is turning out to be Metal. Yeah, yeah, it's closed, it's Apple, yada yada. But at this stage AMD's stack is _worse_ than closed: "open, but unusable for less than 10 grand". So basically open-source gaslighting. Frankly I prefer "closed but affordable", not to mention it pretty much works out of the box.

You can fairly painlessly get ML models working quite well on it, with all the major libraries supporting it (TensorFlow, PyTorch, even JAX). The Metal documentation is not bad at all. And thanks to unified memory, if you spend 3 grand or so on a 64 GB Mac Studio, the GPU gets the whole 64 GB of RAM, making things like full-fat Llama 70B available without all the dual-3090/4090 shenanigans and quantization. It will run slower, sure, about 4x slower by my calc, but it's perfectly usable and great for fairly advanced proof-of-concept work onsite, without having to spend a fortune on cloud instances.

    Not writing off Intel yet btw. Lots of potential but it's not here yet.
    Last edited by vegabook; 17 September 2023, 09:19 AM.
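The unified-memory claim above comes down to simple arithmetic. A minimal sketch (weights only, ignoring KV cache, activations, and runtime overhead; the precision values are illustrative) of what a 70B-parameter model needs:

```python
def model_memory_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight-only memory footprint in GiB.

    Ignores KV cache, activations, and framework overhead, so real
    usage will be somewhat higher.
    """
    return params_billion * 1e9 * bytes_per_param / 2**30

# Llama 70B weights at common precisions:
for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{name}: {model_memory_gib(70, bpp):.0f} GiB")
# fp16 → ~130 GiB, int8 → ~65 GiB, 4-bit → ~33 GiB
```

Whether a given model fits in 64 GB of unified memory comes down to exactly this arithmetic, which is why the quantization question keeps coming up in these threads.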

    Leave a comment:


  • andre30correia
    replied
My APU stopped working with this release. AMD should invest more in this CUDA alternative.

    Leave a comment:


  • JEBjames
    replied
    Michael

    Duplicated text...

    This block of text is repeated in the article.

    "AMD's release notes do note that ROCm 6.0 will be a breaking release that isn't backwards compatible with ROCm 5.x. ROCm 6.0 will split up the LLVM packages into more manageable sizes, there will be changes to the HIP Runtime API, rocRAND and hipRAND will be split into separate packages, and other fundamental changes."

    Leave a comment:


  • darkbasic
    replied
    Hopefully ROCm 6 will start working on non-x86 architectures as well: https://github.com/RadeonOpenCompute...ime/issues/158

    Leave a comment:


  • Lycanthropist
    replied
    PyTorch Nightly works fine here with ROCm 5.6 on a 7900 XTX. E.g. Stable Diffusion inference and training.
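For anyone wanting to reproduce this, ROCm nightly wheels come from PyTorch's nightly package index; the exact index URL below is an assumption based on PyTorch's `nightly/rocmX.Y` naming scheme:

```shell
# Assumed index URL following PyTorch's nightly/rocmX.Y convention:
PT_INDEX="https://download.pytorch.org/whl/nightly/rocm5.6"
pip3 install --pre torch torchvision --index-url "$PT_INDEX"
```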

    Leave a comment:


  • Zyten
    replied
I also feel quite disappointed in the GPU support in ROCm. I'll probably have access to an RX 7700 XT soon, and I'll test if and what parts of ROCm actually work, but with the current official support AMD is miles behind NVIDIA's offering.

I really hope that they'll catch up soon. I mean, some parts of ROCm seem to have supported RX 7000 for some time now. However, the documentation for that is extremely sparse, and most of the information has to be gathered from external sources.

    Leave a comment:


  • osw89
    replied
    Originally posted by david-nk View Post
I already bit the bullet and got an RTX 4090, despite at least the graphics side of AMD's drivers being vastly superior to Nvidia's, but no ML support is obviously a no-go.
ML support isn't missing; official support is. You can still use ROCm if you happen to have something that uses the same chip as one of the few officially supported workstation cards. I have been using ROCm with my RX 6800 for months and haven't really had any problems. Don't get me wrong, the situation is still atrocious, but not as bad as people seem to think. It works on ~15 cards instead of 3.
    Last edited by osw89; 16 September 2023, 09:02 PM.
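For cards that are close to, but not exactly, a supported chip, a commonly reported workaround is overriding the GFX version ROCm sees via the `HSA_OVERRIDE_GFX_VERSION` environment variable. The value below is a hypothetical example for an RDNA2 card in the gfx103x family; an RX 6800 (gfx1030) already matches the supported workstation chip and needs no override:

```shell
# Hypothetical workaround: report the card as gfx1030 (the chip used by
# officially supported RDNA2 workstation parts) so ROCm loads its kernels.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
# ROCm builds of PyTorch expose the GPU through the CUDA API:
python3 -c 'import torch; print(torch.cuda.is_available())'
```

Note that this only helps when the card's actual ISA is compatible with the spoofed one; mismatched families will still crash.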

    Leave a comment:
