AMD Preparing ROCm 6.1 For Release With New Features


  • #21
    Originally posted by NeoMorpheus View Post

    Check with Dear Leader Jensen why he didn't make CUDA open source instead of locking it to his hardware, since ZLUDA has proven that it's an artificial lock-in.
    Though hey, ChatGPT (Copilot) thinks that CUDA does not meet the new White House security guidelines for memory-safe languages… so maybe something new and open source can be built.

    • #22
      Originally posted by davide445 View Post
      I've been interested in ROCm as an alternative to NV+CUDA for a long time. In 2022 I got access to the ACP AMD internal cloud to test PyTorch scalability for a project on MI250X, and it worked flawlessly.

      I have searched around and found AI platform companies starting to adopt ROCm + AMD accelerators, and I'm in contact with them for exactly this reason.

      I didn't want to criticize anyone's personal views, just to add my opinion after *years* of reading people complain about the same topics on every ROCm release.

      AI margins are huge so far, partly because AI hardware is supply constrained. Why would a follower like AMD be willing to compromise its hard-earned margins by allocating resources to validate its AI software stack for low-margin hardware, such as past-generation or consumer GPUs? To push up demand for silicon and squeeze the already tight supply for the far more valuable professional/enterprise segment? That would just play into its competitors' hands; it would be suicidal for any company. Cash in now if you can, so you can invest later in volume-driven markets.

      Would I prefer being able to run a cheap RX 7800 flawlessly for experiments, and a W7800 Pro in production? Of course! But so far I have only tested on the one AMD GPU cloud I could find, and I'm asking those professional companies about development and production.

      Do I NEED a local GPU for ML/AI tech evaluation? I have an NV one. It's simply the reasonable choice, since there's less troubleshooting. If I want AMD hardware I can get it cheap and unsupported; with the right skills I'm sure it will work, even if with some limitations.

      AMD will be on par on the software side sooner or later, with the right strategy and execution. It has already progressed a lot.
      LOL!!!!!! Joking, right?

      I think AMD only cared about gaming (consoles in particular, and Windows desktop to some extent) and mostly about their processors, both consumer desktop and workstation/Threadripper. The software side for GPUs has been an afterthought except for gaming. But with the rise of AI, I think they are finally starting to take it seriously, at least as it pertains to AI/ML. The fanboys who shrug off the significance of AI are in a coma. AMD is trying to catch up, but they're far behind because they neglected this sphere for so long. It's pretty disappointing.
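
      (For reference, the kind of PyTorch-on-ROCm smoke test davide445 describes is easy to reproduce. A minimal sketch, assuming a ROCm build of PyTorch, which exposes AMD GPUs through the torch.cuda namespace; the tensor sizes are arbitrary:)

          # Sanity check: does PyTorch see a ROCm device, and can it run kernels?
          import torch

          # On ROCm builds torch.version.hip is set; on CUDA builds it is None.
          print("HIP runtime:", torch.version.hip)
          print("GPU available:", torch.cuda.is_available())

          if torch.cuda.is_available():
              print("Device:", torch.cuda.get_device_name(0))
              # Small half-precision matmul to confirm kernels actually execute.
              a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
              b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
              c = a @ b
              torch.cuda.synchronize()
              print("Matmul OK:", tuple(c.shape))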

      • #23
        Originally posted by Panix View Post

        LOL!!!!!! Joking, right?

        I think AMD only cared about gaming (consoles in particular, and Windows desktop to some extent) and mostly about their processors, both consumer desktop and workstation/Threadripper. The software side for GPUs has been an afterthought except for gaming. But with the rise of AI, I think they are finally starting to take it seriously, at least as it pertains to AI/ML. The fanboys who shrug off the significance of AI are in a coma. AMD is trying to catch up, but they're far behind because they neglected this sphere for so long. It's pretty disappointing.
        This.

        AMD probably thought it would be a nice idea, back in 2019, to split the architectures: create CDNA (with matrix cores) for compute and a crippled RDNA2 for gaming.
        But the AI boom must have surprised them a lot; now everyone wants to run Ollama and Stable Diffusion, and Radeon looked pretty poor in this regard.
        So they wired WMMA back into RDNA3, in panic mode, and are trying to show their GPUs in a good light, as AI capable.

        They did an absolutely fabulous thing with FSR (which is, again, for gaming, and without which they would already be dead in the water), but they need to rework their compute strategy fast.
        The problem is that the RTX 2000 series, not-so-expensive consumer cards with proper tensor cores and proper ray tracing cores, is almost six years old now. And according to the latest news, RDNA4 is going to be a lackluster generation. I guess they realized they need more time to completely rework RDNA.
        Last edited by sobrus; 02 March 2024, 12:16 PM.
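
        (An aside on the consumer-card situation described above: ROCm officially supports only a short list of GPUs, and a widely circulated, unofficial workaround for unsupported RDNA2/RDNA3 cards is the HSA_OVERRIDE_GFX_VERSION environment variable. A hedged sketch; the override values below are the ones commonly reported by the community, not AMD-documented guarantees for any particular card:)

            # Unofficial workaround often used on consumer Radeon cards:
            # spoof a supported gfx target before any ROCm library initializes.
            import os
            # Commonly reported values: "10.3.0" for RDNA2, "11.0.0" for RDNA3
            # (community lore, not official AMD documentation).
            os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

            import torch  # must be imported after the env var is set

            if torch.cuda.is_available():
                props = torch.cuda.get_device_properties(0)
                # gcnArchName is exposed on recent ROCm builds of PyTorch.
                print("Arch:", getattr(props, "gcnArchName", "unknown"))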
