AMD Details The MI300X & MI300A, Announces ROCm 6.0 Software


  • #31
    Originally posted by coder View Post
    Then you should be well aware that the math has to add up, in order for a decision to make business sense. Without any of the data, how you can believe you know what they should do is beyond me.


    Apparently not GPUs, then.


    She's probably referring to the way chip design has such a long lead time. From the point of initial development until first shipment to customers, modern CPUs and GPUs can take up to 4 years! You have to make predictions about costs, demand, and competition. That's what makes it a gamble. To be reasonably successful, you need to do a really good job of controlling and modelling as many factors as possible, as well as eliminating unnecessary risks.

    What she's definitely not saying is that they just make a blind guess.


    AMD certainly made some missteps on their path to GPU Compute & AI, but it's hard to say exactly what they should've done and when, without knowing what kinds of budgets they had to work with. You must be mindful of the fact that they were barely keeping the lights on, in the years just before & after Zen. They had also lost some talent, through layoffs and attrition. It takes time to build capacity.

    I disliked how AMD pivoted towards HIP, just as it seemed they were finally getting ROCm in shape. I'd have much rather seen them continue to stabilize ROCm, get it nicely packaged and integrated into more distros, ported to their entire hardware range, and get caught up on their OpenCL support. Those are my selfish wishes. I can't say that would've best positioned them for their AI or HPC objectives, however.


    Having just invested so much in HIP, I don't see that happening. In fact, I'm sure AMD would rather see HIP running on Intel hardware than oneAPI running on AMD hardware.
    Missteps? AMD is a major disaster within GPU Compute - HIP-RT is still 'experimental' in Blender. It's not working in Linux, afaik. They dropped the ball entirely in that sphere, and yes, they will invest in AI - many companies are doing so, though. They have no choice, and I bet most of their investments will go there rather than into other software areas like GPU Compute.



    • #32
      Originally posted by Panix View Post
      Missteps? AMD is a major disaster within GPU Compute - HIP-RT is still 'experimental' in Blender.
      Blender is an interesting case, because its developers are reportedly Nvidia fanboys who chronically neglected the OpenCL backend. HIP was basically the best AMD could do for a situation like that, without straight-up cloning CUDA, which they probably worried would make them vulnerable to copyright infringement.

      I'm not denying that AMD's GPU Compute situation is bad. The only point of contention is exactly what they should've done, instead. From the outside, I think it's impossible to know the exact dimensions of the solution space they were working within. That's not to say there was nothing they could've done better, but I just don't think we're well positioned to say exactly what they realistically could've done better or differently.

      I'm not making excuses or saying you shouldn't complain. It's just the "Monday morning quarterbacking" that bothers me.



      • #33
        Originally posted by coder View Post
        Blender is an interesting case, because its developers are reportedly Nvidia fanboys who chronically neglected the OpenCL backend. HIP was basically the best AMD could do for a situation like that, without straight-up cloning CUDA, which they probably worried would make them vulnerable to copyright infringement.

        I'm not denying that AMD's GPU Compute situation is bad. The only point of contention is exactly what they should've done, instead. From the outside, I think it's impossible to know the exact dimensions of the solution space they were working within. That's not to say there was nothing they could've done better, but I just don't think we're well positioned to say exactly what they realistically could've done better or differently.

        I'm not making excuses or saying you shouldn't complain. It's just the "Monday morning quarterbacking" that bothers me.
        I totally understand they couldn't clone or copy CUDA. But they have had the OpenCL 'problem' for years, apparently. I've read some decent 'rants' on the OpenCL/AMD thing, but I just can't remember all of it. All I know is that it goes back many years - at least 5 or more.


        Hello, did AMD remove OpenCL for Hawaii GPUs? Wikipedia says it supports OpenCL 2.0+. I can remember that I used it at some point. Now I wanted to use OpenCL for Adobe Premiere Pro again and was very disappointed. NO OPENCL, WTF AMD? How do I get those OpenCL drivers? AMD APP SDK? Gone. Newest drive...


        I understand it's been problematic or 'broken' for years and the investment/priority has been extremely lacking - and the lack of focus on addressing the problem has shown in Blender and other areas that require compute (and the OpenCL backend). They switched to HIP and that looks like a similar situation - perhaps a little bit of progress, but REALLY slow, with mediocre performance when using it and anything related, e.g. the ray tracing library.
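
        For reference, checking whether any OpenCL driver (ICD) is installed at all only takes a few lines - when the vendor's compute stack is missing, the platform list simply comes back empty. A minimal sketch, assuming the OpenCL headers and ICD loader are present (the file name is just an example; build with something like g++ clcheck.cpp -lOpenCL):

        // List the installed OpenCL platforms (drivers). An empty list means no
        // usable OpenCL driver is installed - the "NO OPENCL" situation above.
        #include <CL/cl.h>
        #include <cstdio>
        #include <vector>

        int main() {
            cl_uint count = 0;
            if (clGetPlatformIDs(0, nullptr, &count) != CL_SUCCESS || count == 0) {
                std::printf("No OpenCL platforms found - no usable OpenCL driver.\n");
                return 1;
            }
            std::vector<cl_platform_id> platforms(count);
            clGetPlatformIDs(count, platforms.data(), nullptr);
            for (cl_platform_id p : platforms) {
                char name[256] = {0};
                char version[256] = {0};
                clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);
                clGetPlatformInfo(p, CL_PLATFORM_VERSION, sizeof(version), version, nullptr);
                std::printf("%s (%s)\n", name, version);
            }
            return 0;
        }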



        • #34
          Originally posted by Panix View Post
          I totally understand they couldn't clone or copy CUDA.
          Let's be clear, though: HIP is basically a CUDA clone. They just changed all of the names, to avoid getting slapped with copyright infringement. However, that + being slightly out-of-sync with CUDA, is enough to force a separate Blender backend for HIP. And that translates into more maintenance burden that I'm guessing falls squarely on AMD, thus denying them some key benefits you'd want from making a CUDA clone.

          AMD claims you can write HIP code and run it on Nvidia hardware, though I'm sure more bugs, less performance, and fewer features are available via that route than native CUDA code. I'm guessing HIP's Nvidia support exists only as a way to claim you're not locked-in, with HIP, the way you are with CUDA. In practice, I don't really foresee anyone switching over CUDA codebases to use HIP, instead.
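
          Just to make the "clone" point concrete, here's a minimal sketch of what HIP code looks like (assuming the ROCm/HIP toolchain; build with hipcc). Squint and it's CUDA with the prefixes swapped - and the same source is what gets compiled through the CUDA backend for Nvidia hardware:

          // Minimal HIP example: the API mirrors CUDA almost 1:1
          // (hipMalloc vs cudaMalloc, hipMemcpy vs cudaMemcpy, etc.).
          #include <hip/hip_runtime.h>
          #include <cstdio>

          __global__ void add_one(float* data, int n) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) data[i] += 1.0f;
          }

          int main() {
              const int n = 1024;
              float host[n];
              for (int i = 0; i < n; ++i) host[i] = float(i);

              float* dev = nullptr;
              hipMalloc(&dev, n * sizeof(float));                  // cf. cudaMalloc
              hipMemcpy(dev, host, n * sizeof(float), hipMemcpyHostToDevice);

              add_one<<<n / 256, 256>>>(dev, n);                   // same launch syntax as CUDA
              hipDeviceSynchronize();

              hipMemcpy(host, dev, n * sizeof(float), hipMemcpyDeviceToHost);
              hipFree(dev);

              std::printf("host[42] = %f\n", host[42]);            // expect 43.0
              return 0;
          }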



          • #35
            Originally posted by coder View Post
            Let's be clear, though: HIP is basically a CUDA clone. They just changed all of the names, to avoid getting slapped with copyright infringement. However, that + being slightly out-of-sync with CUDA, is enough to force a separate Blender backend for HIP. And that translates into more maintenance burden that I'm guessing falls squarely on AMD, thus denying them some key benefits you'd want from making a CUDA clone.

            AMD claims you can write HIP code and run it on Nvidia hardware, though I'm sure more bugs, less performance, and fewer features are available via that route than native CUDA code. I'm guessing HIP's Nvidia support exists only as a way to claim you're not locked-in, with HIP, the way you are with CUDA. In practice, I don't really foresee anyone switching over CUDA codebases to use HIP, instead.
            I've heard of that before - the response was that it's not happening, since CUDA is the superior tech - it's been in development for how many years now?

            My point before is that HIP is still a WIP and they still haven't fixed RT, aka ray tracing acceleration, in Blender - which would be the ONLY reason to even consider an AMD card for that. Nvidia is evil, but most people still use it for these fields - GPU compute, Blender, ML/AI, etc. The only reasons people are buying AMD cards right now are that the price is a bit lower and, in the Linux world, gaming - since there are (supposedly) fewer issues to deal with. But it's my impression that it's mostly people who don't do much with their PC, or haven't done much in Linux other than gaming - I guess the community here has a lot of those?

            Because when you try to do more than just gaming in Linux, you run into stress and problems. Getting ROCm configured with HIP is often a hassle, and it's my understanding that OpenGL is fully covered by open-source components but OpenCL is NOT - which is where AMD users get into trouble, trying to 'add' proprietary components into the FOSS side of their AMD software/files.... but correct me if I'm mistaken there? I don't think so....

            There are a lot of examples of people who wanted to use AMD hardware but had to switch to evil Nvidia hardware - even if it meant spending a lot more, either on a specific GPU that might have less VRAM, or SPENDING A LOT MORE because they need the VRAM and only Nvidia's expensive cards have it. They did this because the AMD ROCm/HIP components they have to use are a nightmare to install and configure, since a good portion of it is not included in the free/open-source ecosystem. Linux gamers who don't use these programs have no idea or don't care..... Heck, AMD doesn't care, either.
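
            To be fair, once ROCm is installed, checking whether the HIP runtime actually sees your GPU is only a few lines - the hassle is getting to the point where this prints anything, on a card AMD officially supports. A minimal sketch (assumes the HIP runtime headers; build with hipcc):

            // Quick sanity check that the ROCm/HIP runtime sees a GPU at all -
            // the first thing to try when a ROCm install "doesn't work".
            #include <hip/hip_runtime.h>
            #include <cstdio>

            int main() {
                int count = 0;
                hipError_t err = hipGetDeviceCount(&count);
                if (err != hipSuccess) {
                    std::printf("HIP runtime error: %s\n", hipGetErrorString(err));
                    return 1;
                }
                if (count == 0) {
                    std::printf("No HIP devices found.\n");
                    return 1;
                }
                for (int i = 0; i < count; ++i) {
                    hipDeviceProp_t prop;
                    hipGetDeviceProperties(&prop, i);
                    // gcnArchName is the gfx target (e.g. gfx1030) that roughly
                    // determines whether ROCm officially supports the card.
                    std::printf("Device %d: %s (%s), %.1f GiB VRAM\n",
                                i, prop.name, prop.gcnArchName,
                                prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
                }
                return 0;
            }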



            • #36
              Panix coder There was a great interview with an AI researcher on Moore's Law is Dead today touching on this very subject with regard to AMD/Nvidia/Intel and their compute / AI offerings.

              tl;dr: Some well-known problems with ROCm need to be fixed in order to get more traction (e.g. hardware/capability fragmentation on Linux/Windows; usability problems that Panix also mentioned); also, Ryzen AI is Windows-only for now and not available on Linux (AMD needs more complete support for their features to make it appealing for developers to use them). The interview also goes into more detail about the other companies.



              • #37
                Originally posted by ms178 View Post
                Panix coder There was a great interview with an AI researcher on Moore's Law is Dead today touching on this very subject with regard to AMD/Nvidia/Intel and their compute / AI offerings.

                tl;dr: Some well-known problems with ROCm need to be fixed in order to get more traction (e.g. hardware/capability fragmentation on Linux/Windows; usability problems that Panix also mentioned); also, Ryzen AI is Windows-only for now and not available on Linux (AMD needs more complete support for their features to make it appealing for developers to use them). The interview also goes into more detail about the other companies.
                Thanks, looks really interesting, ms178. Pretty long video, though. :-) I just want to mention (briefly) that I think, ultimately, AI is a bad thing. It's already big but will be huge - and I think we will look back at it and say, 'damn...' :-(

                Also, I'd really prefer to get an AMD GPU for Linux, but for the software I wish to use - video editing, 3D modeling/GPU compute - it's apparently not a good match. AI/ML is something I want to look at, but the first two are what I'll concentrate on the most. Gaming is something I dabble in, and I already know it's sufficient with AMD. Or so I think.

                But examining all this, I find it difficult to seriously pick an AMD GPU over an Nvidia one.... the video should be helpful and informative.



                • #38
                  Originally posted by Panix View Post

                  Thanks, looks really interesting, ms178. Pretty long video, though. :-) I just want to mention (briefly) that I think, ultimately, AI is a bad thing. It's already big but will be huge - and I think we will look back at it and say, 'damn...' :-(
                  Yeah, you can run it as a podcast in the background, though. I found the unique perspective quite informative.

                  Originally posted by Panix View Post
                  Also, I'd really prefer to get an AMD GPU for Linux, but for the software I wish to use - video editing, 3D modeling/GPU compute - it's apparently not a good match. AI/ML is something I want to look at, but the first two are what I'll concentrate on the most. Gaming is something I dabble in, and I already know it's sufficient with AMD. Or so I think.

                  But examining all this, I find it difficult to seriously pick an AMD GPU over an Nvidia one.... the video should be helpful and informative.
                  It is perfectly valid to look at what works best for your specific workloads today; there might be no alternative to Nvidia in your case if you need to purchase within this generation. It is a terrible market situation to be in, as Nvidia milks all these people, but they are the undisputed leader in specific areas that might be important to you. I am not following these workloads too closely myself, though. AMD still has a lot of catching up to do on the software side, which needs much more time, unfortunately. They also need to get buy-in from the software vendors, which might be harder if their target audience is running on Nvidia anyway. Intel might be a good contender long-term, but their first GPU generation is still a beta product and also needs much more maturing.



                  • #39
                    Originally posted by ms178 View Post

                    Yeah, you can run it as a podcast in the background, though. I found the unique perspective quite informative.

                    It is perfectly valid to look at what works best for your specific workloads today; there might be no alternative to Nvidia in your case if you need to purchase within this generation. It is a terrible market situation to be in, as Nvidia milks all these people, but they are the undisputed leader in specific areas that might be important to you. I am not following these workloads too closely myself, though. AMD still has a lot of catching up to do on the software side, which needs much more time, unfortunately. They also need to get buy-in from the software vendors, which might be harder if their target audience is running on Nvidia anyway. Intel might be a good contender long-term, but their first GPU generation is still a beta product and also needs much more maturing.
                    I'll probably watch some of it when I have time, today.

                    Yes, it's a terrible market situation in general, but especially regarding Nvidia hardware these days. I did buy one card but sold it, then bought another used GPU and sold that one as well - the latest was a 3080 10GB, and I want more VRAM. At the moment I'm borrowing an older card, a 1660 Ti with only 6GB, until I save enough for a higher-tier card. So I plan on seeing what experience I get with Fedora, Ubuntu, Debian, openSUSE, etc. - and on Wayland. That should be fun?!?

                    I'm not in a rush, but I am worried about the availability/price of GPUs in the near term - with AI taking off and all sorts of market surprises (see China). The last time I 'waited', crypto craziness happened. I think Intel is still a ways off from really becoming a viable(?) alternative to the other big two. At least, that's my impression so far.

                    For now, I'm keeping an eye on how the 7900 series shapes up as a potential option in these specific areas - or should I just hunt for a used 3090 or something?



                    • #40
                      Originally posted by Panix View Post
                      For now, I'm keeping an eye on how the 7900 series shapes up as a potential option in these specific areas - or should I just hunt for a used 3090 or something?
                      FWIW, I've been reading that China seems to be exhibiting strong demand for both. Yes, even used RTX 3090s, which they're remanufacturing into 2-slot cards with a blower-style cooler and probably double the RAM.

