AMD ROCm 6.0 Now Available To Download With MI300 Support, PyTorch FP8 & More AI

  • #11
    Originally posted by Space Beer View Post
But "the issue" is, those are officially supported only on Windows, the choice of 90+% of PC users.
    Not 90% of deep learning or HPC developers, and that's what counts.



    • #12
In a professional environment, you can expect a professional setup (Pro GPU, enterprise OS, local support). At home, you are on your own.

Also, CUDA was used for GPGPU programming and GPU acceleration in already existing programs (photo/video editing, rendering, CFD, compression, etc.), not just ML. We need ROCm for such programs too, and for people who are still in the learning phase, etc.

With WSL(2), Microsoft made a big step toward a Linux development environment, so you can have both in one OS. No need for dual-boot and/or multiple systems.
      Last edited by Space Beer; 16 December 2023, 04:18 AM.



      • #13
        Very nice Christmas present, AMD!
I hope ROCm 6 makes it to the PyTorch Nightly repository soon. It is still at 5.7 at the moment: https://pytorch.org/get-started/locally/



        • #14
          Originally posted by Lycanthropist View Post
          Very nice Christmas present, AMD!
          I hope ROCm 6 makes it to the PyTorch Nightly repository soon. It is still at 5.7 at the moment:
          https://pytorch.org/get-started/locally/
While it can bring you some headaches, grab the 6.0-complete image from https://hub.docker.com/r/rocm/dev-ubuntu-22.04/tags and build https://github.com/ROCmSoftwarePlatform/pytorch yourself (there are some interesting branches, hint ;-)
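The steps the post hints at might look something like the sketch below. The image tag, branch layout, and build commands are assumptions pieced together from the post and PyTorch's usual ROCm build flow (the `build_amd.py` hipify step), so check the Docker Hub tags and the repository's branches before relying on them:

```shell
# Sketch only: verify the image tag and build steps against Docker Hub
# and the ROCmSoftwarePlatform/pytorch repository before use.
if command -v docker >/dev/null 2>&1; then
    # Pull the ROCm 6.0 development image mentioned in the post
    docker pull rocm/dev-ubuntu-22.04:6.0-complete

    # Build PyTorch inside a container with GPU device access
    docker run --rm --device=/dev/kfd --device=/dev/dri \
        --security-opt seccomp=unconfined \
        rocm/dev-ubuntu-22.04:6.0-complete bash -c '
            git clone --recursive https://github.com/ROCmSoftwarePlatform/pytorch &&
            cd pytorch &&
            python3 tools/amd_build/build_amd.py &&
            python3 setup.py develop
        '
    build_status="attempted"
else
    build_status="docker not available"
fi
echo "$build_status"
```

The `--device=/dev/kfd --device=/dev/dri` flags are what expose the AMD GPU to the container; without them the build still works but the resulting PyTorch cannot see the card.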

          Cheers!



          • #15
            Does OpenCL work with Ryzen 9 PRO 7940HS APUs (Phoenix)?
            ## VGA ##
            AMD: X1950XTX, HD3870, HD5870
            Intel: GMA45, HD3000 (Core i5 2500K)



            • #16
I just built llama.cpp with ROCm 6.0 and it runs just fine on my 7900 XT. I think they forgot to update the cards list. RDNA2 cards, I guess, are still supported too.
What I didn't see was the miraculous 2.6x LLM speed improvement that was shown a few days ago at the AI event.
llama-bench gave me the same values when I compiled and tested it with ROCm 5.7.2, 5.7.3 and 6.0.
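For anyone wanting to reproduce this, the build the poster describes might look like the sketch below. `LLAMA_HIPBLAS=1` was llama.cpp's documented ROCm build flag at the time; the model path is a placeholder, not a real file:

```shell
# Sketch of a llama.cpp ROCm build; requires a working ROCm install (hipcc).
if command -v hipcc >/dev/null 2>&1; then
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    # Enable the hipBLAS (ROCm) backend
    make LLAMA_HIPBLAS=1
    # Benchmark a model to compare tokens/s across ROCm versions
    ./llama-bench -m ./models/model.gguf
    bench_status="ran"
else
    bench_status="hipcc not available"
fi
echo "$bench_status"
```

Running the same `llama-bench` invocation under each ROCm toolchain is exactly the comparison the poster made; identical tokens/s numbers across 5.7.x and 6.0 would indicate the 2.6x claim doesn't apply to this workload.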
              Last edited by bog_dan_ro; 16 December 2023, 12:30 PM.



              • #17
                Originally posted by darkbasic View Post
                Does OpenCL work with Ryzen 9 PRO 7940HS APUs (Phoenix)?
It should at least work with Rusticl, should it not?
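For what it's worth, Mesa's Rusticl is opted in per driver via the `RUSTICL_ENABLE` environment variable, and `radeonsi` is the Mesa driver covering Phoenix APUs. A quick probe (assuming `clinfo` is installed) would be:

```shell
# Check whether the APU shows up as a Rusticl OpenCL device.
if command -v clinfo >/dev/null 2>&1; then
    # Opt the radeonsi driver into Rusticl for this query only
    RUSTICL_ENABLE=radeonsi clinfo | grep -iE "platform name|device name" || true
    ocl_status="queried"
else
    ocl_status="clinfo not installed"
fi
echo "$ocl_status"
```

If the device appears under a "rusticl" platform, OpenCL workloads can run through Mesa even where AMD's own ROCm OpenCL stack doesn't list the APU.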



                • #18
                  Originally posted by superm1 View Post
Something I want to mention is that this is just the official support stance. It's not necessarily what works. There is nothing in the software stack to explicitly exclude any GPU.
I wouldn't say there's nothing explicitly excluding any GPUs. Having just built a full stack for Vega10 and Raven Ridge, there are a couple of things which come to mind:
                  • rocFFT filters out support at build-time for Vega10 and Polaris.
                  • Composable_Kernel uses inline instructions only introduced in gfx906 without fallbacks. I've actually written fallbacks for this and will create a PR when I get around to it.

Other than that, Vega10 (RX Vega64) works absolutely fine. Better than ever, actually. I haven't managed to get Raven Ridge working yet, though; I think it's a problem with the kernel driver.

                  For example I can run ROCm related stuff on a mobile 7700S even though it's not in that list.
Think of it more like "This is what AMD actively tests on, and if you have problems they'll be willing to help with them."
The problem is a communications failure from AMD. Support, to AMD, means active, dedicated developer support for specific products sold to customers: "You buy these products and we'll make sure they work for you, and we'll actively work to solve any issues you may have." It has nothing to do with what most people *think* it means: "Code is in place to work with *these GPUs*, otherwise you're out of luck." That's why people get so annoyed and indignant. "Supported" as a term is just too ambiguous: what's supported? The hardware? The software? Certain cards, or families? No, what AMD means is *customers*: you will get "support" if you own these products!
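On the "not in the list but works" point: the usual lever for unlisted consumer cards is the `HSA_OVERRIDE_GFX_VERSION` environment variable, which makes the ROCm runtime treat the GPU as a supported ISA. A sketch (11.0.0 maps to gfx1100/Navi 31 and is the value commonly used for RDNA3 parts like the mobile 7700S, but whether it suits a given card is an assumption to verify):

```shell
# Probe the GPU's ISA with an override applied; harmless read-only query.
if command -v rocminfo >/dev/null 2>&1; then
    # Pretend the card is gfx1100 for this invocation only
    HSA_OVERRIDE_GFX_VERSION=11.0.0 rocminfo | grep -i "gfx" || true
    probe_status="queried"
else
    probe_status="rocminfo not available"
fi
echo "$probe_status"
```

This is exactly the gap between "supported" and "works": the override is unsupported in AMD's sense, yet it is what lets many unlisted cards run the stack.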
                  Last edited by s_j_newbury; 16 December 2023, 05:08 PM.



                  • #19
                    Originally posted by s_j_newbury View Post
I wouldn't say there's nothing explicitly excluding any GPUs. Having just built a full stack for Vega10 and Raven Ridge, there are a couple of things which come to mind:
                    • rocFFT filters out support at build-time for Vega10 and Polaris.
                    • Composable_Kernel uses inline instructions only introduced in gfx906 without fallbacks. I've actually written fallbacks for this and will create a PR when I get around to it.

Other than that, Vega10 (RX Vega64) works absolutely fine. Better than ever, actually. I haven't managed to get Raven Ridge working yet, though; I think it's a problem with the kernel driver.
                    Thanks for pointing these two out; I wasn't aware of them.


                    Originally posted by s_j_newbury View Post
The problem is a communications failure from AMD. Support, to AMD, means active, dedicated developer support for specific products sold to customers: "You buy these products and we'll make sure they work for you, and we'll actively work to solve any issues you may have." It has nothing to do with what most people *think* it means: "Code is in place to work with *these GPUs*, otherwise you're out of luck." That's why people get so annoyed and indignant. "Supported" as a term is just too ambiguous: what's supported? The hardware? The software? Certain cards, or families? No, what AMD means is *customers*: you will get "support" if you own these products!
You're completely right; it's poor messaging, and it's causing the confusion.

                    As mentioned earlier in the thread by bridgman:
                    Yep... I'm trying to get two changes implemented:

                    #1 - distinguish between "not tested" and "not supported in the code" as everyone has suggested

                    #2 - for "not tested" parts do some kind of periodic testing so every part at least gets covered once during a release cycle even if not final QA

There is other work already going on to increase the breadth of supported hardware - the points above are just for chips/boards that still don't fit into the "tested at all points in the development cycle including final QA" coverage that we require to call something supported.



                    • #20
                      Originally posted by ms178 View Post

                      Dropping Vega support is also a big regression in that regard, IMHO.
A lot of APUs are going to be excluded by this.

