AMD Talks Up Open-Source Software For AI, Introduces Instinct MI300X


  • #21
    Originally posted by luno View Post

    If you use Nix, it has ROCm and HIP packages.
    This isn't about Arch or whatever distro already has packages. It has to be easy to install for most people, support most hardware (at least RDNA1+, including laptops), and most software has to support it. I knew what I bought, but it is still embarrassing when friends tell me about the latest amazing AI upscaling, NeRF, or LLM they tried, or about their Blender performance on their laptop, and the only thing I can answer is, "but I preferred open-source drivers".

    The situation is kinda:

    AMD: good gaming performance (excluding RT), good out-of-the-box gaming experience on Linux, compute is "DIY, best effort"
    Nvidia: good to amazing gaming performance, every compute workload is supported, needs manual driver installation on Linux

    *edit* I have some hope this will finally reach an acceptable level by the end of the year.

    Comment


    • #22
      Originally posted by Mathias View Post

      This isn't about Arch or whatever distro already has packages. It has to be easy to install for most people, support most hardware (at least RDNA1+, including laptops), and most software has to support it. I knew what I bought, but it is still embarrassing when friends tell me about the latest amazing AI upscaling, NeRF, or LLM they tried, or about their Blender performance on their laptop, and the only thing I can answer is, "but I preferred open-source drivers".

      The situation is kinda:

      AMD: good gaming performance (excluding RT), good out-of-the-box gaming experience on Linux, compute is "DIY, best effort"
      Nvidia: good to amazing gaming performance, every compute workload is supported, needs manual driver installation on Linux

      *edit* I have some hope this will finally reach an acceptable level by the end of the year.
      ROCm provides install support for Ubuntu, Red Hat, and SUSE (binary install via the package manager):


      Fedora has the ROCm RPMs listed in its packages (e.g. rocm-runtime: ROCm Runtime Library, in the Fedora package repositories).
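
      As a quick sanity check, the availability of those binary packages can be queried straight from the package manager. A sketch -- the exact package names are examples and vary between distros and ROCm releases:

      ```shell
      # Debian/Ubuntu: list ROCm-related packages known to apt
      apt-cache search rocm

      # Fedora: search the repositories for ROCm packages
      dnf search rocm

      # Show details for one package (name is distro-specific)
      dnf info rocm-runtime
      ```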


      Yes, AMD currently does not support laptop GPUs with ROCm, but in reality, if you look at AI, a mid-range laptop GPU is not that much faster than its CPU, and all AI Python libraries run fine on CPU.

      One element that makes AMD far superior to NVIDIA on Linux is its integration with the full stack:
      * Kernel (amdgpu)
      * drivers (Mesa RadeonSI and RADV, ...)
      * Wayland
      * ....

      and ease of support (current and future) within the stack -- here NVIDIA is far out of the game.

      Comment


      • #23
        Originally posted by clementhk View Post

        PyTorch 2.0 with Triton doesn't need custom PyTorch anymore.
        The rocm-hip-setup.exe that Mathias mentioned is still missing. At the very least, it's not clear among the stable/nightly download options on the website.
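
        For reference, PyTorch publishes its ROCm builds on its own wheel index rather than PyPI's default, so once you know the index URL the install is a one-liner. The ROCm version tags below are assumptions (they change with each release); check the install selector on pytorch.org for the current ones:

        ```shell
        # Install a ROCm build of PyTorch from the official wheel index.
        # The rocm5.4.2 tag is an example -- match it to the version shown
        # in the pytorch.org install selector.
        pip3 install torch --index-url https://download.pytorch.org/whl/rocm5.4.2

        # Nightly builds use the nightly index instead:
        pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/rocm5.5
        ```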

        Comment


        • #24
          Originally posted by Grinness View Post
          Yes, AMD currently does not support laptop GPUs with ROCm, but in reality, if you look at AI, a mid-range laptop GPU is not that much faster than its CPU, and all AI Python libraries run fine on CPU.
          I don't know how that translates to AI performance, but my laptop has 1.2 TFLOPS single-precision/int and 2.4 TFLOPS half-precision GPU capability (clpeak). POCL clocks in at 77 GFLOPS single and up to 270 GFLOPS int. If I multiply base clock x cores x 256 (AVX2) / 32 (single) x 2 (FMA), I get 316 GFLOPS. That is still 4x real-world GPU vs. theoretical-max CPU. If my tests are not wrong, the GPU also uses a little less power (20.7 W vs. 18.6 W, ~5-6 W idle). Maybe I also got some calculations wrong.
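
          That back-of-the-envelope CPU number can be reproduced in a few lines. The 3.3 GHz base clock and 6 cores below are my assumptions for a Ryzen 5 6600HS-class part (the post doesn't state them); with those inputs the formula lands on roughly the quoted 316 GFLOPS:

          ```python
          def peak_gflops(base_clock_ghz: float, cores: int,
                          vector_bits: int = 256, element_bits: int = 32,
                          fma: bool = True) -> float:
              """Theoretical peak GFLOPS: clock x cores x SIMD lanes x (2 for FMA)."""
              lanes = vector_bits // element_bits          # 8 fp32 lanes per AVX2 op
              ops_per_cycle = lanes * (2 if fma else 1)    # an FMA counts as 2 FLOPs
              return base_clock_ghz * cores * ops_per_cycle

          # Assumed specs for a Ryzen 5 6600HS-class CPU: 6 cores at 3.3 GHz base
          print(peak_gflops(3.3, 6))  # ~316.8 GFLOPS
          ```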

          Years ago (~6?) I tried stuff on my Intel and the iGPU was faster for some things. Some things didn't even run on the CPU. And still some things only work with CUDA. So if I can run something as fast as the CPU could, but can't because it is GPU-only, I still prefer the GPU if that allows me to run it at all.

          Comment


          • #25
            Originally posted by Mathias View Post

            I don't know how that translates to AI performance, but my laptop has 1.2 TFLOPS single-precision/int and 2.4 TFLOPS half-precision GPU capability (clpeak). POCL clocks in at 77 GFLOPS single and up to 270 GFLOPS int. If I multiply base clock x cores x 256 (AVX2) / 32 (single) x 2 (FMA), I get 316 GFLOPS. That is still 4x real-world GPU vs. theoretical-max CPU. If my tests are not wrong, the GPU also uses a little less power (20.7 W vs. 18.6 W, ~5-6 W idle). Maybe I also got some calculations wrong.

            Years ago (~6?) I tried stuff on my Intel and the iGPU was faster for some things. Some things didn't even run on the CPU. And still some things only work with CUDA. So if I can run something as fast as the CPU could, but can't because it is GPU-only, I still prefer the GPU if that allows me to run it at all.
            How much memory does your GPU have (and how much does the CPU have -- i.e., system RAM)?

            4x is good (albeit only nominal), but depending on GPU RAM you will be limited in the scale of algorithms that you can run.

            In reality, on a laptop GPU you can only run toy deep-learning examples (unless you have 8 GB+ -- and some people would still consider 8 GB good only for toy apps).
            Note also that the GPU-vs-CPU performance difference matters mostly in training; for inference the CPU is fast enough (unless you need real-time or better).

            Edit: Note that in the above I refer to deep-learning/ANN algorithms. For traditional machine learning (e.g. random forests, kNN, PCA, SVD, GMM, etc.) the CPU and Python + scikit-learn are great for learning and understanding the basic concepts (maths/statistics) that also apply to their bigger brothers.
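
            As a taste of that CPU-only workflow -- a minimal scikit-learn sketch, training a random forest on the bundled iris toy dataset (no GPU involved):

            ```python
            from sklearn.datasets import load_iris
            from sklearn.ensemble import RandomForestClassifier
            from sklearn.model_selection import train_test_split

            # Classic toy dataset: 150 iris flowers, 4 features, 3 classes
            X, y = load_iris(return_X_y=True)
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                      random_state=0)

            # A CPU-friendly classical model -- trains in milliseconds
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X_tr, y_tr)
            print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
            ```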
            Last edited by Grinness; 14 June 2023, 01:19 PM.

            Comment


            • #26
              Originally posted by Jabberwocky View Post

              The rocm-hip-setup.exe that Mathias mentioned is still missing. At the very least, it's not clear among the stable/nightly download options on the website.
              I am not sure why you want a '.exe' -- that would only work on Windows.
              On Linux, rocm-hip is provided by the package manager, e.g.:



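
              On a distro that packages HIP, that install is a one-liner. The package names below are my assumptions -- they differ between Debian/Ubuntu and Fedora packaging and per release:

              ```shell
              # Debian/Ubuntu (Debian's own ROCm packaging)
              sudo apt install hipcc rocm-hip-runtime-dev

              # Fedora
              sudo dnf install rocm-hip-devel
              ```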

              Comment


              • #27
                Originally posted by Mathias View Post
                A quick test on my laptop (Ryzen 5 6600HS, integrated RDNA2 GPU): it installs easily, rocminfo shows the device, Blender finds the device but segfaults on use. I remember ROCm not being particularly well supported on laptops(?). I will try later on my desktop...

                (If you say laptops aren't supported, add that to my list of arguments.)
                Your laptop GPU is gfx1035 (Rembrandt). Debian maintains a list of supported GPUs, though it may be different from Fedora's. I'm working on expanding the supported hardware and I've had some success with other architectures such as gfx1010 and gfx1031. You might have some luck with the workaround documented on the Debian support page for gfx1035, but I've never tested that particular hardware myself so I can't promise anything.
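
                For reference, the workaround usually cited for RDNA2 parts like gfx1035 is the HSA_OVERRIDE_GFX_VERSION environment variable, which makes the ROCm runtime load the kernels of a supported ISA instead. Treat this as an unsupported hack -- it is exactly the kind of thing that may segfault:

                ```shell
                # Pretend the GPU is gfx1030 (ISA 10.3.0), for which ROCm ships kernels
                export HSA_OVERRIDE_GFX_VERSION=10.3.0

                # Check which gfx target the runtime now reports
                rocminfo | grep -i gfx
                ```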

                In any case, if you encountered a segfault, I would recommend filing a bug against the package.

                Originally posted by Mathias View Post
                *edit* I think I may want more than rocm-hip. I think hipblas is still missing (and all the other HIP libraries).
                A hipblas package is pending on Debian. On Fedora it will probably take a while, as the HIP runtime was packaged only a couple of weeks ago and rocblas will take some time to wrangle.

                Comment


                • #28
                  Originally posted by dragorth View Post

                  Is it AMD or Intel that runs a customized version of Minix on a CPU inside their CPU?
                  Intel

                  Comment


                  • #29
                    Originally posted by Mathias View Post

                    This isn't about Arch or whatever distro already has packages. It has to be easy to install for most people, support most hardware (at least RDNA1+, including laptops), and most software has to support it. I knew what I bought, but it is still embarrassing when friends tell me about the latest amazing AI upscaling, NeRF, or LLM they tried, or about their Blender performance on their laptop, and the only thing I can answer is, "but I preferred open-source drivers".

                    The situation is kinda:

                    AMD: good gaming performance (excluding RT), good out-of-the-box gaming experience on Linux, compute is "DIY, best effort"
                    Nvidia: good to amazing gaming performance, every compute workload is supported, needs manual driver installation on Linux

                    *edit* I have some hope this will finally reach an acceptable level by the end of the year.
                    Yeah, I know -- Nvidia has always been better on the software-ecosystem side.

                    Comment
