The State Of ROCm For HPC In Early 2021 With CUDA Porting Via HIP, Rewriting With OpenMP


  • #11
    The article tells a sad story:
    1) buy into vendor lock-in
    2) desperately look for solutions



    • #12
      Is it possible to use ROCm to run [email protected] with an RDNA2 card on Fedora? Do I compile the OpenCL runtime from https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/tree/rocm-4.0.0 ?



      • #13
        Please. For the 100th time:
        • ROCm is not for you, the open-source fan who happens to have an RX580 to run a Linux desktop and play some games. (Some) things might work, but not out of the box, and not on any given distro and kernel version.
        • ROCm is also not for you, the beginner computer science student with AMD hardware (laptop/desktop) wishing to take some neural network course in Caffe/Tensorflow/Torch. While it might work, you might spend 10x more time preparing a working environment compared to your classmates.

        ROCm is targeted at enterprises like LUMI, with dedicated workloads and developers who will create/port code to work specifically on given hardware.

        ROCm being open source is just AMD policy, and it's nice. Do not compare it to a project like Mesa, where multiple hardware vendors (Intel/AMD/nouveau) and multiple organizations (e.g. AMD, Intel, Valve, Collabora, etc.) are contributing.
        Last edited by gsedej; 22 February 2021, 05:43 AM. Reason: better wording



        • #14
          Originally posted by gsedej View Post
          Please. For the 100th time:
          • ROCm is not for you, the open-source fan who happens to have an RX580 to run a Linux desktop and play some games. (Some) things might work, but not out of the box, and not on any given distro and kernel version.
          • ROCm is also not for you, the beginner computer science student with AMD hardware (laptop/desktop) wishing to take some neural network course in Caffe/Tensorflow/Torch. While it might work, you might spend 10x more time preparing a working environment compared to your classmates.

          ROCm is targeted at enterprises like LUMI, with dedicated workloads and developers who will create/port code to work specifically on given hardware.

          ROCm being open source is just AMD policy, and it's nice. Do not compare it to a project like Mesa, where multiple hardware vendors (Intel/AMD/nouveau) and multiple organizations (e.g. AMD, Intel, Valve, Collabora, etc.) are contributing.
          Don't be so condescending. I use CUDA all day long on a Jetson TX2 dev board. I use Torch all day long with my CPU. Many open source packages will use OpenCL if they can. You don't have to be an enterprise guy to want to unlock the GPU compute in your Radeon (and mine is a Radeon VII with 3.5 teraflops of FP64 - in case you don't know what that is, it's high-precision floating point, which I need for yield curve optimization, and of which even an RTX 3080 will only give you 0.5). I sure hope you're not working for AMD, because if that's the attitude at ROCm, then "Long Live CUDA" - you'll never catch up if ridiculous and insulting arguments like yours have any validity whatsoever.
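For what it's worth, the FP64 figures quoted above can be sanity-checked against public spec sheets. A minimal sketch (shader counts, boost clocks, and FP64:FP32 ratios below are assumptions taken from vendor specs, not measurements):

```python
def fp64_tflops(shaders, boost_ghz, fp64_ratio):
    """Peak FP64 throughput: shaders * 2 FLOPs per FMA * clock * FP64 rate."""
    return shaders * 2 * boost_ghz * fp64_ratio / 1000.0

# Radeon VII: 3840 shaders, ~1.75 GHz boost, FP64 at 1:4 of FP32
radeon_vii = fp64_tflops(3840, 1.75, 1 / 4)
# RTX 3080: 8704 shaders, ~1.71 GHz boost, FP64 at 1:64 of FP32
rtx_3080 = fp64_tflops(8704, 1.71, 1 / 64)

print(f"Radeon VII: {radeon_vii:.2f} TFLOPS FP64")  # ~3.4
print(f"RTX 3080:   {rtx_3080:.2f} TFLOPS FP64")    # ~0.47
```

So the roughly 7x FP64 gap between a consumer Radeon VII and an RTX 3080 checks out; it comes from the FP64:FP32 rate (1:4 vs 1:64), not raw shader count.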

          I have worked at some of the richest financial shops on earth and I can tell you that before they'll spend hundreds of thousands on a technology, they will first test it on a few machines. ROCm does not inspire any confidence whatsoever. Let me give you an example: It is easy to find CUDA on mainstream cloud instance providers. It is impossible to find ROCm. I rest my case.

          https://aws.amazon.com/marketplace/s...archTerms=cuda

          https://aws.amazon.com/marketplace/s...archTerms=rocm

          Last edited by vegabook; 22 February 2021, 07:04 AM.



          • #15
            Originally posted by vegabook View Post

            Don't be so condescending. I use CUDA all day long on a Jetson TX2 dev board. I use Torch all day long with my CPU. Many open source packages will use OpenCL if they can. You don't have to be an enterprise guy to want to unlock the GPU compute in your Radeon (and mine is a Radeon VII with 3.5 teraflops of FP64 - in case you don't know what that is, it's high-precision floating point, which I need for yield curve optimization, and of which even an RTX 3080 will only give you 0.5). I sure hope you're not working for AMD, because if that's the attitude at ROCm, then "Long Live CUDA" - you'll never catch up if ridiculous and insulting arguments like yours have any validity whatsoever.
            Thanks for replying. I didn't want to sound negative, just "realistic" - the general Phoronix user/student/enthusiast should give it a good thought when considering using ROCm/Radeon for compute (outside OpenCL). I was using ROCm almost from day 1 on an RX480. While Caffe and Tensorflow did work somehow, I could not recommend it to a colleague as "it just works like your Nvidia". My new work PC included an RX 5700 XT (by my wish), since I expected ROCm to be even more user friendly and stable. Even knowing the risk of bad support, I was still disappointed. But it was my risk. Most of my work is done on a server with Titan GPUs, and my desktop is just a "terminal" where I develop on top of Tensorflow, but it would be nice to be able to actually run things locally.

            edit: Yes, I do realize what FP64, or "double float", is and its relation to performance. Nvidia sure does charge a premium price for their Quadro, or whatever the V100 and P100 are called.
            Last edited by gsedej; 22 February 2021, 07:27 AM.



            • #16
              Meanwhile still no support for RDNA/RDNA2 in ROCm. What a joke. Too bad Nvidia GPUs can't be bought ATM...



              • #17
                Originally posted by gsedej View Post

                Thanks for replying. I didn't want to sound negative, just "realistic" - the general Phoronix user/student/enthusiast should give it a good thought when considering using ROCm/Radeon for compute (outside OpenCL). I was using ROCm almost from day 1 on an RX480. While Caffe and Tensorflow did work somehow, I could not recommend it to a colleague as "it just works like your Nvidia". My new work PC included an RX 5700 XT (by my wish), since I expected ROCm to be even more user friendly and stable. Even knowing the risk of bad support, I was still disappointed. But it was my risk. Most of my work is done on a server with Titan GPUs, and my desktop is just a "terminal" where I develop on top of Tensorflow, but it would be nice to be able to actually run things locally.

                edit: Yes, I do realize what FP64, or "double float", is and its relation to performance. Nvidia sure does charge a premium price for their Quadro, or whatever the V100 and P100 are called.
                And thanks for your reply. Yes, it is indeed our risk with ROCm, but look - I've been pretty vocal here, yet I think the stack will come together at some stage. I'm sticking with AMD through thick and thin. I just hope others will too. That said, I will watch very closely what Intel is up to in the dGPU space, because it seems they have a lot of resources dedicated to software support.



                • #18
                  I hope that the transition to ROCm isn't as bumpy as I've heard it has been for those that have done it in the past. Then again, I've been out of the academic HPC world for a few years already, so my impressions may also be somewhat out of date.

                  Also, it's worth remembering that ROCm isn't really meant for everyday desktop use cases, which most of the complaints about it are centered around. In academic HPC, application developers and users don't maintain the system they're running their workloads on; they submit them remotely to a big machine somewhere using something like SLURM. They may be using some other tool for LUMI, but that's at least what I used back when I did HPC work at university. Not having support for the latest mainline kernels also doesn't really matter, since supercomputers generally don't even run standard mainline kernels.
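To illustrate the workflow described above, a typical batch submission to such a cluster looks roughly like this (a sketch only: the partition name, module name, and binary name here are hypothetical and vary by site):

```shell
#!/bin/bash
#SBATCH --job-name=hip-demo    # name shown in the job queue
#SBATCH --partition=gpu        # hypothetical GPU partition name
#SBATCH --gres=gpu:1           # request one GPU on the node
#SBATCH --time=00:10:00        # wall-clock limit

# Load the site-provided ROCm stack; the module name is an assumption
module load rocm

# Launch the pre-compiled HIP/OpenMP binary on the allocated node
srun ./my_gpu_binary
```

The point is that the user never touches the node's kernel or driver stack: the site admins pin the ROCm version to whatever kernel the machine runs, which is why ROCm's narrow kernel support matters less there than on a desktop.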



                  • #19
                    Originally posted by L_A_G View Post
                    I hope that the transition to ROCm isn't as bumpy as I've heard it has been for those that have done it in the past. Then again, I've been out of the academic HPC world for a few years already, so my impressions may also be somewhat out of date.

                    Also, it's worth remembering that ROCm isn't really meant for everyday desktop use cases, which most of the complaints about it are centered around. In academic HPC, application developers and users don't maintain the system they're running their workloads on; they submit them remotely to a big machine somewhere using something like SLURM. They may be using some other tool for LUMI, but that's at least what I used back when I did HPC work at university. Not having support for the latest mainline kernels also doesn't really matter, since supercomputers generally don't even run standard mainline kernels.
                    ROCm's attitude of "we don't care about the desktop use case" is precisely what makes CUDA popular and ROCm/OpenCL unpopular. It's too hard to experiment with OpenCL through ROCm on one's own PC. OpenCL was supposed to be portable. Blender does GPU rendering via CUDA/OpenCL. LibreOffice implements OpenCL-assisted spreadsheet calculation. LeelaZero plays Go in OpenCL (and inspires various small projects doing AI on the GPU for various chess-type or strategy-type games). GPU compute could be huge if AMD really made their OpenCL implementation work out of the box. But sadly, no. They are busy chasing something out of reach of personal users, and after so many years their chase seems to have failed to make those CUDA-specific frameworks and applications runnable on AMD hardware out of the box.

                    Sometimes I wonder: if AMD could focus on making their OpenCL implementation rock solid and working out of the box (or at least within reach of a simple installation with a 100% success rate) on all their APUs/GPUs, the frameworks that AMD chased hard by creating HIP or OpenMP or whatever would have already ported themselves to OpenCL, and users would rejoice. The "general purpose" in GPGPU shouldn't be only about academic HPC.



                    • #20
                      Originally posted by pal666 View Post
                      The article tells a sad story:
                      1) buy into vendor lock-in
                      2) desperately look for solutions
                      You are right... sad story... no one should buy from fraudulent companies like Nvidia.

