Radeon ROCm 5.0 Released With Some RDNA2 GPU Support


  • #31
    Originally posted by Maxzor View Post

    Must have been referring to the homemade AMD archive: https://repo.radeon.com/rocm/apt/5.0...r/rocm-opencl/
    Thanks, are there any instructions on how to install it on Debian Testing? I tried a few .debs, and the dependencies quickly escalated to a package that is present neither in the AMD repository nor on my system.
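
    For what it's worth, AMD's documented apt setup for ROCm 5.0 (written against Ubuntu; Debian Testing is not an officially supported target, which is likely why dependency resolution breaks) looks roughly like this sketch:

    ```shell
    # Sketch of AMD's documented repo setup; the paths follow
    # repo.radeon.com's published layout. On Debian Testing the package
    # dependencies may still fail to resolve, as described above.
    wget -qO - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
    echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/5.0/ ubuntu main' \
        | sudo tee /etc/apt/sources.list.d/rocm.list
    sudo apt-get update
    sudo apt-get install rocm-opencl rocm-opencl-dev
    ```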

    Comment


    • #32
      Originally posted by Grinness View Post

      Not sure what you want to do.
      I run the latest kernel and Mesa (Arch), and I have ROCm + pytorch, torchvision, torchtext, torchaudio (with ROCm support) running fine.
      Mesa is for 3D applications; ROCm is for general-purpose compute on the GPU.
      I guess your question should only have AMDGPU-PRO and Mesa in parallel in scope, not ROCm
      This answers my question, thx! :-)

      Comment


      • #33
        Originally posted by ms178 View Post
        ROCm on consumer GPUs beyond GFX9 (Vega) seems to be a WIP; that is not a great signal to consumers. I know they are prioritizing their server parts and GFX9 is the technical base there, but still. Consumers care about support for their GPUs on day one (or whenever they can get their hands on one of these nowadays) - and not two years after release.
        I agree, but... how many consumers do GPGPU stuff on their home GPUs?

        Comment


        • #34
          Originally posted by Mark625 View Post
          So OpenCL is basically a dead technology now. Nvidia is still putting all new features into CUDA and leaving OpenCL at 3.0 (aka 1.2). AMD is focusing on ROCm runtime and multiple tools that make it easier to port CUDA code sets to the AMD stack. I expect that OpenCL runtime support will continue for a long time, but the language itself will not advance in any meaningful way going forward.
          I think the way OpenCL will survive is C++ for OpenCL.
          Why?
          Because there seems to be great interest in C++ for heterogeneous computation, and there are many signs of it, like SYCL, CUDA C++, AMD HIP, etc.
          It seems that all GPU (and not only GPU) languages will be different "dialects" of C++.

          Maybe I'm wrong
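
          For reference, device-side "C++ for OpenCL" looks roughly like the sketch below (a hypothetical kernel, based on clang's C++ for OpenCL mode; the kernel name is made up and the exact -cl-std flag may differ by clang version):

          ```cpp
          // C++ templates and type inference inside an OpenCL kernel.
          // Hypothetical sketch; compile with e.g. clang -cl-std=clc++ saxpy.clcpp
          template <typename T>
          T axpy(T a, T x, T y) { return a * x + y; }

          __kernel void saxpy(float a, __global const float *x,
                              __global const float *y, __global float *out) {
              auto i = get_global_id(0);  // work-item index
              out[i] = axpy(a, x[i], y[i]);
          }
          ```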

          Comment


          • #35
            Originally posted by boboviz View Post

            I agree, but... how many consumers do GPGPU stuff on their home GPUs?
            Oh, I don't know... how about 4 million users and 205K hosts, according to today's BoincStats BOINC combined stats? That is not a small number.

            Comment


            • #36
              Mesa 22 has been forked and will be released in a couple of months. A lot of work has gone into Clover, and it may be the solution for OpenCL on AMD that ROCm never was. But if Mesa 22 turns out to be a bust, Intel is starting to roll out its GPUs, and its compute stack is already working.

              Comment


              • #37
                ROCm might be useful for some of the machine-learning guys to port their stuff over to, but for average vectorized compute code outside of a data centre this thing is already dead. It died long ago, and a fifth version that still has big holes in it won't change that. The only real hope is W[eb]GPU and perhaps Vulkan Compute, which basically *force* the GPU makers to expose compute on their consumer cards in a cross[ish]-platform way.
                Last edited by vegabook; 10 February 2022, 07:05 PM.

                Comment


                • #38
                  Originally posted by vegabook View Post
                  The only real hope is W[eb]GPU and perhaps Vulkan Compute, which basically *force* the GPU makers to expose compute on their consumer cards in a cross[ish]-platform way.
                  Is it though? I am not sure that the metaverse thing, or even web usage in general, caters to all the compute needs in the world; eternal debate... And Vulkan Compute still seems to have big issues? The landscape is very complex and moving fast, so it is hard to have a view of it that is both accurate and complete.
                  Yet you seem to put quite a lot of energy into telling the story on these forums that ROCm is trash. I may lean too far the other way, oh well

                  Comment


                  • #39
                    Originally posted by boboviz View Post

                    I agree, but... how many consumers do GPGPU stuff on their home GPUs?
                    Mobile phone makers advertise that their new chips contain neural co-processors. 99.999% of phone users don't write neural software either.

                    "Consumers" by definition don't "make" stuff. They consume GPGPU applications if they are widely supported and available.

                    Comment


                    • #40
                      Really makes one wonder why the industry loves CUDA so much more when AMD's support of ROCm is this good?
                      /s

                      Comment
