Radeon ROCm 5.0 Released With Some RDNA2 GPU Support

Collapse
X
 
  • Filter
  • Time
  • Show
Clear All
new posts

  • Maxzor
    replied
    I agree with everything you said there^. I have been near the stack for two months while packaging it and have beefs too.
    I recently got in touch with various engineers at the Exascale Supercomputing Project (ElCap, ANL...) via Spack's Slack, and while AMD only has eyes for them as if they were their golden goose, even the ESP people are telling AMD to care about wider distribution of the stack through Linux distributions...
    Probably some dysfunction in upper management, or at least questionable strategic choices, given how many more cards they could sell to prosumers and consumers. In my opinion it is AMD engineers, volunteering to do the work with distros while not being fully approved by their hierarchy, who will save the stack, if it's not too late.
    Last edited by Maxzor; 11 February 2022, 05:46 AM.



  • vegabook
    replied
    Originally posted by Maxzor View Post
    Is it though? I am not sure that the metaverse thingy or even just web usage caters to all the compute needs in the world, eternal debate... And Vulkan Compute still seems to have big issues? The landscape is very complex and moving fast, so it is hard to have both an accurate and complete view of it.
    Yet you seem to be putting quite some energy into telling the story that ROCm is trash in these forums. I might lean too far the opposite way, oh well
    Big "vertical" stacks, for sure, cannot use consumer APIs like Vulkan or WebGPU. Think of the machine learning frameworks, robotics, video processing, etc., where libraries built by the manufacturer on their own stack will always give much deeper and more efficient access to the hardware's capabilities. Think cuDNN, cuBLAS, rocBLAS, all this sector-specific stuff. But take a vector programming language like R or NumPy, or even emerging vector languages like Futhark or BQN: these are generalist vector languages which can easily target a smaller cross-platform abstraction, which I believe WebGPU and Vulkan can enable quite easily.

    My beef with AMD is indeed perhaps a bit obsessive, but I have been burned hard by ROCm, having wasted years waiting for it to be usable from a consumer standpoint, only to be serially disappointed, and I don't see much in v5 yet that will democratise GPU compute access. Nvidia gets a lot of flak for its closed-source strategy, and I don't like it either, but at least it doesn't stupidly segment the market by trying to make it hard for consumers to do GPU compute (FP64 aside). I keep reminding people in these forums that the exact same CUDA stack works perfectly, out of the box, from a 100,000 USD DGX cluster right down to a 59-dollar Jetson Nano, including all the GTX/RTX cards in between. It is this ease of access which has made it dominant, to my open-source chagrin.
    Last edited by vegabook; 11 February 2022, 04:34 AM.



  • Maxzor
    replied
    Originally posted by piorunz View Post

    Thanks, are there any instructions on how to install it on Debian Testing? I tried a few .debs and the dependencies escalated quickly to a package which is present neither in the AMD repository nor in my system.
    Which package?



  • flavonol
    replied
    Originally posted by bridgman View Post

    The ROCm stack up to OpenCL on RDNA 1/2 has been included in the AMDGPU-PRO driver packages for over a year - it's the math libraries and ML framework support (which have optimized assembly code for each GPU) that are still under development.
    I see. I've been relying on the ROCm docs for installation instructions (which didn't work properly for me last time I tried, although that was a while ago) and the table of supported GPUs.

    Originally posted by bridgman View Post
    I believe our range of supported distros is actually a bit wider than Intel's ...
    Looks like you're right in the sense that older versions of Red Hat/CentOS & SUSE are supported, although NEO currently appears to be packaged for more distributions. I recognize that being packaged for a distribution ≠ support for said distribution, but it is nice.

    Originally posted by bridgman View Post

    ... although I expect both will continue to grow as a consequence of current distro packaging/integration efforts.
    I hope your expectation is right. I'd especially appreciate Fedora Workstation support, as it's more up-to-date than Red Hat and happens to be what I use currently.
    Last edited by flavonol; 11 February 2022, 03:34 AM.



  • clintar
    replied
    Really makes one wonder why the industry loves CUDA so much more when AMD's support of ROCm is this good?
    /s



  • billyswong
    replied
    Originally posted by boboviz View Post

    I agree, but... how many consumers make GPGPU stuff on their home GPU?
    Mobile phone makers advertise that their new chips contain neural co-processors. 99.999% of phone users don't write neural software either.

    "Consumers" by definition don't "make" stuff. They consume GPGPU applications if they are widely supported and available.



  • Maxzor
    replied
    Originally posted by vegabook View Post
    The only real hope is W[eb]GPU and perhaps Vulkan Compute, which basically *force* the GPU makers to expose compute on their consumer cards in a cross[ish]-platform way.
    Is it though? I am not sure that the metaverse thingy or even just web usage caters to all the compute needs in the world, eternal debate... And Vulkan Compute still seems to have big issues? The landscape is very complex and moving fast, so it is hard to have both an accurate and complete view of it.
    Yet you seem to be putting quite some energy into telling the story that ROCm is trash in these forums. I might lean too far the opposite way, oh well



  • vegabook
    replied
    ROCm might be useful for some of the machine learning guys to port their stuff over to, but for the average vectorized compute code outside of a data centre this thing is already dead. It died long ago, and a fifth version still with big holes in it won't change that. The only real hope is W[eb]GPU and perhaps Vulkan Compute, which basically *force* the GPU makers to expose compute on their consumer cards in a cross[ish]-platform way.
    Last edited by vegabook; 10 February 2022, 07:05 PM.



  • MadeUpName
    replied
    Mesa 22 has been forked and will be released in a couple of months. There has been a lot of work done on Clover, and it may be the solution to OpenCL on AMD that ROCm never was. But if Mesa 22 turns out to be a bust, Intel is starting to roll out, and their stack is already working.



  • Keith Myers
    replied
    Originally posted by boboviz View Post

    I agree, but... how many consumers make GPGPU stuff on their home GPU?
    Oh, I don't know... how about 4 million users and 205K hosts, according to today's BoincStats combined BOINC stats? That is not a small number.

