Radeon ROCm 5.0 Released With Some RDNA2 GPU Support
Originally posted by Grinness:
Not sure what you want to do. I run the latest kernel and Mesa (Arch), and I have ROCm plus PyTorch, torchvision, torchtext and torchaudio (with ROCm support) running fine.
Mesa is for 3D applications; ROCm is for general-purpose compute on the GPU.
I guess your question should only cover AMDGPU-PRO and Mesa in parallel, not ROCm.
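For reference, before layering PyTorch and friends on top, it is easy to sanity-check that the ROCm HIP runtime actually sees the card with a few lines of C++. A minimal sketch, assuming ROCm and hipcc are installed (build with: hipcc check_hip.cpp -o check_hip):

// Minimal sketch: enumerate the GPUs visible to the ROCm/HIP runtime.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    hipError_t err = hipGetDeviceCount(&count);
    if (err != hipSuccess) {
        std::printf("HIP runtime error: %s\n", hipGetErrorString(err));
        return 1;
    }
    std::printf("HIP devices found: %d\n", count);
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        hipGetDeviceProperties(&prop, i);
        // gcnArchName reports the gfx target (e.g. gfx1030 for RDNA2) on recent ROCm releases.
        std::printf("  [%d] %s (%s)\n", i, prop.name, prop.gcnArchName);
    }
    return 0;
}

If this lists the GPU, a ROCm build of PyTorch should pick it up through its usual CUDA-style device API.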
Originally posted by ms178:
ROCm on consumer GPUs beyond GFX9 (Vega) still seems to be a work in progress, which is not a great signal to consumers. I know they are prioritizing their server parts and GFX9 is the technical base there, but still. Consumers care about support for their GPUs on day one (or whenever they can actually get their hands on one these days), not two years after release.
Originally posted by Mark625:
So OpenCL is basically a dead technology now. Nvidia is still putting all new features into CUDA and leaving OpenCL at 3.0 (aka 1.2). AMD is focusing on the ROCm runtime and multiple tools that make it easier to port CUDA code bases to the AMD stack. I expect that OpenCL runtime support will continue for a long time, but the language itself will not advance in any meaningful way going forward.
Why?
Because there seems to be great interest in C++ for heterogeneous computing, and there are many signs of it, such as SYCL, CUDA C++, AMD HIP, etc.
It seems that all GPU (and not only GPU) languages will be different "dialects" of C++.
Maybe I'm wrong.
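To illustrate the "dialects of C++" point: a HIP kernel is, line for line, the same shape as its CUDA C++ counterpart, which is what makes AMD's hipify porting tools workable at all. A rough sketch (purely illustrative, assuming a working hipcc; swap the hip* calls for their cuda* equivalents and it builds with nvcc):

// Vector add in HIP -- structurally identical to the CUDA C++ version.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    const int block = 256;
    const int grid = (n + block - 1) / block;
    vecAdd<<<grid, block>>>(da, db, dc, n);  // same triple-chevron launch syntax as CUDA
    hipDeviceSynchronize();

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("c[0] = %f\n", hc[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}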
ROCm might be useful for some of the machine learning guys to port their stuff over to, but for average vectorized compute code outside of a data centre this thing is already dead. It died long ago, and a fifth version still with big holes in it won't change that. The only real hope is W[eb]GPU and perhaps Vulkan Compute, which basically *force* the GPU makers to expose compute on their consumer cards in a cross[ish]-platform way.
Last edited by vegabook; 10 February 2022, 07:05 PM.
Originally posted by vegabook:
The only real hope is W[eb]GPU and perhaps Vulkan Compute, which basically *force* the GPU makers to expose compute on their consumer cards in a cross[ish]-platform way.
Yet you seem to be putting quite a lot of energy into telling the story that ROCm is trash on these forums. I might be going too far in the opposite direction, oh well.
Originally posted by boboviz:
I agree, but... how many consumers make GPGPU stuff on their home GPU?
"Consumers" by definition don't "make" stuff. They consume GPGPU applications if they are widely supported and available.