Originally posted by bridgman
Is there an effort to simplify/streamline the adoption of ROCm for end users?
Conceptually, ROCm/HIP is fascinating and very promising, but it does not compare with the simplicity of installation (and third-party support) of NVIDIA's stack.
Years ago I used to do CUDA on NVIDIA.
Now I am on Polaris (Arch Linux), and I just spent four days (probably more, between the AMD website, git repos, and the Arch AUR) trying to get PyTorch, torchvision, and torchtext to run.
I tried everything from native ROCm packages on Arch to Docker images. An ugly mess!
Finally, last night, by creating my own PKGBUILD (and with a lot of kicking and screaming), I got everything installed...
... only to find that GPU compute stalls: no errors, nothing in dmesg, just a hung Jupyter notebook running a torchtext tutorial.
How do I debug this?
How do I ask for help? (I assume the first reply will be "Arch is not supported," followed by "oh, and Polaris is not officially supported either.")
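For what it's worth, a minimal sanity check I'd try before blaming torchtext (a hypothetical debug sketch, not an official procedure; ROCm builds of PyTorch reuse the `torch.cuda` API):

```python
# Sanity-check a ROCm PyTorch install. On a ROCm build, torch.version.hip
# is a version string (it is None on CUDA/CPU builds) and the GPU is
# exposed through the regular torch.cuda API.
import torch

print("HIP version:", torch.version.hip)
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # A tiny matmul: if even this hangs, the stall is below PyTorch
    # (HIP runtime / kernel driver), not in torchtext or the notebook.
    x = torch.randn(64, 64, device="cuda")
    print("Matmul sum:", (x @ x).sum().item())
```

If the small matmul also stalls, that points at the ROCm runtime or kernel driver rather than anything in the notebook.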
On top of that, ROCm 4.1 is out and I have to recompile EVERYTHING...
Come on...