The people dual booting have been waiting for this update for a very long time. IIRC the new MIOpen was the thing preventing PyTorch from running on Windows. I think this is the first time we can do cross-platform benchmarking of native ROCm and PyTorch. I'm excited to do some tests around this.
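For anyone wanting to try the same cross-platform comparison, a minimal sketch of a sanity check plus matmul timing might look like the following. This assumes a ROCm (or CUDA) PyTorch build; on ROCm builds the GPU is exposed through the `torch.cuda` API and `torch.version.hip` is set instead of `torch.version.cuda`. The `bench_matmul` helper is hypothetical, not from any official tool.

```python
# Hedged sketch: quick sanity check + matmul benchmark for a ROCm/CUDA PyTorch build.
import time

try:
    import torch
except ImportError:
    torch = None
    print("PyTorch not installed; skipping benchmark")

def bench_matmul(device: str, n: int = 4096, iters: int = 10) -> float:
    """Return average seconds per n x n matmul on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)                    # warm-up run
    if device != "cpu":
        torch.cuda.synchronize()          # wait for async GPU work to finish
    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    if device != "cpu":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

if torch is not None:
    # torch.version.hip is None on CUDA/CPU-only builds, a version string on ROCm builds
    print("HIP runtime:", getattr(torch.version, "hip", None))
    device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm GPUs show up as "cuda"
    print(f"{device}: {bench_matmul(device, n=1024):.4f} s per 1024x1024 matmul")
```

Running the same script on Linux and Windows against the same GPU should give a rough apples-to-apples number, though driver and kernel differences will add noise.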
What's different for you with 6.1? Weren't you already able to enjoy it with 6.0? Or is it just that morons like Tomshardware still think that high-end AMD GPUs can't do SD/LLM at all?
To be fair, which iGPUs from other vendors (without dedicated AI hardware) have good support?
I have a 5700G with 128GB RAM (used as a VM host); on paper it has potential for compute: a 16GB UMA frame buffer and IIRC a max of 64GB SVM. It's a low-power system that feels like it has a lot to offer for SD/LLMs, but in practice I think it's a waste of time trying to do anything meaningful on it, because everyone at AMD is focused on the high-end chips. The whole world is looking at Nvidia right now and everyone else wants a piece of the hype pie. I read that most people doing ROCm compute straight up disable their iGPUs. 🙈​