AMD ROCm 6.0 Now Available To Download With MI300 Support, PyTorch FP8 & More AI
Originally posted by oleid:
Do you have a link?

https://youtu.be/pVl25BbczLI?si=poMTbfa-TsQB_yhU&t=1684 ?
Originally posted by ms178:
Dropping Vega support is also a big regression in that regard, IMHO.
Originally posted by s_j_newbury:
I wouldn't say there is nothing explicitly excluding any GPUs. Having just built a full stack for Vega10 and Raven Ridge, there are a couple of things which come to mind:
- rocFFT filters out support at build time for Vega10 and Polaris.
- Composable_Kernel uses inline instructions only introduced in gfx906, without fallbacks. I've actually written fallbacks for this and will create a PR when I get around to it.
Other than that, Vega10 (RX Vega 64) works absolutely fine. Better than ever, actually. I've not managed to get Raven Ridge working yet, though; I think it's a problem with the kernel driver.
Originally posted by s_j_newbury:
The problem is a communications failure from AMD. "Support", to AMD, means active, dedicated developer support for specific products sold to customers: "You buy these products and we'll make sure they work for you, and we'll actively work to solve any issues you may have." It has nothing to do with what most people *think* it means: "Code is in place to work with *these GPUs*; otherwise you're out of luck." That's why people get so annoyed and indignant. "Supported" as a term is just too ambiguous. What's supported? The hardware? The software? Certain cards, or families? No, what AMD means is *customers*: you will get "support" if you own these products!
As mentioned earlier in the thread by bridgman:
Yep... I'm trying to get two changes implemented:
#1 - distinguish between "not tested" and "not supported in the code" as everyone has suggested
#2 - for "not tested" parts do some kind of periodic testing so every part at least gets covered once during a release cycle even if not final QA
There is other work already going on to increase the breadth of supported hardware - the points above are just for chips/boards that still don't fit into the "tested at all points in the development cycle including final QA" coverage that we require to call something supported.
Originally posted by superm1:
Something I want to mention is that this is just the official support stance. It's not necessarily what works. There is nothing in the software stack to explicitly exclude any GPU. For example, I can run ROCm-related stuff on a mobile 7700S even though it's not in that list. Think of it more like "This is what AMD actively tests on, and if you have problems they'll be willing to help with them."

I wouldn't say there is nothing explicitly excluding any GPUs:
- rocFFT filters out support at build time for Vega10 and Polaris.
- Composable_Kernel uses inline instructions only introduced in gfx906, without fallbacks. I've actually written fallbacks for this and will create a PR when I get around to it.
Other than that, Vega10 (RX Vega 64) works absolutely fine. Better than ever, actually. I've not managed to get Raven Ridge working yet, though; I think it's a problem with the kernel driver.
Last edited by s_j_newbury; 16 December 2023, 05:08 PM.
I just built llama.cpp with ROCm 6.0 and it runs just fine on my 7900 XT. I think they forgot to update the supported cards list. RDNA2 cards, I guess, are still supported as well.
What I didn't see was the miraculous 2.6x LLM speed improvement that was shown a few days ago at the AI event.
llama-bench gave me the same values when I compiled & tested it with ROCm 5.7.2, 5.7.3 and 6.0.
Last edited by bog_dan_ro; 16 December 2023, 12:30 PM.
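For anyone wanting to reproduce this comparison, a build along these lines should work. The flag names follow the llama.cpp build instructions from around that time, and the GPU target and model path are assumptions to adapt (the 7900 XT is gfx1100):

```shell
# Sketch: build llama.cpp against ROCm/HIP and benchmark a model.
# LLAMA_HIPBLAS and AMDGPU_TARGETS are as documented in the llama.cpp
# README of the period; /opt/rocm is the default ROCm install prefix.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ \
    cmake -B build -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS=gfx1100
cmake --build build -j

# Run the bundled benchmark; repeat after switching ROCm versions
# (5.7.2, 5.7.3, 6.0) to compare tokens/s.
./build/bin/llama-bench -m models/your-model.gguf
```

Running the same llama-bench invocation under each ROCm install is what makes the tokens/s numbers directly comparable.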
Originally posted by Lycanthropist:
Very nice Christmas present, AMD!

I hope ROCm 6 makes it to the PyTorch nightly repository soon. It is still at 5.7 at the moment:
https://pytorch.org/get-started/locally/
Cheers!
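Until a ROCm 6 build lands, the ROCm 5.7 nightly can be installed with the index URL shown by the selector on that page (the URL pattern below is what the page listed at the time; a rocm6.0 index would presumably follow once nightlies pick it up):

```shell
# Install the PyTorch nightly wheels built against ROCm 5.7
# (Linux-only; from the pytorch.org/get-started/locally selector).
pip3 install --pre torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/nightly/rocm5.7

# Quick sanity check: torch.version.hip is set on ROCm builds, and
# torch.cuda.is_available() reports True when the HIP backend sees a GPU.
python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```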