Originally posted by Anderse4
Glancing over the bug report I linked, I noticed it is still open in IPEX, though it is possible they simply did not do a good job of closing, updating, or cross-linking the issue if they did do something to address it.
I am (or was) aware that PyTorch 2.5 only recently gained initial official native "xpu" support outside of IPEX, though it is not clear to me how the two options compare in capability at this point, nor to what extent the native xpu support is, in most or all respects, equal to or better than what is now achievable with IPEX+PyTorch. Obviously there is still older software out there that is configured, written, and documented only for IPEX use with PyTorch (including, AFAIK, the still-remaining IPEX support for PyTorch 2.5), so it would be nice for this to be fixed even if IPEX's relevance wanes over time and across PyTorch versions, unless it is already quite obsolete. In some cases, though, it may be easy enough to port software that was using IPEX to the native "xpu" backend, as in the sketch below.
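As a rough illustration of that kind of port, here is a minimal sketch assuming a PyTorch 2.5+ build with the xpu backend available (and, for the legacy path, the separate intel_extension_for_pytorch package); the tiny model and tensor shapes are made up for the example:

```python
import torch

# Legacy IPEX-era pattern (separate intel_extension_for_pytorch package):
#   import intel_extension_for_pytorch as ipex
#   model = model.to("xpu")
#   model = ipex.optimize(model)

# Native "xpu" pattern on PyTorch 2.5+; no IPEX import needed.
model = torch.nn.Linear(128, 64)  # stand-in model for the example

if torch.xpu.is_available():
    device = torch.device("xpu")
    model = model.to(device)
    x = torch.randn(32, 128, device=device)
    y = model(x)
    print(y.shape, y.device)
else:
    print("No xpu device visible; staying on CPU")
```

For simple code the port is often little more than dropping the ipex import and the optimize() call, though anything that relied on IPEX-specific fused ops or optimizations would need a closer look.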
I also recently noticed this:
...which speaks of some possible workarounds at the compute-runtime / OpenCL / Level Zero layers, where the overall 4GB limit is otherwise also imposed. Someone in the other thread I mentioned had described coming up with DIY allocation workarounds, so maybe these changes improve upon, or echo, those unofficial community ones for OpenCL / Level Zero.
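For what it's worth, the opt-in that Intel's compute-runtime documentation describes for getting past the 4GB limit happens at buffer-creation and kernel-build time. Below is a minimal PyOpenCL sketch of that pattern; I am going from the documented `-cl-intel-greater-than-4GB-buffer-required` build option and the `CL_MEM_ALLOW_UNRESTRICTED_SIZE_INTEL` flag, and the raw flag value (1 << 23) is taken from the Intel extension headers as I remember them, so verify the constants against your own driver/headers:

```python
import pyopencl as cl

ctx = cl.create_some_context()   # select the Intel GPU platform/device
queue = cl.CommandQueue(ctx)

# Assumption: raw value of CL_MEM_ALLOW_UNRESTRICTED_SIZE_INTEL from the
# Intel OpenCL extension headers; pyopencl exposes no named constant for it.
CL_MEM_ALLOW_UNRESTRICTED_SIZE_INTEL = 1 << 23

nbytes = 6 * 1024**3             # deliberately larger than 4GB
buf = cl.Buffer(
    ctx,
    cl.mem_flags.READ_WRITE | CL_MEM_ALLOW_UNRESTRICTED_SIZE_INTEL,
    size=nbytes,
)

src = """
__kernel void fill(__global uchar *dst) {
    dst[get_global_id(0)] = 1;
}
"""
# Per compute-runtime's programmers guide, kernels that touch >4GB
# buffers must also be built with this option.
prg = cl.Program(ctx, src).build(
    options=["-cl-intel-greater-than-4GB-buffer-required"])

prg.fill(queue, (nbytes,), None, buf)
queue.finish()
```

Level Zero has an equivalent opt-in via its experimental relaxed-allocation-limits descriptor, so the same general shape applies one layer down.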
I have been meaning to revisit the state of ARC7 compute and its development, but I haven't had enough time to track what has happened over the last several months (besides the PyTorch 2.5 news), so it is good if it has improved in any way since I last investigated.
I am curious whether the rumors of a Battlemage-series card with 24GB of VRAM might come true for a general consumer release, and if so what that card will be, since that is roughly the possibility I was hoping to eventually see from a newer ARC card generation.