One of the more interesting and original open-source projects started in 2020 was ZLUDA, a drop-in CUDA implementation for Intel graphics.
ZLUDA v2 Released For Drop-In CUDA On Intel Graphics
-
I'm actually quite excited to hear about this. Will have to do a lot more looking into it, and benchmark with some in-house software to see if it might be viable. While it wouldn't completely remove our CUDA dependence, for the programs we have ourselves, or have the source for, not needing $10,000 GPUs for some projects would be amazing.
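For anyone wanting to try the same kind of in-house testing: ZLUDA's README describes usage on Linux as a pure library substitution, with no recompilation of the application. A minimal sketch, where `/opt/zluda` and `./my_cuda_app` are hypothetical placeholders rather than anything from this thread:

```shell
# Hypothetical paths: /opt/zluda holds ZLUDA's libcuda shim,
# ./my_cuda_app is an existing, unmodified CUDA binary.
# LD_LIBRARY_PATH makes the dynamic linker resolve libcuda.so.1
# from ZLUDA's directory instead of the NVIDIA driver's.
LD_LIBRARY_PATH=/opt/zluda ./my_cuda_app
```

Whether a given binary then actually runs comes down to which parts of the CUDA API ZLUDA has implemented so far.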
-
Originally posted by kpedersen View Post
They would be shooting themselves in the foot. Companies will not be pleased if NVIDIA keeps making breaking changes to their API and will seek alternatives.
The only risk is if NVIDIA keep adding "new" little features and then spreading the word that ZLUDA is "out of date". Luckily only students (and a few too many open-source developers) blindly use the latest and greatest gimmicks needlessly.
Nvidia likes to strong-arm customers and clients; just a few days ago they started throttling mining performance on gaming GPUs while releasing dedicated mining hardware. It's a good idea in principle, and gamers would probably love them for it, but the implementation is not very reassuring. Modifying the performance of an already-sold product through a driver update... I don't care what the application is, this should not be legal IMO. Let's see if anything comes of it.
-
Originally posted by Paradigm Shifter View Post
not needing $10,000 GPUs for some projects would be amazing.
The other thing that suggests is that whatever you're doing with it probably uses more advanced API functions that they likely haven't translated yet (i.e. tensor cores). And who knows what their level of support for precompiled CUDA kernels is, if any.
-
Seems like it might be possible to support AMD via hipSYCL? The Phoronix article was a bit misleading about the oneAPI-for-AMD support, though, so linking to a comment from someone involved in the project with their corrections:
-
Originally posted by coder View Post
If you need such Nvidia GPUs, then you're not going to find anything comparable from Intel for a while. And when you do, it'll probably cost $9k (if not $12k).
-
Originally posted by Paradigm Shifter View Post
it's just those few occasions where we do need more, we need a lot more.
At least in the case of AI training, the usual advice is to buy your own only when you'll keep it busy most of the time.
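That utilization argument can be made concrete with a toy break-even calculation. Assuming the ~$10,000 GPU figure mentioned earlier in the thread and a hypothetical $2.50/hour cloud rate (the rate is made up for illustration, not from this thread):

```shell
gpu_cost_dollars=10000   # upfront cost of owning the GPU (figure from the thread)
cloud_rate_cents=250     # hypothetical cloud price: $2.50/hour, kept in cents for integer math
breakeven_hours=$(( gpu_cost_dollars * 100 / cloud_rate_cents ))
echo "break-even at ${breakeven_hours} GPU-hours"   # prints "break-even at 4000 GPU-hours"
```

4,000 GPU-hours is roughly five and a half months of 24/7 use, which is why the buy-your-own advice hinges on sustained utilization.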
-
Originally posted by coder View Post
I have no first-hand experience with it, but cloud-based GPU services seem like they'd make economic sense for cases where a very large amount of compute is needed infrequently.
-
Originally posted by Paradigm Shifter View Post
I had hopes for AVX-512, and for AMD too, but as yet, CUDA still rules the roost.
And AMD's problem (in AI) was always that they were competing against the previous generation of Nvidia GPUs. But it looks like their Matrix Cores might've finally leap-frogged Nvidia, at least for some use cases. Now, if they can just get out of their own way and sort out the software situation. To be really successful, though, they're going to have to find a way to build more enthusiasm for their GPU and compute products.