Michael
The video is up:
Red Hat Developers Working Towards A Vendor-Neutral Compute Stack To Take On NVIDIA's CUDA
-
I don't think being a good alternative to CUDA will be enough. Of course, a fully open source stack will by itself bring in a lot of people who don't already use CUDA, but if you want folks to move off CUDA onto this new stack, it needs some killer feature that everyone would think is cool as hell and that CUDA can't replicate.
I think that whole scenario is wishful thinking. But then again, that's what the kernel started out as, it's what Mesa started as, and Wine, and many others... Soooo... it could work out.
-
Any viable competitor to CUDA needs several things:
1) macOS support - a lot of developers use Macs, and given Apple's lack of Nvidia support there is a built-in market / pent-up demand for an alternative to CUDA.
2) Per 1), it can't be based on OpenCL, as OpenCL is dead on macOS. Vulkan is the future and can be made to work on macOS.
3) Significant investment in highly optimized runtime libraries - this is why CUDA has won so far. This will require someone to supply an ongoing budget for specialized developers to create and continue to maintain such libraries.
4) Significant investment in nouveau - companies aren't going to throw away their investment in Nvidia hardware, and they aren't going to jump ship from one monopoly to a different monopoly.
5) A major rethink on approach by the non-Nvidia hardware vendors, who all so far seem to be attempting merely to replace Nvidia by creating their own non-standard silos - see whatever mess AMD is attempting at the moment, or Intel's Neural Compute Stick.
6) Work on cross-platform tool support - this is a current weakness of CUDA, given the problematic support of Visual Studio on Windows and the Eclipse requirement on Linux. Build tooling around Visual Studio Code / Atom, which can be made to work on any platform.
In short, being an open source alternative to CUDA won't be enough; you need to provide the performance and other benefits to get people to rewrite that large base of installed code and retrain on a new stack.
-
Originally posted by airlied:
Clang is just the compiler; you still need the CUDA headers and runtime library installed. Those are copyrighted by NVIDIA, for a start. NVIDIA also controls the language direction, so they can add new features that won't work on other GPUs, and you are then left holding the bag.
Dave.
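To make that dependency concrete, here is a minimal sketch (the file name, kernel, and exact compiler flags are illustrative, not taken from the discussion above): Clang can compile the CUDA source itself, but the cuda_runtime.h header, the cudart library, and the GPU architecture being targeted all still come from NVIDIA's toolkit and hardware.

// saxpy.cu - illustrative only; built with Clang's CUDA support, roughly:
//   clang++ saxpy.cu --cuda-gpu-arch=sm_70 --cuda-path=/usr/local/cuda \
//           -L/usr/local/cuda/lib64 -lcudart -o saxpy
// Clang supplies the compiler, but the header and runtime library below
// ship with NVIDIA's proprietary CUDA toolkit.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));  // NVIDIA runtime API
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch syntax lowered by Clang
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}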
-
Originally posted by pal666:
And you seem to completely misunderstand my remark regarding binding on the GPU. If your "other language" does not run on the GPU, you can't bind it to anything running on the GPU, so there are no use cases for binding to other languages here.
Therefore, questions of bindings should be considered not just in the scope of "write it all in C++".
-
Originally posted by shmerl:
Linux is far from the only compiler target, so just because it's easy on Linux doesn't mean it's easy in general.
On other platforms there are other platform-specific standards for name mangling (like the MS ABI, which Clang already supports), which you can follow in the rare cases when it matters, because it does not matter at all when you compile from source or use the same compiler.
By the way, "other languages" usually have no issues with ABI because they have exactly one implementation. So if you are using an "other language", you can just as well dictate the one C++ implementation of your choice; it will have the same effect as having a single C++ ABI.
And you seem to completely misunderstand my remark regarding binding on the GPU. If your "other language" does not run on the GPU, you can't bind it to anything running on the GPU, so there are no use cases for binding to other languages here.
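A hedged illustration of the point being argued here (the function names and API shape below are invented for the example, not taken from any real project): other languages don't bind to device code at all; they bind to a host-side entry point, and that entry point is typically exported with C linkage precisely so the caller never has to care how any particular C++ compiler mangles names.

// gpu_api.cu - hypothetical binding surface, for illustration only.
#include <cuda_runtime.h>

// Device code: no foreign language binds to this directly; it only runs
// on the GPU and is reachable through the host wrapper below.
__global__ void scale_kernel(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// Host wrapper with C linkage: the exported symbol is literally
// "gpu_scale", so Python, Rust, Go, etc. can call it through their C FFI
// without knowing or matching any C++ name-mangling scheme.
extern "C" int gpu_scale(float* host_data, int n, float factor) {
    float* dev = nullptr;
    if (cudaMalloc(&dev, n * sizeof(float)) != cudaSuccess) return -1;
    cudaMemcpy(dev, host_data, n * sizeof(float), cudaMemcpyHostToDevice);
    scale_kernel<<<(n + 255) / 256, 256>>>(dev, n, factor);
    cudaMemcpy(host_data, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    return cudaGetLastError() == cudaSuccess ? 0 : -1;
}

A caller in another language would load the compiled shared object and invoke gpu_scale through its C foreign-function interface; nothing on that side ever sees the __global__ function or the CUDA launch syntax.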
-
Originally posted by pal666:
Considering Qt is dominating, they surely were much easier to implement than GTK+. Maybe even C++ had something to do with it. I'm not sure what you mean by symbol naming, but on Linux name mangling is standardized by the Itanium ABI, so stop making excuses.
C++ doesn't set any standard for it. Linux is far from the only compiler target, so just because it's easy on Linux doesn't mean it's easy in general.
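A small sketch of what both sides are describing (the namespace and function below are made up; the mangled string is what the Itanium C++ ABI used by GCC and Clang on Linux produces for it):

// mangling_demo.cpp - illustrative only.
namespace gpu {
// A plain C++ declaration: the C++ standard itself says nothing about
// the symbol name this becomes in an object file.
void launch_compute(float* data, int n);
}

// On Linux, GCC and Clang both follow the Itanium C++ ABI and emit:
//   _ZN3gpu14launch_computeEPfi
// MSVC on Windows uses its own, incompatible decoration scheme, so a
// binding generated against one compiler's symbols may not resolve
// against another's unless both follow the same ABI.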
-
It is interesting what path Intel is going to take with their GPUs coming in 2020. Make no mistake, they don't give a *hit about gaming. It's all about datacenters, AI and ML.
-
Originally posted by andrew.corrigan:
Thank you for the reply, but I don't understand. If NVIDIA proposes another API or loses interest, how does that stop us from using CUDA? CUDA support in Clang (frontend) was implemented without NVIDIA. If the proposed stack provided a SPIR-V backend, then NVIDIA is out of the picture.
Dave.
-
Originally posted by shmerl:
It doesn't matter what you bind to, it's the complexity of it. Sure, Qt has bindings, but see how difficult they were to implement.
Originally posted by shmerl:
All this could really be simplified if C++ cared to standardize its symbol naming and name mangling across all compilers.
Considering Qt is dominating, they surely were much easier to implement than GTK+. Maybe even C++ had something to do with it. I'm not sure what you mean by symbol naming, but on Linux name mangling is standardized by the Itanium ABI, so stop making excuses.