David Airlie's LPC2018 Presentation On An "Open-Source CUDA"
Michael Larabel
https://www.michaellarabel.com/
-
We get asked all the time by dev teams which API they should use, and we always say OpenCL. It's completely portable, avoids vendor lock-in, and has a long history and public track record. They can unit test on any laptop, desktop, server, container, grid, AWS, Azure, or whatever compute is available or cheaper.
Red Hat is getting twitchy because NVidia announced their own GPU-based container architecture earlier this year with full CUDA support. Red Hat would like to believe that they own or drive container standards.
There is also a movement inside several corporations to use NVidia for more machine learning in the datacenter. Since Red Hat is so prevalent there, it is logical that they would want access to the GPU in a less proprietary way.
Hence "open source" CUDA.
I have tested remote CUDA over InfiniBand and it is quite powerful. With 40GbE in the datacenter, remote CUDA can be exploited even further.
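The portability claim above is easy to see in practice: the same OpenCL host code enumerates whatever platforms and devices happen to be installed, whether that is a laptop iGPU, a server CPU, or a cloud instance. A minimal sketch, assuming an OpenCL SDK and ICD loader are available:

```cpp
// Enumerate every OpenCL platform and count its devices.
// Build (Linux): g++ list_cl.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);
    for (cl_uint p = 0; p < nplat; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof name, name, nullptr);
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       0, nullptr, &ndev);
        // Same binary, different hardware: CPU, GPU, or FPGA platforms
        std::printf("%s: %u device(s)\n", name, ndev);
    }
    return 0;
}
```

The output depends entirely on which vendor runtimes are installed, which is exactly the point: the code itself does not change between a laptop and a datacenter node.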
-
Originally posted by edwaleni View Post
We get asked all the time by dev teams which API they should use, and we always say OpenCL. […]
The talk was just me asking about what we can do upstream for Linux, not for Red Hat.
Dave.
-
Originally posted by wizard69 View Post
I’m not sure I would imply that big of a difference, but even if your description holds, it doesn’t make sense to start all over. Instead, build on OpenCL! Get the community that is using OpenCL now behind you and you might get some real progress, maybe even some help!
Now I could be missing something completely here, but at this point another GPU compute project is not needed. At least we don’t need something that ignores current infrastructure.
-
Originally posted by airlied View Post
This isn't a Red Hat-led thing; not sure I can say that enough times in print or in the video.
The talk was just me asking about what we can do upstream for Linux, not for Red Hat.
Dave.
Since I had read about prior Red Hat efforts to accommodate general-purpose compute through Nouveau, and about chats with NVidia, I saw your effort as another attempt to reconcile the gaps.
Sorry that everyone keeps tying it back to Red Hat. There has been press about their work, which is probably why everyone thinks you are tied to them.
-
Originally posted by boboviz View Post
An open-source CUDA still exists: OpenCL
While there is nothing wrong with separate-source OpenCL per se, I think history has shown that programmers tend to prefer single-source programming models, since they are easier to get started with. That also makes them easier to teach, which I believe is why CUDA is taught so much more in universities than OpenCL. And the fact that CUDA gets taught everywhere may well have been one of the pillars of its success.
Please also see https://github.com/triSYCL/triSYCL/blob/master/doc/about-sycl.rst for a nice overview on the rationale behind SYCL.
I don't want to repeat everything that's explained there, but let me add that SYCL is far more programmer-friendly than OpenCL and even more programmer-friendly than CUDA since it e.g. automatically migrates data between host and device as needed in a clever way. Basically you get optimizations for free that you need to manually program in a cumbersome way in CUDA (and also OpenCL). SYCL really does more than OpenCL/CUDA.
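To illustrate the single-source point, here is a minimal SYCL sketch (assuming a SYCL 1.2.1 implementation such as ComputeCpp, triSYCL, or hipSYCL is installed): the kernel is an ordinary C++ lambda in the same file as the host code, and the runtime moves data to and from the device based on the buffer accessors, with no explicit copies.

```cpp
// Single-source SYCL vector addition; data migration is automatic.
#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);
    {
        cl::sycl::queue q;                       // picks a default device
        cl::sycl::range<1> n(a.size());
        cl::sycl::buffer<float, 1> ba(a.data(), n);
        cl::sycl::buffer<float, 1> bb(b.data(), n);
        cl::sycl::buffer<float, 1> bc(c.data(), n);
        q.submit([&](cl::sycl::handler &cgh) {
            auto ra = ba.get_access<cl::sycl::access::mode::read>(cgh);
            auto rb = bb.get_access<cl::sycl::access::mode::read>(cgh);
            auto rc = bc.get_access<cl::sycl::access::mode::write>(cgh);
            // The kernel is plain C++ - no separate kernel-language source.
            cgh.parallel_for<class vadd>(n, [=](cl::sycl::id<1> i) {
                rc[i] = ra[i] + rb[i];
            });
        });
    } // buffers go out of scope here: results are written back to c
    std::cout << c[0] << '\n';
    return 0;
}
```

Note that the accessors declare *what* the kernel needs (read a and b, write c); the runtime derives the data movement and dependency graph from that, which is the "optimizations for free" point above.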
Originally posted by Michael View Post
They haven't due to the SVM requirement, but with OpenCL-Next set to make it optional, I am told they will support OpenCL-Next.
In any case, even with NVIDIA supporting OpenCL-Next, their stance on OpenCL is clear: they have, for example, removed OpenCL support from their profilers and tools. NVIDIA employees I have talked with have also told me that they basically only support OpenCL because some customers still have software relying on it, but that they are working hard to convince everybody to switch to CUDA or OpenACC. So I don't think OpenCL on NVIDIA has a promising future. This is not a problem for SYCL, though; see below.
Originally posted by ms178 View Post
On the other hand, I don't get the impression that SYCL has gotten any meaningful traction yet (also AMD seems to have dropped support recently) and I wonder how he wants to change that to be a viable base for his efforts.
There are currently three SYCL implementations:
- Codeplay's (proprietary, but freely available) ComputeCpp implementation, using OpenCL and SPIR/SPIR-V
- triSYCL, an open-source SYCL implementation for CPUs with OpenMP and experimental support for Xilinx FPGAs with OpenCL and SPIR
- hipSYCL, an open-source SYCL implementation on top of AMD HIP/NVIDIA CUDA, running on AMD and NVIDIA GPUs (I'm the developer of this SYCL implementation)
Strictly speaking, the SYCL standard at the moment requires that SYCL be implemented on top of OpenCL, but due to the growing interest in SYCL without OpenCL the current plan for SYCL-next is to officially allow SYCL on top of anything. In that way, people can write applications against SYCL (which will run anywhere), and then use a SYCL implementation that runs on top of whatever their hardware vendor supports best.
When people talk about a lack of success of SYCL, I think many forget how young SYCL actually is. The first SYCL spec (1.2) came out in 2015, while the current SYCL 1.2.1 came out only in July 2018. The first SYCL implementation (ComputeCpp) reached official conformance only in August 2018. Before last August there simply was no non-beta SYCL implementation available!
However, I would agree that SYCL has a marketing problem because at the moment none of the big three AMD/Intel/NVIDIA is advertising SYCL as a primary solution for their hardware.
Originally posted by geearf View Post
Didn't AMD release a while back something that should have been compatible with CUDA? I don't remember the name and if it was linked to HSA.
You probably mean HIP, AMD's CUDA-like API mentioned above. It's quite a nice solution if you want to target both AMD and NVIDIA. For NVIDIA support it only requires CUDA, which is well supported, so it is also pretty future-proof. But as a CUDA clone it also suffers from the same issues as CUDA. CUDA is now ten years old, and some aspects of it are really archaic.
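To show how close the "CUDA clone" really is, here is a minimal HIP sketch (assuming the ROCm/HIP toolchain, or HIP-over-CUDA on NVIDIA, is installed): apart from the hip* prefixes and the launch macro, it mirrors CUDA almost token for token.

```cpp
// HIP vector addition; build with hipcc vadd.cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Kernel syntax is identical to CUDA
__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);
    // 2018-era launch macro; the arguments after the two dim3s are
    // shared-memory size and stream, then the kernel parameters
    hipLaunchKernelGGL(vadd, dim3(n / 256), dim3(256), 0, 0, da, db, dc, n);
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("%f\n", hc[0]);
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

Swap the hip* calls for cuda* calls and the launch macro for the `<<<...>>>` syntax and you have the CUDA version, which is why porting in either direction is largely mechanical.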