David Airlie's LPC2018 Presentation On An "Open-Source CUDA"


  • #21
    Originally posted by GunpowaderGuy:
    Michael, do you have the article at hand? I already forgot the reasoning given for that decision, or how final it was.
    Don't have the link offhand as I'm busy with the RX 590, but I remember what I heard myself from Khronos: it was a clear decision.
    Michael Larabel
    http://www.michaellarabel.com/

    • #22
      We get asked all the time by dev teams which API they should use, and we always say OpenCL. It's completely portable, avoids vendor lock-in, and has a wealth of history and a public track record. They can unit test on any laptop, desktop, server, container, grid, AWS, Azure, or whatever compute is available or cheaper.

      Red Hat is getting twitchy because NVidia announced their own GPU-based container architecture earlier this year with full CUDA support. Red Hat would like to believe that they own or drive container standards.

      Also, there is a movement inside several corporations to use NVidia for more machine learning in the datacenter. Since Red Hat is so prevalent there, it is logical that they would want access to the GPU in a less proprietary way.

      Hence "open source" CUDA.

      I have tested remote CUDA over InfiniBand and it is quite powerful. With 40GbE in the datacenter, remote CUDA can be exploited further.

      • #23
        Didn't AMD release something a while back that was supposed to be compatible with CUDA? I don't remember the name, or whether it was linked to HSA.

        • #24
          Originally posted by edwaleni:
          We get asked all the time by dev teams which API they should use, and we always say OpenCL. It's completely portable, avoids vendor lock-in, and has a wealth of history and a public track record. They can unit test on any laptop, desktop, server, container, grid, AWS, Azure, or whatever compute is available or cheaper.

          Red Hat is getting twitchy because NVidia announced their own GPU-based container architecture earlier this year with full CUDA support. Red Hat would like to believe that they own or drive container standards.

          Also, there is a movement inside several corporations to use NVidia for more machine learning in the datacenter. Since Red Hat is so prevalent there, it is logical that they would want access to the GPU in a less proprietary way.

          Hence "open source" CUDA.

          I have tested remote CUDA over InfiniBand and it is quite powerful. With 40GbE in the datacenter, remote CUDA can be exploited further.
          This isn't a Red Hat-led thing; I'm not sure I can say that enough times in print or in the video.

          The talk was just me asking about what we can do upstream for Linux, not for Red Hat.

          Dave.

          • #25
            Originally posted by wizard69:

            I'm not sure I would imply that big of a difference, but even if your description holds, it doesn't make sense to start all over. Instead, build on OpenCL! Get the community that is using OpenCL now behind you and you might get some real progress, maybe even some help!

            Now, I could be missing something completely here, but at this point another GPU compute project is not needed. At least we don't need something that ignores current infrastructure.
            I think you missed the bit where you watched the video.

            • #26
              Originally posted by airlied:

              This isn't a Red Hat-led thing; I'm not sure I can say that enough times in print or in the video.

              The talk was just me asking about what we can do upstream for Linux, not for Red Hat.

              Dave.
              My remarks were just open-ended commentary on the state of things, not an attempt to draw a direct connection between your work and Red Hat.

              Since I had read about Red Hat's prior efforts to accommodate general-purpose compute through Nouveau and their chats with NVidia, I saw your effort as another attempt to reconcile the gaps.

              Sorry that everyone keeps tying it back to Red Hat. There has been press out there on their work, which is probably why everyone thinks you are tied to them.

              • #27
                Originally posted by boboviz:
                An open-source CUDA already exists: OpenCL
                Not really. While they solve somewhat similar problems, their approaches are very different. OpenCL is a separate-source programming model: host and device code are strictly separated and cannot share common code. CUDA is a single-source programming model: host and device code live in the same file, and code can be shared between them. This allows, for example, templated kernels, which are highly desirable for any kind of GPU-based library. SYCL, like CUDA, is a single-source programming model with these benefits.

                While there is nothing wrong with separate-source OpenCL per se, I think history has shown that programmers prefer single-source programming models, since they are easier to get started with. This also makes them easier to teach, which I believe is why CUDA is taught so much more in universities than OpenCL. And the fact that CUDA is taught everywhere may well have been one of the pillars of its success.

                Please also see https://github.com/triSYCL/triSYCL/blob/master/doc/about-sycl.rst for a nice overview on the rationale behind SYCL.

                I don't want to repeat everything that's explained there, but let me add that SYCL is far more programmer-friendly than OpenCL, and even more programmer-friendly than CUDA, since it, for example, automatically migrates data between host and device as needed in a clever way. Basically, you get for free optimizations that you would have to program manually, in a cumbersome way, in CUDA (and also OpenCL). SYCL really does more than OpenCL/CUDA.

                Originally posted by Michael:

                They haven't due to the SVM requirement, but with OpenCL-Next set to make it optional, I am told they will support OpenCL-Next.
                I've heard that too, but I've never understood why SVM should be a problem for NVIDIA. SVM is basically the same thing as CUDA unified memory, which has been in CUDA for several years now. Implementing OpenCL SVM on top of CUDA unified memory should be trivial for NVIDIA. Does anybody know more about that?
                In any case, even with NVIDIA supporting OpenCL-Next, their stance on OpenCL is clear: they have, for example, removed OpenCL support from their profilers and tools. NVIDIA employees I have talked with have also told me that they basically only support OpenCL because they still have some customers with software relying on it, but they are working hard to convince everybody to switch to CUDA or OpenACC. So I don't think OpenCL on NVIDIA has a promising future. This is not a problem for SYCL, though; see below.

                Originally posted by ms178:
                On the other hand, I don't get the impression that SYCL has gotten any meaningful traction yet (also AMD seems to have dropped support recently) and I wonder how he wants to change that to be a viable base for his efforts.
                AMD has NOT dropped support for SYCL itself. AMD has recently removed support for the OpenCL SPIR extension. SYCL, as per the SYCL standard, does not necessarily have to use SPIR. However, Codeplay's ComputeCpp SYCL implementation (currently the most mature one) requires SPIR or SPIR-V support. There are several SYCL implementations currently available, not all of which require SPIR:
                • Codeplay's (proprietary, but freely available) ComputeCpp implementation, using OpenCL and SPIR/SPIR-V
                • triSYCL, open-source SYCL implementation for CPUs with OpenMP and experimental support for Xilinx FPGAs with OpenCL and SPIR
                • hipSYCL, open-source SYCL implementation on top of AMD HIP/NVIDIA CUDA, running on AMD and NVIDIA GPUs (I'm the developer of this SYCL implementation)
                AMD's decision only affects ComputeCpp; with e.g. hipSYCL, SYCL can be executed on AMD and NVIDIA GPUs no matter what their stance on OpenCL is. This means that SYCL is in fact more future-proof as far as company-policy decisions are concerned, since it can run on many different runtimes (not only OpenCL, but also OpenMP and HIP/CUDA).

                Strictly speaking, the SYCL standard at the moment requires that SYCL be implemented on top of OpenCL, but due to the growing interest in SYCL without OpenCL the current plan for SYCL-next is to officially allow SYCL on top of anything. In that way, people can write applications against SYCL (which will run anywhere), and then use a SYCL implementation that runs on top of whatever their hardware vendor supports best.

                When people talk about a lack of success of SYCL, I think many forget how young SYCL actually is. The first SYCL spec (1.2) came out in 2015, while the current SYCL 1.2.1 came out only in July 2018. The first SYCL implementation (ComputeCpp) reached official conformance only in August 2018. Before last August there simply was no non-beta SYCL implementation available!

                However, I would agree that SYCL has a marketing problem, because at the moment none of the big three (AMD/Intel/NVIDIA) is advertising SYCL as a primary solution for their hardware.

                Originally posted by geearf:
                Didn't AMD release something a while back that was supposed to be compatible with CUDA? I don't remember the name, or whether it was linked to HSA.
                You probably mean AMD HIP which is part of the ROCm platform:
                https://github.com/ROCm-Developer-Tools/HIP
                It's quite a nice solution if you want to target both AMD and NVIDIA. For NVIDIA support it only requires CUDA, which is well supported, so it is also pretty future-proof. But as a CUDA clone, it also suffers from the same issues as CUDA, which is now ten years old; some aspects of it are really archaic.
