Red Hat Developers Working Towards A Vendor-Neutral Compute Stack To Take On NVIDIA's CUDA



    Phoronix: Red Hat Developers Working Towards A Vendor-Neutral Compute Stack To Take On NVIDIA's CUDA

    It's becoming more clear why Red Hat hired a Nouveau developer to work on SPIR-V/compute support for the open-source NVIDIA Linux driver even when that reverse-engineered driver's performance is very poor due to re-clocking / power management limitations for Maxwell and beyond. This appears to be part of a broader compute effort in pursuing a vendor-neutral compute stack across Intel, Radeon, and NVIDIA GPU platforms that could potentially take on NVIDIA's CUDA dominance...

    http://www.phoronix.com/scan.php?pag...-Compute-Stack

  • #2
    Why? Why not a CUDA to OpenCL/Vulkan translator instead?



    • #3
      There is only a "small" issue: Nvidia performance sucks and will always suck because of the signed firmware. How do they plan to circumvent it?



      • #4
        tildearrow - Too many intermediate layers are bad for performance. And then you're limited to the CUDA API's capabilities, and dependent on NVIDIA. Better to use a vendor-neutral API like OpenCL. Hopefully Red Hat is working with the people implementing OpenCL on Vulkan.

        Of course, it's still a good idea to build a CUDA-to-OpenCL library (or even a CUDA-over-Vulkan library), because a lot of software (especially closed-source) uses CUDA, which is hardware-accelerated only on NVIDIA GPUs.



        • #5
          Originally posted by tildearrow View Post
          Why? Why not a CUDA to OpenCL/Vulkan translator instead?
          To win the hearts and minds of developers you need a really good dev environment that makes their life easy. CUDA has this currently - if Red Hat wants to win hearts and minds, they have to go this route. It is a noble cause and I hope that they can pull it off.



          • #6
            Oh, this is really fantastic. I always thought the fragmented situation with Beignet, Clover and ROCm was ridiculous.



            • #7
              Their approach is too complicated; I don't think taking a lot of related tools, APIs and solutions and gluing them together will form a solution capable of out-competing CUDA. It will be an over-complicated clusterfuck. I'm probably not the target audience for this solution, but if I ever need a compute solution I'll pick something as simple as possible; that's rule #1, so it's gonna be either OpenMP or Vulkan's compute capabilities.
              Last edited by cl333r; 11-17-2018, 12:47 PM.



              • #8
                Originally posted by cl333r View Post
                Their approach is too complicated; I don't think taking a lot of related tools, APIs and solutions and gluing them together will form a solution capable of out-competing CUDA. It will be an over-complicated clusterfuck. I'm probably not the target audience for this solution, but if I ever need a compute solution I'll pick something as simple as possible; that's rule #1, so it's gonna be either OpenMP or Vulkan's compute capabilities.
                Just a nitpick, I guess - you're not wrong, I'm just saying: isn't that exactly what open source collaboration is all about? Look at what the Linux kernel -is-, look at what GCC is, or LLVM, or Mesa, or X.Org. I mean, all of them are clusterfucks. But they are clusterfucks of purpose-driven functionality, and together they provide the most complete sets of functionality available.



                • #9
                  The stack sounds great, with one _major_ exception: SYCL. There is over 10 years of successful development based on CUDA. CUDA is a proven programming model - it extends C++ in a minimal fashion and there is a ton of code already written in it. Please don't bother with SYCL; it has already failed to achieve anything significant, and is doomed to fail like OpenCL before it. Instead, re-target the CUDA support that is already implemented in Clang to the proposed stack, circumventing nvptx/ptxas, and this will be a massive success.



                  • #10
                    Originally posted by cl333r View Post
                    Their approach is too complicated; I don't think taking a lot of related tools, APIs and solutions and gluing them together will form a solution capable of out-competing CUDA. It will be an over-complicated clusterfuck. I'm probably not the target audience for this solution, but if I ever need a compute solution I'll pick something as simple as possible; that's rule #1, so it's gonna be either OpenMP or Vulkan's compute capabilities.
                    They're basically saying they'll work on OpenCL support, and see if they can implement OpenCL on Vulkan to reduce the amount of work they need to do for newer GPUs. So... nothing complicated about that.

                    In fact, all of the other compute stuff is more complicated and fragmented (ROCm, that Microsoft thing for GPU compute, and that GCC offloading backend that targets NVIDIA GPUs only).

