  • CUDA-Python Reaches "GA" With NVIDIA CUDA 11.5 Release, __int128 Preview

    Phoronix: CUDA-Python Reaches "GA" With NVIDIA CUDA 11.5 Release, __int128 Preview

    NVIDIA has made available CUDA 11.5 today as the latest version of their popular but proprietary compute stack/platform. Notable with CUDA 11.5 is that CUDA-Python has reached general availability status...


  • #2
    Python binding for CUDA. This is the first really exciting thing I have heard about in months.



    • #3
      Originally posted by hoohoo
      Python binding for CUDA. This is the first really exciting thing I have heard about in months.
      Huh? It's been around for years.



      • #4
        Originally posted by schmidtbag
        Huh? It's been around for years.
        I use CUDA with C++. Thinking about it, I suppose I have never gone looking for a Python binding, so I thought this was something new.



        • #5
          Originally posted by hoohoo
          I use CUDA with C++. Thinking about it, I suppose I have never gone looking for a Python binding, so I thought this was something new.
          Haha nope. Not only has it been around for a while, but it's also quite polished. If you ever want to get into CUDA quickly, Python is great.
          In fact, Nvidia's efforts with Python are to me a prime example of why CUDA reigns supreme in GPU compute - it's all so well-documented and implemented that anyone with intermediate programming experience can figure it out. They provide lots of clean and helpful examples too.

          The OpenCL implementation, on the other hand, is basically just a wrapper around the C++ plugin, and you're left to "figure it out", because most of the documentation covers how to do things in C++ but not Python. At least, that was the case a few years ago when I first attempted it. Not sure if it has gotten anywhere since then. With the direction Intel and AMD are heading, maybe it has gotten a little better. I'd much rather use OpenCL because it is far more portable, but the documentation is so lacking that I would argue it's easier to just write your whole program in C++.
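          For anyone curious, this is roughly what a trivial CUDA kernel looks like from Python. The sketch below uses Numba's CUDA support (one of the Python routes that has been around for years, not the new official binding; it assumes numba, NumPy, and a CUDA-capable GPU are installed), so treat it as illustrative rather than canonical:

          import numpy as np
          from numba import cuda

          @cuda.jit
          def vector_add(a, b, out):
              i = cuda.grid(1)             # global thread index across the whole grid
              if i < out.size:             # guard threads past the end of the array
                  out[i] = a[i] + b[i]

          n = 1_000_000
          a = np.arange(n, dtype=np.float32)
          b = 2 * a
          out = np.zeros_like(a)

          threads = 256
          blocks = (n + threads - 1) // threads
          vector_add[blocks, threads](a, b, out)   # Numba handles the host/device copies
          print(out[:4])                           # -> [0. 3. 6. 9.]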



          • #6
            Originally posted by schmidtbag
            Haha nope. Not only has it been around for a while, but it's also quite polished. If you ever want to get into CUDA quickly, Python is great.
            In fact, Nvidia's efforts with Python are to me a prime example of why CUDA reigns supreme in GPU compute - it's all so well-documented and implemented that anyone with intermediate programming experience can figure it out. They provide lots of clean and helpful examples too.

            The OpenCL implementation, on the other hand, is basically just a wrapper around the C++ plugin, and you're left to "figure it out", because most of the documentation covers how to do things in C++ but not Python. At least, that was the case a few years ago when I first attempted it. Not sure if it has gotten anywhere since then. With the direction Intel and AMD are heading, maybe it has gotten a little better. I'd much rather use OpenCL because it is far more portable, but the documentation is so lacking that I would argue it's easier to just write your whole program in C++.
            I read through a couple of the Python example programs and it looks quite friendly.
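            For reference, a rough, untested sketch of the new official binding (assuming the cuda-python package from pip) looks something like this; it mirrors the C driver API almost one-to-one, with every call returning a status code first:

            from cuda import cuda

            err, = cuda.cuInit(0)                        # initialize the driver API
            assert err == cuda.CUresult.CUDA_SUCCESS

            err, dev = cuda.cuDeviceGet(0)               # handle to GPU 0
            err, name = cuda.cuDeviceGetName(128, dev)   # device name, returned as bytes
            err, ctx = cuda.cuCtxCreate(0, dev)          # create a context on that GPU

            print(name.decode().rstrip("\x00"))
            cuda.cuCtxDestroy(ctx)                       # clean up the context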
