GPUCC: Google's Open-Source CUDA Compiler


  • GPUCC: Google's Open-Source CUDA Compiler

    Phoronix: GPUCC: Google's Open-Source CUDA Compiler

    Last month I wrote about Google's work on CUDA compiler optimizations in LLVM, where they claimed their open-source compiler was generating better code than NVIDIA's own NVCC. More details are now available...

    http://www.phoronix.com/scan.php?pag...UDA-GPGPU-Comp

  • #2
    not every google engineer is smart enough to use open apis

    Comment


    • #3
      Originally posted by pal666 View Post
      not every google engineer is smart enough to use open apis
      What do you mean exactly?

      "Not every X is smart enough to use Y" is trivially true in general; people always differ in what they can do.

      Comment


      • #4
        Funny how people continue to use CUDA in the face of open standards.

        Shows you who really cares about contributing to the betterment of the free world.

        Comment


        • #5
          Funny how open source projects like Caffe, Torch, etc. continue to use CUDA in the face of open standards. Must mean that the open standards SUCK! If OpenCL wants to best CUDA, it had better offer easy, accessible C/C++ language extensions like CUDA's. It had also better support most C/C++ features on compute devices, and provide plenty of support libraries for random number generation, numerical linear algebra, image processing, and convolutional neural networks. Until then, everyone will continue to use CUDA.
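
          For reference, a minimal sketch of the CUDA C++ language extensions the comment is praising: the `__global__` kernel qualifier, built-in thread indices, and the `<<<grid, block>>>` launch syntax. The `saxpy` kernel below is a standard illustrative example, not code from GPUCC or any project mentioned in the thread.

          ```cuda
          // Sketch of CUDA's C/C++ extensions: single-source host + device code.
          #include <cstdio>
          #include <cuda_runtime.h>

          // __global__ marks a kernel callable from the host, running on the GPU.
          __global__ void saxpy(int n, float a, const float *x, float *y) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
              if (i < n) y[i] = a * x[i] + y[i];
          }

          int main() {
              const int n = 1 << 20;
              float *x, *y;
              // Unified memory: accessible from both CPU and GPU, no explicit copies.
              cudaMallocManaged(&x, n * sizeof(float));
              cudaMallocManaged(&y, n * sizeof(float));
              for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

              // Launch a grid of 256-thread blocks via the <<<...>>> syntax.
              saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
              cudaDeviceSynchronize();

              printf("y[0] = %f\n", y[0]);
              cudaFree(x);
              cudaFree(y);
              return 0;
          }
          ```

          OpenCL 1.x expresses the same computation with a separate kernel string, explicit context/queue/buffer setup, and manual argument binding, which is a large part of the ergonomic gap the comment describes.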

          Comment


          • #6
            Originally posted by fuzz View Post
            Funny how people continue to use CUDA in the face of open standards.

            Shows you who really cares about contributing to the betterment of the free world.
            It's an annoying circular problem: nobody, or almost nobody (I mean enterprises), does GPGPU on AMD, for several reasons, starting with the years-long lack of high-density dedicated server hardware (something like Tesla), horribly unstable OpenCL support (it has improved a lot in recent years, though), etc.

            Basically this made NVIDIA Quadro+Tesla cards everyone's default option, and by that time CUDA was already stable and efficient for GPGPU, so for a long time these companies have built their entire ecosystems on it. To be honest, OpenCL doesn't provide enough wow factor to cost-effectively justify moving all this working, tested, trustworthy code to OpenCL just yet.

            If anything could trigger this migration, it would be OpenCL 2.1, with its SPIR-V and Vulkan integration providing a feature set rich enough to justify the change. That assumes, of course, that NVIDIA doesn't release a SPIR-V version of CUDA; in that case, forget about it.

            Comment


            • #7
              Originally posted by jrch2k8 View Post

              It's an annoying circular problem: nobody, or almost nobody (I mean enterprises), does GPGPU on AMD, for several reasons, starting with the years-long lack of high-density dedicated server hardware (something like Tesla), horribly unstable OpenCL support (it has improved a lot in recent years, though), etc.

              Basically this made NVIDIA Quadro+Tesla cards everyone's default option, and by that time CUDA was already stable and efficient for GPGPU, so for a long time these companies have built their entire ecosystems on it. To be honest, OpenCL doesn't provide enough wow factor to cost-effectively justify moving all this working, tested, trustworthy code to OpenCL just yet.

              If anything could trigger this migration, it would be OpenCL 2.1, with its SPIR-V and Vulkan integration providing a feature set rich enough to justify the change. That assumes, of course, that NVIDIA doesn't release a SPIR-V version of CUDA; in that case, forget about it.
              Actually, I'm not so sure that OpenCL will even be required, given that both LLVM and .NET will be able to convert their IL to SPIR-V. As a result, it's not unlikely that language constructs like async/await may in the future replace the concept of GPGPU compute libraries.

              Comment


              • #8
                So is this not actually available yet? No git repo or anything to try it out right now?

                Comment


                • #9
                  Originally posted by jrch2k8 View Post
                  If anything could trigger this migration, it would be OpenCL 2.1, with its SPIR-V and Vulkan integration providing a feature set rich enough to justify the change. That assumes, of course, that NVIDIA doesn't release a SPIR-V version of CUDA; in that case, forget about it.
                  Did you notice the SPIR-V specification lists compatibility with OpenCL 1.2? Because we all know a certain green monster who refuses to update...

                  Comment
