Google Continues Working On CUDA Compiler Optimizations In LLVM
While it will offend some that Google continues investing in NVIDIA's proprietary CUDA GPGPU language rather than an open standard like OpenCL, Google engineers continue making progress on a speedy, open-source CUDA compiler built atop LLVM.
Jingyue Wu of Google will be speaking at this month's 2015 LLVM Developer Meeting about optimizing LLVM for GPGPU workloads. His talk abstract explains that the work is primarily focused on CUDA. It reads, "This talk presents Google’s effort of optimizing LLVM for CUDA. When we started this effort, LLVM was well-tuned for CPUs but there had been little public work on improving its GPU performance. We developed, tuned, and augmented several general and CUDA-specific optimization passes. As a result, our LLVM-based compiler generates better code than nvcc on key end-to-end internal benchmarks and is on par with nvcc on a variety of open-source benchmarks."
It's interesting that their internal CUDA codes are now outperforming NVIDIA's own nvcc compiler. Earlier this year I wrote about Google developers working on CUDA changes in LLVM. Hopefully the talk will shed more light on their ultimate CUDA/LLVM plans. While CUDA continues to dominate the HPC field, hopefully OpenCL+SPIR-V will gain more traction moving forward as an open standard for high-performance GPGPU computing.
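For those curious what kind of source both compilers consume, here is a minimal, illustrative CUDA program; it is not one of Google's benchmarks, and the file name axpy.cu and the architecture flag below are assumptions for the sake of the example rather than details from the talk.

```cuda
// axpy.cu -- a minimal, illustrative CUDA program (not one of the
// internal benchmarks mentioned in the talk). Computes y = a * x.
#include <cstdio>

__global__ void axpy(float a, const float *x, float *y) {
    // One thread per element; threadIdx.x indexes into the arrays.
    y[threadIdx.x] = a * x[threadIdx.x];
}

int main() {
    const int n = 4;
    float host_x[n] = {1.0f, 2.0f, 3.0f, 4.0f};
    float host_y[n];

    // Allocate device buffers and copy the input over.
    float *device_x, *device_y;
    cudaMalloc((void **)&device_x, n * sizeof(float));
    cudaMalloc((void **)&device_y, n * sizeof(float));
    cudaMemcpy(device_x, host_x, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one block of n threads.
    axpy<<<1, n>>>(2.0f, device_x, device_y);

    // Copy the result back and print it.
    cudaMemcpy(host_y, device_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("y[%d] = %g\n", i, host_y[i]);

    cudaFree(device_x);
    cudaFree(device_y);
    return 0;
}
```

Assuming a Clang build with the CUDA support this Google work fed into, such a file can be compiled along the lines of `clang++ --cuda-gpu-arch=sm_35 axpy.cu -o axpy -L/usr/local/cuda/lib64 -lcudart`, versus `nvcc axpy.cu -o axpy` with NVIDIA's toolchain; in both cases the compiler emits host code plus GPU code for the target architecture, which is where the optimization passes described in the abstract come into play.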
This year's LLVM meeting will also feature separate talks on OpenMP GPU acceleration with LLVM, on LLVM as a compilation framework for graphics processors (led in part by AMD's Tom Stellard), and on other GPU-related compiler topics. The meeting takes place on 29 and 30 October in San Jose.