NVIDIA CUDA 6 Makes GPGPU Programming Simpler
Written by Michael Larabel in NVIDIA on 14 November 2013 at 11:19 AM EST.
NVIDIA rolled out CUDA version 6 this morning, the latest major update to their Compute Unified Device Architecture for GPGPU / parallel programming. With CUDA 6, NVIDIA says it's now simpler to achieve better parallel programming performance on the GPU.

NVIDIA claims developers can accelerate their applications by up to eight times simply by replacing CPU-based libraries with the CUDA-based alternatives. CUDA 6 provides unified memory support for accessing CPU and GPU memory without explicit copies, new drop-in libraries for BLAS and FFTW calculations on the GPU, new multi-GPU scaling support, and various other changes.

More details on NVIDIA CUDA 6 can be found via the NVIDIA Newsroom. The unified memory support makes sense and was expected, considering that a recent NVIDIA Linux driver update introduced a new Unified Kernel Memory module for unifying the memory space between the GPU's video memory and the system's RAM.
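To illustrate what this buys developers: with CUDA 6's unified memory, a single allocation from cudaMallocManaged() is visible to both the CPU and the GPU, so the usual pairs of host/device buffers and cudaMemcpy() calls disappear. Below is a minimal sketch; the increment kernel and the array size are made up for the example, but cudaMallocManaged() and cudaDeviceSynchronize() are the actual CUDA 6 runtime calls.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel for the example: adds 1.0 to each element in place.
__global__ void increment(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += 1.0f;
}

int main()
{
    const int n = 1 << 20;
    float *data;

    // One managed allocation, valid on both the CPU and the GPU --
    // no separate host/device pointers, no explicit cudaMemcpy().
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i)   // initialized directly by the CPU
        data[i] = 0.0f;

    increment<<<(n + 255) / 256, 256>>>(data, n);  // touched by the GPU
    cudaDeviceSynchronize();      // wait before the CPU reads it back

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

The runtime migrates the pages between system RAM and video memory behind the scenes; the code reads like ordinary single-address-space C++, which is the simplification NVIDIA is advertising.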


For those who missed the earlier news items, NVIDIA is dropping 32-bit Linux CUDA support, and we have a lot of new NVIDIA Linux GPU benchmarks coming!
About The Author

Michael Larabel is the principal author of Phoronix.com and founded the web-site in 2004 with a focus on enriching the Linux hardware experience and making it the largest web-site devoted to Linux hardware reviews, particularly for products relevant to Linux gamers and enthusiasts, while also commonly reviewing servers/workstations and embedded Linux devices. Michael has written more than 10,000 articles covering the state of Linux hardware support, Linux performance, graphics hardware drivers, and other topics. He is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated testing software. He can be followed via Twitter or contacted via MichaelLarabel.com.
