From an email that Kato wrote me a few days back: "My research project has developed a fully open-source set of the device driver and runtime library for CUDA. We still need NVIDIA's compiler (nvcc) to compile programs (Gallium3D is not yet ready for CUDA/OpenCL), but nothing else is required. In fact, NVIDIA has released an open-source compiler, hence all pieces of the CUDA software stack are now open source!"
He's published Gdev on GitHub and will be presenting the project at USENIX ATC 2012. He additionally shares with us, "There are many nice features included in the project. For instance, you can run CUDA programs directly in the operating system, and enhance the performance of file encryption and RAID5/6 parity encoding/decoding. You can also virtualize one GPU into several, so a time-sharing server can now share the GPU among multiple users. POSIX-style IPC (shared memory) is also enabled for multiple GPU contexts, which eliminates the overhead of moving data back and forth between the host and device. We have lots of extensions planned as well, but they will appear in other academic publications later this year."
Here are some other details he shared about his Gdev project:
- Gdev is composed of a kernel module and several runtime libraries (a low-level Gdev runtime and a high-level CUDA runtime).
- Gdev is independent of the device driver; it is a first-class GPU resource management component.
- Gdev is available with both the Nouveau and PSCNV open-source drivers.
- Gdev has two versions of its runtime: one runs in the Linux kernel, while the other runs in user space. This means that exactly the same CUDA program can run both in the Linux kernel and in user space! Specifically, Gdev contains "kcuda.ko", a kernel module that exports CUDA API functions to the Linux kernel. This runtime-unified OS approach is one of Gdev's notable features.
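To make the runtime-unified idea concrete, here is a minimal, illustrative sketch of a standard CUDA Driver API sequence; under Gdev's design, the same calls could be satisfied by the user-space library or by kcuda.ko's in-kernel exports. The module name "vecadd.cubin" and kernel name "vecadd" are hypothetical, and error handling is omitted; this is not runnable as-is without a GPU and a compiled module.

```
#include <cuda.h>   /* CUDA Driver API */

CUdevice   dev;
CUcontext  ctx;
CUmodule   mod;
CUfunction fn;

cuInit(0);
cuDeviceGet(&dev, 0);
cuCtxCreate(&ctx, 0, dev);
cuModuleLoad(&mod, "vecadd.cubin");        /* hypothetical module */
cuModuleGetFunction(&fn, mod, "vecadd");   /* hypothetical kernel  */
/* ... cuMemAlloc / cuMemcpyHtoD / cuLaunchKernel / cuMemcpyDtoH ... */
cuCtxDestroy(ctx);
```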
- Since Gdev can host the GPGPU runtime in the Linux kernel, it can prevent user-space programs from managing GPU resources directly by poking ioctl() commands. Under Gdev, GPU resource usage can be enforced by the Linux kernel, making it more dependable and secure.
- The absolute performance of Gdev is competitive with NVIDIA's proprietary software.
- Gdev provides some extensions to CUDA. A notable one is shared device memory support: an additional set of CUDA Driver API functions, cuShmGet(), cuShmAt(), cuShmDt(), and cuShmCtl(), which play a similar role to POSIX shmget(), shmat(), shmdt(), and shmctl(). This shared memory functionality could be very powerful in multi-tasking environments.
- Gdev also supports direct data transfer between an I/O device and the GPU. An additional pair of CUDA Driver API functions, cuMemMap() and cuMemUnmap(), maps and unmaps device memory to and from a user buffer, and cuMemGetPhysAddr() returns the physical bus address directly accessible to the I/O device. All the I/O device has to do is set up its DMA engine to send data to the obtained bus address; the data is then transferred directly to device memory.
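Based on that description, the flow might look like the sketch below. This is illustrative only: the argument order of the Gdev extension calls is a guess from the prose, my_nic_dma_to() stands in for whatever driver routine programs the I/O device's DMA engine, and error handling is omitted.

```
CUdeviceptr        dptr;       /* GPU device memory */
void              *user_buf;   /* host-visible mapping of it */
unsigned long long phys;       /* bus address for the I/O device */

cuMemAlloc(&dptr, size);
cuMemMap(&user_buf, dptr, size);    /* Gdev extension: map device memory */
cuMemGetPhysAddr(&phys, user_buf);  /* Gdev extension: get bus address    */
my_nic_dma_to(phys, size);          /* hypothetical: device DMAs straight
                                       into GPU memory, bypassing a host
                                       bounce buffer                      */
cuMemUnmap(user_buf);
```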
- Gdev can virtualize the GPU into multiple instances. If one user group wants its GPU workload isolated from other groups, a virtualized GPU is very useful.
- Gdev has lots of additional fancy scheduling and memory management techniques. Interested users are encouraged to read the Gdev paper, which will appear at USENIX ATC 2012.
- Gdev does not fully support CUDA yet, and it doesn't support OpenCL either. However, most Rodinia CUDA benchmark programs can run on top of Gdev with its limited set of CUDA functions. We plan to add more support for CUDA and OpenCL.
- Gdev provides abstracted context objects, memory objects, address space objects, etc. for the GPU, which means Gdev could be ported to other GPU architectures. In fact, the code is nicely separated into architecture-dependent and architecture-independent parts. Porting Gdev to Nouveau, for instance, required only about 400 lines of code.
- This open-source implementation of Gdev will particularly facilitate research on GPGPU!
This open-source project sounds extremely interesting, and I look forward to seeing its progress and, hopefully, its adoption. More information can be found on its GitHub page.