Heterogeneous Memory Management v19 Published, Will It Be In Linux 4.12?
Jerome Glisse has published his latest massive patch-set for supporting Heterogeneous Memory Management within the mainline Linux kernel.
HMM v19, the newest version of this long-standing work, has more code fixes, some improved code comments, and a variety of other internal code changes. As explained previously, HMM has been a multi-year effort to allow device memory to be transparently used by any device process and to allow mirroring of a process address space on a device. This has big implications for modern graphics hardware and other devices like FPGAs. NVIDIA has been backing the HMM Linux efforts for its binary graphics driver and has also been working on support for the open-source driver.
Jerome commented more in the v19 patches:
This feature will be used by upstream drivers like nouveau and mlx5, and probably others in the future (amdgpu is the next suspect in line). We are actively working on nouveau and mlx5 support. To test this patchset we also worked with NVIDIA's closed-source driver team; they have more resources than us to test this kind of infrastructure and also a bigger and better userspace eco-system with various real industry workloads that can be used to test and profile HMM.

The expected workload is a program that builds a data set on the CPU (from disk, from network, from sensors, ...). The program uses a GPU API (OpenCL, CUDA, ...) to give hints on memory placement for the input data and also for the output buffer. The program then calls the GPU API to schedule a GPU job; this happens using a device-driver-specific ioctl. All of this is hidden from the programmer's point of view in the case of a C++ compiler that transparently offloads parts of a program to the GPU. The program can keep doing other work on the CPU while the GPU is crunching numbers.

More details are available via the patch series.
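For readers unfamiliar with the workload pattern Jerome describes, it maps roughly onto the CUDA managed-memory style of programming shown in the sketch below. This is purely an illustration of the user-space side of that flow (allocate on CPU, hint placement, launch, overlap CPU work), not code from the patch series; the kernel, sizes, and device index are arbitrary.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Trivial GPU job standing in for "the GPU crunching numbers".
__global__ void scale(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 20;
    float *buf;

    // 1. Build the data set on the CPU. With managed (unified) memory,
    //    the same pointer is valid on both CPU and GPU.
    cudaMallocManaged(&buf, n * sizeof(float));
    for (int i = 0; i < n; ++i)
        buf[i] = (float)i;

    // 2. Give the driver a hint on memory placement, as the quote
    //    describes (device 0 is assumed here).
    cudaMemAdvise(buf, n * sizeof(float),
                  cudaMemAdviseSetPreferredLocation, 0);

    // 3. Schedule the GPU job. Under the hood this goes through a
    //    device-driver-specific ioctl; pages migrate on demand.
    scale<<<(n + 255) / 256, 256>>>(buf, n);

    // 4. The CPU could keep doing other work here while the GPU runs;
    //    we simply wait for completion before reading results back.
    cudaDeviceSynchronize();
    printf("buf[1] = %f\n", buf[1]);

    cudaFree(buf);
    return 0;
}
```

What HMM adds underneath a flow like this is the ability for the device driver to mirror the process address space and migrate pages transparently, rather than requiring special allocators or explicit copies.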
Jerome has previously expressed interest in getting HMM into Linux 4.12. We'll see if that happens in the weeks ahead.