Intel's MKL-DNN/DNNL 2.0 Beta 3 Release Adds SYCL + Data Parallel C++ Compiler
Intel's open-source MKL-DNN Deep Neural Network Library (DNNL), which caters to deep learning applications like TensorFlow, PyTorch, Deeplearning4j, and others, is nearing its version 2.0 release. New with DNNL 2.0 is support for Data Parallel C++, Intel's new language that is part of their oneAPI initiative.
MKL-DNN/DNNL 2.0 Beta 3 was released on Wednesday and to my knowledge is the first public test release of the forthcoming 2.0. Notable with DNNL 2.0 is support for SYCL API extensions and interoperability with SYCL code, the single-source C++-based programming language backed by The Khronos Group that is crucial to Intel's new oneAPI initiative.
Beyond SYCL integration, the 2.0 release also adds Intel DPC++ compiler and runtime support. Data Parallel C++ is Intel's new direct programming language built for oneAPI with support for multiple devices/accelerators in mind.
With the Data Parallel C++ build there is GPU acceleration, but only for Intel hardware at present. With oneAPI being based on Khronos standards and Intel trying to be open about it, support for other vendors' GPUs should technically be possible, but that apparently isn't an effort at this point. "Non-Intel GPUs are not supported. The library API allows to create a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check for GPU devices being non-Intel. To have more control, users can create a DNNL engine passing SYCL device and context explicitly."
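The two engine-creation paths described in that note can be sketched roughly as follows. This is a hedged illustration only, assuming the DPC++ build of DNNL 2.0 Beta with its SYCL interop header; exact header names and the `sycl_interop::make_engine` helper are assumptions based on the oneDNN API of that era and may differ between beta versions.

```cpp
// Sketch: two ways to create a DNNL engine on the DPC++/SYCL build.
// Assumes the oneDNN/DNNL 2.0 beta DPC++ build; header and helper
// names here are illustrative and may vary by release.
#include <CL/sycl.hpp>
#include "dnnl.hpp"
#include "dnnl_sycl.hpp"

int main() {
    // Approach 1: create an engine by index. The device order is
    // determined by the SYCL runtime, and the library performs no
    // check that the GPU at this index is an Intel one.
    dnnl::engine eng_by_index(dnnl::engine::kind::gpu, 0);

    // Approach 2: pass a SYCL device and context explicitly, giving
    // the caller full control over which GPU the engine targets.
    cl::sycl::device dev{cl::sycl::gpu_selector{}};
    cl::sycl::context ctx{dev};
    dnnl::engine eng_explicit
            = dnnl::sycl_interop::make_engine(dev, ctx);
    return 0;
}
```

Explicit device/context construction is what the release notes recommend when a system mixes Intel and non-Intel GPUs, since index-based selection depends entirely on the SYCL runtime's enumeration order.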
MKL-DNN/DNNL 2.0 Beta 3 is available from GitHub. Alongside the existing GCC OpenMP, Intel OpenMP, and Threading Building Blocks (TBB) builds is also now the DPC++ builds with Intel GPU acceleration support.