Intel oneDNN 2.2 Released With More Optimizations For Alder Lake, Sapphire Rapids
This week Intel's open-source developers released version 2.2 of oneDNN, the deep neural network library that is part of their oneAPI offering; it was previously developed under the names MKL-DNN and the Deep Neural Network Library (DNNL).
The oneDNN library provides the "building blocks" for deep learning applications not only for Intel CPUs/GPUs/XPUs but also for AMD / AArch64 / POWER / s390x processors, with initial support for NVIDIA GPUs. With oneDNN 2.2, more work has gone into this deep learning library in preparation for future Intel products.
The oneDNN 2.2 release has better INT8 compute performance in preparation for Xeon Scalable "Sapphire Rapids", improved compute performance for Intel Alder Lake desktop CPUs with AVX2 and DL-Boost, improved FP32 inner product forward propagation performance for AVX-512 CPUs, and other optimizations. On the ARM front there are even performance improvements for use with the Arm Compute Library along with SVE 512 optimizations.
The oneDNN 2.2 release also adds support for NVIDIA cuDNN 8.x, initial support for the Fujitsu C++ compiler, and various other new features.
More details on the oneDNN 2.2 release via the project's GitHub.
Current Intel oneDNN performance benchmarks on a variety of processors can be found over on OpenBenchmarking.org.