Intel oneDNN 2.2 Released With More Optimizations For Alder Lake, Sapphire Rapids
Written by Michael Larabel in Intel on 3 April 2021 at 12:00 AM EDT.
This week Intel's open-source developers released version 2.2 of oneDNN, the deep neural network library that is part of the company's oneAPI offering and was previously developed under the names MKL-DNN and Deep Neural Network Library (DNNL).

The oneDNN library provides the "building blocks" for deep learning applications not only for Intel CPUs/GPUs/XPUs but also for AMD / AArch64 / POWER / s390x processors and initial support for NVIDIA GPUs. With oneDNN 2.2, more work has gone into this deep learning library in preparing for future Intel products.

The oneDNN 2.2 release has better INT8 compute performance in preparation for Xeon Scalable "Sapphire Rapids", improved compute performance for Intel Alder Lake desktop CPUs with AVX2 and DL Boost, improved FP32 inner product forward propagation performance for AVX-512 CPUs, and other optimizations. On the Arm front there are performance improvements for use with the Arm Compute Library along with SVE 512 optimizations.

The oneDNN 2.2 release also adds support for NVIDIA cuDNN 8.x, initial support for the Fujitsu C++ compiler, and a variety of other new features.

More details on the oneDNN 2.2 release via the project's GitHub.

Current Intel oneDNN performance benchmarks on a variety of processors can be found over on OpenBenchmarking.org.
About The Author

Michael Larabel is the principal author of Phoronix.com and founded the site in 2004 with a focus on enriching the Linux hardware experience. Michael has written more than 20,000 articles covering the state of Linux hardware support, Linux performance, graphics drivers, and other topics. Michael is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated benchmarking software. He can be followed via Twitter or contacted via MichaelLarabel.com.
