Intel MKL-DNN 1.1 Released, Now Branded As The Deep Neural Network Library

Written by Michael Larabel in Intel on 3 October 2019 at 04:32 PM EDT.
Intel's open-source crew has had a busy week with their first public OpenVKL release, OSPray 2 hitting alpha, and now the release of MKL-DNN 1.1, where they are also re-branding the project as the Deep Neural Network Library (DNNL).

The MKL-DNN crew today issued their version 1.1 release, which now carries the Deep Neural Network Library name. MKL-DNN is the interesting Intel deep learning effort we have been benchmarking since earlier this summer, with good results. This performance-oriented library provides the "building blocks for neural networks optimized for Intel IA CPUs and GPUs." MKL-DNN/DNNL is designed to work with PyTorch, TensorFlow, ONNX, Chainer, BigDL, Apache MXNet, and other popular deep learning frameworks.

With version 1.1 they have added Intel Threading Building Blocks (TBB) support to complement their OpenMP threading, better int8 and FP32 GEMM performance for systems with AVX-512 and VNNI, improved RNN cell performance, and a wide variety of other changes to improve the capabilities of this performance-optimized deep learning library.
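For those wanting to try the new TBB threading path, DNNL selects its CPU runtime at build time via a CMake option. The commands below are a sketch based on the project's documented `DNNL_CPU_RUNTIME` option; the repository URL, tag name, and TBB install path are assumptions you should adjust for your system:

```shell
# Sketch: building DNNL 1.1 with TBB threading instead of the default OpenMP.
# DNNL_CPU_RUNTIME selects the CPU runtime (OMP or TBB); TBBROOT must point
# at an existing Threading Building Blocks installation.
git clone --branch v1.1 https://github.com/intel/mkl-dnn.git
cd mkl-dnn
mkdir build && cd build
cmake .. -DDNNL_CPU_RUNTIME=TBB -DTBBROOT=/opt/intel/tbb
make -j"$(nproc)"
```

With the default OpenMP runtime, none of the TBB-related options are needed; the plain `cmake ..` configuration builds the library as before.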

Developers wishing to learn more about MKL-DNN/DNNL 1.1 can do so via GitHub. Our benchmarking test profile has already been updated for v1.1, and we will be running some DNNL 1.1 benchmarks shortly.