Intel MKL-DNN 1.1 Released, Now Branded As The Deep Neural Network Library
Intel's open-source crew has had a busy week with their first public OpenVKL release, OSPray 2 hitting alpha, and now the release of MKL-DNN where they are also re-branding it as the Deep Neural Network Library (DNNL).
The MKL-DNN crew today shipped their version 1.1 release while now calling it the Deep Neural Network Library. MKL-DNN is the interesting Intel deep learning effort we've been benchmarking since earlier this summer, with good results. This performance-oriented library provides the "building blocks for neural networks optimized for Intel IA CPUs and GPUs." MKL-DNN/DNNL is designed to work with PyTorch, TensorFlow, ONNX, Chainer, BigDL, Apache MXNet, and other popular deep learning applications.
With version 1.1 they have added Intel Threading Building Blocks (TBB) support to complement their OpenMP threading, better int8 and FP32 GEMM performance for systems with AVX-512 and VNNI, improved RNN cell performance, and a wide variety of other changes to improve the capabilities of this performance-optimized deep learning library.
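The TBB support is chosen at build time rather than at runtime. As a rough sketch, assuming the DNNL_CPU_RUNTIME CMake option from the 1.x documentation (OMP remains the default) and a TBB install whose location is passed via TBBROOT, building the library against TBB might look like:

```shell
# Sketch only: assumes the mkl-dnn 1.1 sources and an existing TBB
# installation under /opt/intel/tbb (adjust TBBROOT for your system).
git clone https://github.com/intel/mkl-dnn.git
cd mkl-dnn
mkdir build && cd build
# DNNL_CPU_RUNTIME selects the CPU threading runtime (OMP is the default).
cmake .. -DDNNL_CPU_RUNTIME=TBB -DTBBROOT=/opt/intel/tbb
make -j"$(nproc)"
```

Applications built against an OpenMP-threaded copy of the library would need to be pointed at the TBB-enabled build to pick up the new threading backend.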
Developers wishing to learn more about MKL-DNN/DNNL 1.1 can do so via GitHub. Our benchmarking test profile has already been updated for v1.1 and will be running some DNNL 1.1 benchmarks shortly.