Intel MKL-DNN/DNNL 1.2 Released With Performance Improvements For Deep Learning On CPUs
Intel on Friday released Deep Neural Network Library (DNNL) version 1.2, formerly known as MKL-DNN. This release brings both new features and better performance.
On the performance front, DNNL 1.2 delivers better int8 inference on pre-AVX512 hardware, boosts int8 inference for 3D spatial data on all CPUs, and adds int8 inference support on GPUs. The release also improves performance for 1D backward convolutions.
Intel DNNL 1.2 also introduces a general-purpose matrix-matrix multiplication (matmul) primitive along with a variety of other new primitives.
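For those curious what using the new matmul primitive looks like, below is a minimal sketch using DNNL's C++ API (dnnl.hpp). The matrix sizes and f32 data type are illustrative assumptions, not anything specific to Intel's release notes:

```cpp
// Minimal sketch: C = A * B with the DNNL 1.2 matmul primitive.
// Sizes M/K/N and f32 data are illustrative assumptions.
#include <vector>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    const memory::dim M = 64, K = 128, N = 32;

    // Plain row-major f32 descriptors for A (MxK), B (KxN), C (MxN).
    memory::desc a_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc b_md({K, N}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc c_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

    std::vector<float> a_data(M * K, 1.0f), b_data(K * N, 1.0f), c_data(M * N);
    memory a_mem(a_md, eng, a_data.data());
    memory b_mem(b_md, eng, b_data.data());
    memory c_mem(c_md, eng, c_data.data());

    // Create the matmul operation descriptor, primitive descriptor,
    // then execute the primitive on the CPU stream.
    matmul::desc md(a_md, b_md, c_md);
    matmul::primitive_desc pd(md, eng);
    matmul(pd).execute(strm, {{DNNL_ARG_SRC, a_mem},
                              {DNNL_ARG_WEIGHTS, b_mem},
                              {DNNL_ARG_DST, c_mem}});
    strm.wait();
    return 0;
}
```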
Downloads and more details on Deep Neural Network Library 1.2 are available via GitHub. Fresh DNNL 1.2 benchmarks are coming up soon on Phoronix.