Intel MKL-DNN / DNNL 1.3 Released With Cooper Lake Optimizations

Written by Michael Larabel in Intel on 3 April 2020 at 04:41 AM EDT. 1 Comment
Intel on Thursday released version 1.3 of their Deep Neural Network Library (DNNL), formerly known as MKL-DNN, an open-source performance library for deep learning applications.

Notable in DNNL 1.3 are "broad release quality optimizations" for upcoming Intel Xeon "Cooper Lake" processors. The Cooper Lake optimizations land even though it was recently revealed that Intel will only be offering Cooper Lake for quad/octo-socket Xeon Scalable platforms. For those interested in single- or dual-socket platforms, it's Cascade Lake Refresh until Ice Lake Xeon CPUs ship in future quarters. Cooper Lake introduces BFloat16 support, and presumably most of the DNNL 1.3 optimizations center on BF16 capabilities.
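For those unfamiliar with the format: BF16 keeps FP32's 8-bit exponent but trims the mantissa to 7 bits, so a float32 value can be converted (rounding mode aside) simply by dropping its lower 16 bits. A minimal Python sketch of that truncation, illustrative only and not DNNL code:

```python
import struct

def to_bf16(x: float) -> int:
    """Truncate an FP32 value to BF16 by keeping only the upper 16 bits."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> 16

def from_bf16(b: int) -> float:
    """Expand BF16 back to FP32 by zero-filling the lower 16 bits."""
    return struct.unpack('<f', struct.pack('<I', b << 16))[0]

# Same dynamic range as FP32, much coarser precision:
print(from_bf16(to_bf16(3.14159265)))  # 3.140625
```

Since the exponent field is unchanged, the conversion never overflows or underflows the way an FP32-to-FP16 cast can, which is part of BF16's appeal for deep learning training.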

Besides the Cooper Lake optimizations, DNNL 1.3 also brings improved performance of its matrix multiply "matmul" primitive for 3D tensors, improved performance of binary primitives for cases where one of the tensors has to be broadcasted, and better performance of convolution primitives for 3D tensors.
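For context, the 3D-tensor matmul and broadcasted binary cases mentioned above correspond to batched matrix multiplication and elementwise operations where a smaller tensor is stretched across a larger one. A NumPy sketch of the semantics (illustrative of the math, not the DNNL API):

```python
import numpy as np

# Batched matmul over 3D tensors: (batch, M, K) x (batch, K, N) -> (batch, M, N)
a = np.random.rand(4, 2, 3).astype(np.float32)
b = np.random.rand(4, 3, 5).astype(np.float32)
c = np.matmul(a, b)
print(c.shape)  # (4, 2, 5)

# Binary op with broadcasting: a (1, 1, 5) tensor is implicitly
# replicated across the batch and row dimensions of the (4, 2, 5) result.
bias = np.random.rand(1, 1, 5).astype(np.float32)
out = c + bias
print(out.shape)  # (4, 2, 5)
```

In DNNL these patterns map to the matmul and binary primitives respectively; the 1.3 release claims faster implementations of both on CPUs.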

DNNL 1.3 also adds matmul primitive support for Intel processor graphics, along with extensions to existing primitives and other enhancements.

More details on the MKL-DNN/DNNL 1.3 changes are available via GitHub. I'll have some fresh DNNL CPU benchmarks up shortly.