Intel oneDNN 2.0 Deep Neural Network Library Working On More Performance Tuning

Written by Michael Larabel in Intel on 3 July 2020 at 12:07 AM EDT.
Intel's open-source oneDNN library, the deep neural network library formerly known as MKL-DNN and DNNL and now living under the oneAPI umbrella, continues to work on some big performance advancements for its 2.0 release.

Intel on Thursday released oneDNN 2.0 Beta 7, which brings more Intel CPU performance optimizations around convolutional neural networks, binary primitive performance for the broadcast case, BFloat16 and FP32 weight gradient convolutions, INT8 convolutions with 1x1 kernels and spatial strides, and a variety of other specific areas within this deep learning library.
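
For those unfamiliar with the library, these CPU improvements land underneath oneDNN's primitive API rather than changing it. Below is a minimal, hypothetical sketch of setting up one of the cases called out in this beta, an FP32 convolution with a 1x1 kernel and spatial strides, using the oneDNN 2.x C++ API; the tensor shapes are made-up illustration values rather than anything taken from the release notes.

    #include <dnnl.hpp>
    using namespace dnnl;

    int main() {
        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        // Illustrative shapes only: NCHW input, 64 output channels, 1x1 kernel, stride 2.
        memory::dims src_dims = {1, 3, 224, 224};
        memory::dims wei_dims = {64, 3, 1, 1};
        memory::dims dst_dims = {1, 64, 112, 112};
        memory::dims strides = {2, 2}, padding = {0, 0};

        // format_tag::any lets oneDNN choose the memory layout its kernels prefer.
        memory::desc src_md(src_dims, memory::data_type::f32, memory::format_tag::any);
        memory::desc wei_md(wei_dims, memory::data_type::f32, memory::format_tag::any);
        memory::desc dst_md(dst_dims, memory::data_type::f32, memory::format_tag::any);

        // oneDNN 2.x flow: operation descriptor -> primitive descriptor -> primitive.
        convolution_forward::desc conv_d(prop_kind::forward_inference,
                algorithm::convolution_direct, src_md, wei_md, dst_md,
                strides, padding, padding);
        convolution_forward::primitive_desc conv_pd(conv_d, eng);
        convolution_forward conv(conv_pd);

        // Memory objects use whatever concrete layout the implementation selected.
        memory src_mem(conv_pd.src_desc(), eng);
        memory wei_mem(conv_pd.weights_desc(), eng);
        memory dst_mem(conv_pd.dst_desc(), eng);

        conv.execute(strm, {{DNNL_ARG_SRC, src_mem},
                            {DNNL_ARG_WEIGHTS, wei_mem},
                            {DNNL_ARG_DST, dst_mem}});
        strm.wait();
        return 0;
    }

Leaving the memory format tags as "any" lets the library pick whichever blocked layout its JIT-generated kernels prefer, which is where CPU-side optimizations like these typically take effect.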

This is also the first release to see initial performance optimizations for Intel's Xe Graphics architecture, benefiting both the likes of Tiger Lake laptops and the DG1 discrete graphics card.
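
Targeting a GPU from the application's point of view does not change the API either; a oneDNN build with a GPU runtime simply exposes a GPU engine kind that primitives can be created against. A rough sketch, assuming such a build is present:

    #include <dnnl.hpp>

    int main() {
        // Requires a oneDNN build with a GPU runtime (OpenCL or DPC++/SYCL).
        if (dnnl::engine::get_count(dnnl::engine::kind::gpu) > 0) {
            dnnl::engine gpu_eng(dnnl::engine::kind::gpu, 0);
            dnnl::stream gpu_strm(gpu_eng);
            // Primitives created against gpu_eng run the library's GPU kernels,
            // which is where any Xe-oriented tuning would take effect.
        }
        return 0;
    }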

oneDNN 2.0 is also adding AArch64 and other non-x86 processor support along with a variety of other improvements.

More details on Thursday's oneDNN 2.0 Beta 7 update are available via GitHub.