Intel oneDNN 2.0 Deep Neural Network Library Working On More Performance Tuning
Intel's open-source oneDNN deep neural network library, formerly known as MKL-DNN and then DNNL and now living under the oneAPI umbrella, continues to work on big performance advancements for its 2.0 release.
Intel on Thursday released oneDNN 2.0 Beta 7, which brings more Intel CPU performance optimizations: faster convolutional neural networks, improved binary primitive performance for the broadcast case, BFloat16 and FP32 weights gradient convolutions, INT8 convolutions with a 1x1 kernel and spatial strides, and optimizations in a variety of other areas of this deep learning library.
This is also the first release with initial performance optimizations for Intel's Xe Graphics architecture, benefiting the likes of Tiger Lake laptops and the DG1 discrete graphics card.
OneDNN 2.0 is also adding AArch64 and other non-x86 processor support, along with a variety of other improvements.
More details on Thursday's oneDNN 2.0 Beta 7 update are available via GitHub.