Intel's oneDNN Preps For Sierra Forest & Granite Rapids, Lands More Optimizations

Written by Michael Larabel in Intel on 14 February 2024 at 06:16 AM EST.
Intel's oneDNN Deep Neural Network Library, used for building deep learning applications, is preparing another release that continues its heavy focus on performance optimizations while readying support for future Intel hardware generations.

The oneDNN library is used for building deep learning software and is relied upon by software like the ONNX Runtime, OpenVINO, Apache MXNet, Apache SINGA, and optionally via extensions with PyTorch, TensorFlow, MATLAB, PaddlePaddle, and others. The oneDNN library works across multiple CPU architectures -- though it is tuned most extensively for Intel architectures -- and supports Intel GPUs as well as those from other vendors. The oneDNN 3.4 release candidate was issued on Monday with yet more improvements.
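
For those unfamiliar with how frameworks consume oneDNN, below is a minimal sketch of the library's C++ API as found in the 3.x series: create an engine and stream, then build and execute a primitive. The tensor shape and data here are arbitrary placeholders, not anything specific to the 3.4 release.

```cpp
// Minimal oneDNN 3.x C++ API sketch: create a CPU engine/stream and run a
// ReLU eltwise primitive in place on a small fp32 tensor. Link with -ldnnl.
#include <oneapi/dnnl/dnnl.hpp>
#include <vector>

int main() {
    // Engine + stream on the first CPU device.
    dnnl::engine eng(dnnl::engine::kind::cpu, 0);
    dnnl::stream strm(eng);

    // Describe a 1x3x8x8 fp32 tensor in NCHW layout (placeholder shape).
    dnnl::memory::desc md({1, 3, 8, 8}, dnnl::memory::data_type::f32,
                          dnnl::memory::format_tag::nchw);
    std::vector<float> data(1 * 3 * 8 * 8, -1.0f);
    dnnl::memory mem(md, eng, data.data());

    // Create and execute a forward-inference ReLU primitive in place.
    dnnl::eltwise_forward::primitive_desc pd(
            eng, dnnl::prop_kind::forward_inference,
            dnnl::algorithm::eltwise_relu, md, md, /*alpha=*/0.0f);
    dnnl::eltwise_forward(pd).execute(
            strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
    strm.wait();
    return 0;
}
```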

The oneDNN 3.4 release brings more performance improvements for Xeon Scalable Sapphire Rapids processors while also preparing for the upcoming Sierra Forest and Granite Rapids processors. There are also various AVX2, AVX-512, and AMX optimizations with this release.
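
Those ISA-specific optimizations are selected at runtime based on the CPU oneDNN detects. As a rough sketch, and assuming the CPU ISA dispatch controls that recent oneDNN releases expose (dnnl::set_max_cpu_isa / dnnl::get_effective_cpu_isa, also reachable via the ONEDNN_MAX_CPU_ISA environment variable), developers can cap the ISA to compare the AVX2, AVX-512, and AMX code paths on a given machine:

```cpp
// Sketch of oneDNN's runtime CPU ISA dispatch controls. Capping the ISA is
// mainly useful for testing how different kernel code paths perform.
#include <oneapi/dnnl/dnnl.hpp>
#include <iostream>

int main() {
    // Ask oneDNN not to use anything newer than AVX-512 (skips AMX kernels).
    dnnl::set_max_cpu_isa(dnnl::cpu_isa::avx512_core);

    // Report which ISA the library actually selected on this CPU.
    std::cout << "Effective ISA enum value: "
              << static_cast<int>(dnnl::get_effective_cpu_isa()) << "\n";
    return 0;
}
```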

Intel Xeon Max

Over on the GPU side there is better performance for Intel Arc Graphics as well as the Intel Data Center GPU Max Series, plus various other GPU performance tuning in general.
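
As a reminder of how applications target those GPUs through oneDNN, the same API shown above simply selects a GPU engine kind. A minimal sketch, assuming a oneDNN build with GPU (SYCL or OpenCL) runtime support and at least one supported Intel GPU:

```cpp
// Sketch: enumerate oneDNN-visible GPU devices and create a GPU engine/stream.
#include <oneapi/dnnl/dnnl.hpp>
#include <iostream>

int main() {
    const size_t gpu_count = dnnl::engine::get_count(dnnl::engine::kind::gpu);
    std::cout << "oneDNN-visible GPUs: " << gpu_count << "\n";
    if (gpu_count == 0) return 0;

    // Primitives created against this engine run on the first GPU.
    dnnl::engine gpu_eng(dnnl::engine::kind::gpu, 0);
    dnnl::stream gpu_strm(gpu_eng);
    return 0;
}
```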

Downloads and more details on the Intel oneDNN 3.4 release candidate are available via GitHub.