Intel Brings IBM POWER CPU Support To Their Deep Neural Network Library
Beyond the oneAPI code itself being open-source, Intel has been surprisingly open about supporting usage outside of x86_64 CPUs.
In addition to efforts like getting Intel oneAPI / Data Parallel C++ running on NVIDIA GPUs and other "open" work around their APIs, the company has shown a willingness to see various oneAPI components working on non-x86_64 architectures.
Back in June came the release of oneDNN 1.5, their deep neural network library formerly known as MKL-DNN and DNNL. That release added AArch64 (64-bit Arm) support.
Today they succeeded it with oneDNN 1.6, and this new version brings IBM POWER CPU support. Granted, the code is likely far less tuned than the x86_64 paths catering to Intel's heavy AVX usage, but it is nice to see the official source tree supporting these alternative CPU architectures.
The oneDNN 1.6 release also adds INT8 optimizations for Intel Xeon Sapphire Rapids, continued work on BFloat16 optimizations for Cooper Lake, and a variety of other optimizations. There are also a number of optimizations in this release benefiting Intel graphics support.
Download links with binaries for all major platforms, plus the rest of the changes in oneDNN 1.6, can be found via GitHub.
A number of other Intel oneAPI components are also seeing releases today; more details on those other prominent changes to Intel's open-source software stack are to come.