Intel Habana Labs SynapseAI Core Updated With Gaudi2 Support
While these days the Intel-owned Habana Labs Linux software stack is a shining example of an open-source AI accelerator solution, with mainline kernel driver support and a hand in bringing together the new compute accelerator subsystem, it wasn't always so blessed. Initially the user-space bits were closed-source, a situation that was fortunately remedied last year when SynapseAI Core was opened up.
Even prior to being acquired by Intel, Habana Labs was a good Linux kernel contributor with its "habanalabs" char/misc kernel driver providing support for the Gaudi and Goya accelerators. The friction was that the driver depended upon a closed-source user-space, particularly around its compiler. That was thankfully rectified with the open-source release of SynapseAI Core.
SynapseAI Core is the reference user-space implementation for interacting with the SynapseAI API of the Habana Gaudi. This open-source code implements all of the relevant APIs and contains a back-end library for executing code through the kernel driver, a library with sample Tensor Processing Core (TPC) kernels, and other relevant code. Habana's TPC LLVM compiler is also available via their "tpc_llvm" GitHub repository.
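For a sense of what that reference user-space exposes, below is a minimal sketch of bringing up the SynapseAI runtime and acquiring a device. The synapse_api.h header and the synInitialize / synDeviceAcquire / synDeviceRelease / synDestroy calls are assumptions based on the public SynapseAI C API rather than anything specific to this release, so exact signatures may differ between versions; treat it as illustrative only.

/* Minimal sketch of user-space talking to a Gaudi device via the SynapseAI
 * C API. The header and entry points used here are assumptions drawn from
 * the public SynapseAI headers, not from this release announcement. */
#include <stdio.h>
#include <synapse_api.h>

int main(void)
{
    /* Bring up the SynapseAI runtime before any device interaction. */
    if (synInitialize() != synSuccess) {
        fprintf(stderr, "synInitialize failed\n");
        return 1;
    }

    /* Acquire the first available Habana device (NULL = any PCI bus). */
    synDeviceId deviceId;
    if (synDeviceAcquire(&deviceId, NULL) != synSuccess) {
        fprintf(stderr, "no Habana device could be acquired\n");
        synDestroy();
        return 1;
    }
    printf("acquired Habana device id %u\n", deviceId);

    /* Release the device and tear down the runtime. */
    synDeviceRelease(deviceId);
    synDestroy();
    return 0;
}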
What's notable this Sunday is that SynapseAI Core has now been updated with support for Gaudi2. Going back to earlier this year, Intel has been working on Gaudi2 kernel driver support, building off the first-generation Gaudi code within the habanalabs kernel driver. Now the Gaudi2 reference user-space code has been released as well.
SynapseAI_Core 1.1.0 was tagged this morning and adds support for the Gaudi2 accelerator to go along with the latest Linux kernel hardware support.
Gaudi2, as a reminder, is intended for training deep learning workloads and delivers a massive performance uplift over the original Gaudi. Intel / Habana Labs has promoted Gaudi2 as offering around 2x the throughput of the NVIDIA A100 for ResNet-50 and BERT. Those interested can learn more about the hardware at habana.ai.