BFloat16 Support About To Land Within LLVM
The LLVM compiler stack is about to merge its support for the BFloat16 floating-point format, including BF16 support at the C language level.
BFloat16 is a 16-bit floating-point format designed for machine learning workloads: it trades mantissa precision for the same dynamic range as a 32-bit float, halving storage requirements and boosting performance.
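For context, BF16 is simply the upper 16 bits of an IEEE-754 float32 (1 sign bit, 8 exponent bits, 7 mantissa bits). A minimal C sketch of the conversion using round-to-nearest-even truncation; the helper name is hypothetical, and NaN handling is omitted for brevity:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical helper: convert an IEEE-754 float32 to BF16 by
 * round-to-nearest-even truncation. BF16 keeps float32's sign bit
 * and 8-bit exponent but only the top 7 mantissa bits. */
static uint16_t float_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);                  /* reinterpret the bits safely */
    uint32_t rounding = 0x7FFF + ((bits >> 16) & 1); /* round to nearest, ties to even */
    return (uint16_t)((bits + rounding) >> 16);
}

int main(void) {
    printf("bf16(1.0f)  = 0x%04X\n", float_to_bf16(1.0f));   /* prints 0x3F80 */
    printf("bf16(3.14f) = 0x%04X\n", float_to_bf16(3.14f));  /* prints 0x4049 */
    return 0;
}
```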
Arm has been driving the BFloat16 support for LLVM, with ARMv8.6-A supporting the new format. But this LLVM support is ultimately also relevant to Intel AVX-512 BF16, Intel Nervana, Google Cloud TPUs, and other hardware shipping BF16 support to bolster machine learning capabilities.
BFloat support for the LLVM IR is under review and nearing the merge point, along with the BFloat16 C type, IR intrinsics support, and the AArch32/AArch64 back-end bits.
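As a rough illustration of the C-level piece, here is a hedged sketch of what code using the new type could look like. It assumes a compiler carrying the pending patches and an Arm target with BF16 enabled; per the Arm C Language Extensions, __bf16 is a storage-only type, so the sketch sticks to declarations and memory layout rather than direct arithmetic:

```c
#include <stddef.h>

/* Hypothetical routine operating on BF16 data via target intrinsics
 * or after widening to float; the name and signature are illustrative. */
extern void accumulate_weights(const __bf16 *weights, size_t n);

/* A BF16 weight table occupies half the memory of an equivalent
 * float table, which is the main storage win of the format. */
static __bf16 weights[1024];

void run(void) {
    accumulate_weights(weights, 1024);
}
```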
More details on this brain floating-point support for LLVM can be found via this mailing list recap of the currently pending patches on the Arm front.