GNU Gets Gas'ed Up For Intel BFloat16 Support
While Cascade Lake Xeon Scalable processors launched just this week, with their successor "Cooper Lake" we are already looking forward to Intel supporting the BFloat16 floating-point format designed for machine learning workloads. GNU's Gas now has assembler/disassembler support for the BF16 instructions.
With Intel always working to provide punctual support for new CPU instructions in the open-source toolchain, on Friday the company landed BF16 support in the Binutils code-base for Gas, the GNU Assembler.
The work, done by an Intel engineer, enables assembler/disassembler support for the AVX-512 BF16 instructions.
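As a minimal sketch of what the new support enables, the snippet below feeds one of the AVX-512 BF16 mnemonics (vdpbf16ps, which accumulates dot products of BF16 pairs into FP32 elements) through Gas via GNU inline assembly. It assumes a toolchain with the newly landed Binutils support plus a compiler recent enough to know the __m512bh type (e.g. built with an -mavx512bf16 flag); the function name is ours, not from the patch.

```c
#include <immintrin.h>

/* Accumulate pairwise BF16 dot products from a and b into the FP32
 * elements of acc, using the new vdpbf16ps mnemonic that Gas can now
 * assemble. AT&T operand order: src2, src1, dst. */
__m512 bf16_dot_accumulate(__m512 acc, __m512bh a, __m512bh b)
{
    __asm__("vdpbf16ps %2, %1, %0" : "+v"(acc) : "v"(a), "v"(b));
    return acc;
}
```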
BFloat16 is also supported by Intel's Nervana NNP neural network processors and FPGAs in addition to their Cooper Lake Xeons. BF16 is designed to offer better performance than FP16 for deep learning and any other workloads that can do without denormal support, hardware exception handling, and other features, as Intel outlined last year in its BF16 whitepaper.
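For those unfamiliar with the format, BF16 keeps the sign bit and the full 8-bit FP32 exponent but truncates the mantissa from 23 bits to 7, so a value is simply the upper 16 bits of an IEEE-754 single. A minimal sketch of that layout in C (truncating conversion only; round-to-nearest-even would adjust the bits before truncating):

```c
#include <stdint.h>
#include <string.h>

static uint16_t float_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bits safely */
    return (uint16_t)(bits >> 16);   /* keep sign + exponent + top 7 mantissa bits */
}

static float bf16_to_float(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;  /* low mantissa bits become zero */
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```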