GNU Gets Gas'ed Up For Intel BFloat16 Support

Written by Michael Larabel in GNU on 6 April 2019 at 04:04 AM EDT.
While Cascade Lake Xeon Scalable processors launched just this week, their successor "Cooper Lake" is already on the horizon with Intel support for the BFloat16 floating-point format designed for machine-learning workloads. GNU's Gas now has assembler/disassembler support for the BF16 instructions.

With Intel always working to provide punctual support for new CPU instructions in the open-source toolchain, on Friday they landed BF16 support in the Binutils code-base for Gas, the GNU Assembler.

The work, done by an Intel engineer, enables assembler/disassembler support for the AVX-512 BF16 instructions.

BFloat16 is also supported by Intel's Nervana NNP neural network processors and FPGAs in addition to the Cooper Lake Xeons. BF16 is designed to offer better performance than FP16 for deep learning and other workloads that fit its constraints: no denormal support, no hardware exception handling, and other trade-offs Intel outlined last year in its BF16 whitepaper.
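The key property of BFloat16 is that it keeps FP32's 8-bit exponent (and thus its dynamic range) while truncating the mantissa from 23 bits to 7, so conversion amounts to dropping the low 16 bits of an FP32 value. As a minimal sketch of the format itself (illustrative only, not tied to the Gas patches or to Intel's instruction semantics, which round-to-nearest-even rather than truncate):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to its top 16 bits (BF16, round-toward-zero)."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bf16_bits_to_fp32(bits16: int) -> float:
    """Expand BF16 bits back to FP32 by zero-filling the low mantissa bits."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

# 1.0 is exactly representable: FP32 0x3F800000 -> BF16 0x3F80
print(hex(fp32_to_bf16_bits(1.0)))
# Values like pi lose mantissa precision but keep their magnitude
print(bf16_bits_to_fp32(fp32_to_bf16_bits(3.14159)))
```

Because the exponent field is unchanged, any FP32 magnitude survives the round trip; only mantissa precision is lost, which is the trade-off that makes BF16 attractive for deep-learning training compared with FP16's narrower range.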