Arm Talks Up Their BFloat16 / BF16 Support For Upcoming Processors
While we've known Arm would be adding BFloat16 (BF16) support to their future processor designs, on Thursday they publicly provided more details on their plans for this floating-point format aimed at AI / machine learning training and inference workloads.
With the next revision to ARMv8-A will come Neon and SVE vector instructions for select computations using the BFloat16 floating-point number format. For nearly the past year we have seen Intel prepping the Linux/open-source ecosystem for BFloat16 ahead of their upcoming Cooper Lake processors' BF16 support. It's looking now like Arm might beat AMD to supporting BF16 in their processor designs.
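For those unfamiliar with the format, BFloat16 is essentially a truncated IEEE-754 float32: it keeps the sign bit and the full 8-bit exponent but only 7 mantissa bits, trading precision for float32's dynamic range. Below is a minimal C sketch of that relationship; the helper names are our own illustration, not Arm's instructions or any library API, and it uses simple truncation where hardware would typically round to nearest-even.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: a bfloat16 value is the upper 16 bits of an
 * IEEE-754 binary32 (1 sign bit, 8 exponent bits, 7 mantissa bits). */
typedef uint16_t bf16;

/* float32 -> bfloat16 by truncation (drops the low 16 mantissa bits).
 * Hardware conversions generally round to nearest-even instead. */
static bf16 f32_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (bf16)(bits >> 16);
}

/* bfloat16 -> float32 is exact: the low 16 bits are simply zero. */
static float bf16_to_f32(bf16 h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159265f;
    bf16 h = f32_to_bf16(x);
    /* BF16 retains float32's range but only ~2-3 decimal digits of
     * precision, a trade-off ML training tends to tolerate well. */
    printf("%f -> 0x%04x -> %f\n", x, h, bf16_to_f32(h));
    return 0;
}
```

That same-exponent layout is what makes BF16 attractive for machine learning: conversions to and from float32 are cheap bit operations, and overflow behavior matches float32 rather than FP16's much narrower range.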
Arm is seeing significant performance benefits from the BFloat16 extensions, particularly for machine learning training and inference tasks. More details on the Arm BF16 plans via this community.arm.com blog post.