
BFloat16 Support About To Land Within LLVM

  • BFloat16 Support About To Land Within LLVM

    Phoronix: BFloat16 Support About To Land Within LLVM

    The LLVM compiler stack is about to merge its support for the BFloat16 floating-point format, including the BF16 C language support...

    http://www.phoronix.com/scan.php?pag...M-Landing-Soon

  • #2
    Good for deep learning, and very little else.

    The half-float (binary16) format, specified in IEEE 754-2008 and supported by Intel iGPUs since Broadwell, AMD GPUs since Vega, and Nvidia GPUs since GP100 (Pascal), is a better all-around compromise between range and precision. But it's not as good for deep learning, and not as efficient to implement in hardware.
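    The range-vs-precision tradeoff comes from the bit layout: BF16 keeps FP32's 8-bit exponent (same dynamic range) but only 7 mantissa bits, so a BF16 value is essentially the top 16 bits of an FP32 value. A minimal sketch of that conversion (illustrative only, not the LLVM implementation; round-to-nearest-even on the discarded half, no special NaN handling):

    ```c
    #include <stdint.h>
    #include <string.h>

    /* Convert an IEEE-754 float to bfloat16 by keeping the top 16 bits
     * (sign, 8-bit exponent, 7-bit mantissa), rounding to nearest even. */
    static uint16_t float_to_bf16(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u); /* RNE bias */
        return (uint16_t)((bits + rounding) >> 16);
    }

    /* Widen bfloat16 back to float: just restore the low 16 zero bits. */
    static float bf16_to_float(uint16_t h) {
        uint32_t bits = (uint32_t)h << 16;
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }
    ```

    Note how cheap both directions are in hardware terms: widening is a 16-bit shift, and narrowing is a shift plus an optional rounding add. That is a big part of why BF16 is attractive for deep-learning accelerators.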

    There are also fp32/fp16 conversion instructions in Ivy Bridge and later CPUs (the F16C extension) and in AVX-512's VCVT group (the EVEX-encoded VCVTPH2PS and VCVTPS2PH).
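    For contrast with BF16's layout, binary16 packs 1 sign bit, a 5-bit exponent (bias 15), and a 10-bit mantissa, which is what those VCVT instructions convert. A software sketch of the widening direction (normals and zero only; subnormals, infinities, and NaNs are deliberately omitted to keep it short):

    ```c
    #include <stdint.h>
    #include <string.h>

    /* Decode an IEEE-754 binary16 bit pattern into a float.
     * Handles normals and signed zero only (illustrative sketch). */
    static float half_to_float(uint16_t h) {
        uint32_t sign = (uint32_t)(h >> 15) & 1u;
        uint32_t exp  = (h >> 10) & 0x1Fu;  /* 5-bit exponent, bias 15 */
        uint32_t man  = h & 0x3FFu;         /* 10-bit mantissa */
        if (exp == 0 && man == 0)
            return sign ? -0.0f : 0.0f;
        /* rebias 15 -> 127 and widen the mantissa from 10 to 23 bits */
        uint32_t bits = (sign << 31) | ((exp - 15u + 127u) << 23) | (man << 13);
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }
    ```

    The 5-bit exponent caps binary16's range at 65504, versus roughly 3.4e38 for BF16, while the 10-bit mantissa gives it three extra bits of precision. That is the compromise the comment above is describing.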
    Last edited by coder; 05-13-2020, 12:50 PM.
