GCC & LLVM Ready With x86 __bf16 Type Support

    Phoronix: GCC & LLVM Ready With x86 __bf16 Type Support

    Following optional "__bf16" support being added to the x86-64 psABI as a special type for representing 16-bit Brain Floating Point Format for deep learning / machine learning applications, the GCC and LLVM compilers have now landed their __bf16 type support...

    https://www.phoronix.com/news/GCC-LL...-BFloat16-Type
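With the type support now landed, using __bf16 from C is straightforward. Below is a minimal sketch; it assumes a compiler new enough to ship the x86 __bf16 type (roughly GCC 13+ / Clang 15+), and the `__BFLT16_MANT_DIG__` feature-test macro guard is an assumption based on GCC's predefined-macro naming, not something stated in the article:

```c
/* Round-trips a float through __bf16 when the compiler provides the type.
 * __BFLT16_MANT_DIG__ is assumed to be the predefined macro advertising
 * __bf16 support (GCC-style naming); older compilers take the fallback. */
static float roundtrip_bf16(float f)
{
#ifdef __BFLT16_MANT_DIG__
    __bf16 b = (__bf16)f;   /* rounds the 24-bit significand down to bfloat16's 8 bits */
    return (float)b;
#else
    return f;               /* compiler lacks __bf16: pass through unchanged */
#endif
}
```

Values whose significand already fits in bfloat16's 8 bits (1.0f, -2.5f, powers of two) round-trip exactly; something like 1.1f comes back slightly altered.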

  • #2
BFP is Binary Floating Point, normally based on the IEEE 754 standard. Other historic binary formats (e.g. Microsoft BASIC's) are largely obsolete.

    Some platforms (such as IBM zSeries mainframe) also support other float representations - packed or zoned decimal, or hexadecimal.

    • #3
      Originally posted by AlDunsmuir View Post
BFP is Binary Floating Point, normally based on the IEEE 754 standard. Other historic binary formats (e.g. Microsoft BASIC's) are largely obsolete.
In this particular case, the "BF" in BF16 / bfloat16 stands for "brain floating-point format", and it is not an official IEEE 754 format. It is not the same as the IEEE 754 (16-bit) half-precision format: it has a larger exponent, matching the exponent range of the IEEE 754 32-bit float, and conversely has far fewer significand bits, so its precision is lower than that of the IEEE 754 16-bit format.

      The motivation is that deep learning applications are often quite insensitive to precision, but obviously you don't want your values to overflow. So by having the same exponent range as standard 32-bit floats, neural net architectures that have been developed and verified with 32-bit floats should be easy to port to bfloat16. Also the hardware circuitry for converting between 32-bit floats and bfloat16 is simplified as the exponent bits can just be copied as-is without needing to check for overflow or underflow (subnormal handling might require some extra work, if that is to be supported in conversions).

      See https://en.wikipedia.org/wiki/Bfloat...g-point_format
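The bit-copy property described above is easy to see in plain C: since bfloat16 is just the top 16 bits of an IEEE 754 binary32, a round-toward-zero conversion is a single shift. The helper names below are hypothetical, and a production converter would normally add round-to-nearest-even instead of truncating:

```c
#include <stdint.h>
#include <string.h>

/* bfloat16 layout: 1 sign bit, 8 exponent bits, 7 significand bits --
 * exactly the high half of an IEEE 754 binary32. */

/* Hypothetical helper: convert float32 -> bfloat16 by truncation. */
static uint16_t f32_to_bf16_trunc(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* bit-cast without strict-aliasing UB */
    return (uint16_t)(bits >> 16);    /* keep sign + exponent + top 7 significand bits */
}

/* Hypothetical helper: widen bfloat16 -> float32. Always exact, because
 * the low 16 significand bits are simply zero-filled. */
static float bf16_to_f32(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

Note that the exponent bits are never touched, which is the simplification mentioned above; a float32-to-IEEE-half converter would instead have to re-bias the exponent and handle overflow and underflow.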

      • #4
        "it is used to add, subtract, multiply or divide, but the result of the calculation is actually meaningless"
        Your normal 'deep learning' application, no?

        • #5
          Thanks for the clarification!

I do wonder why they could not use a different acronym for this variation of the format. Overloading "BFP", especially given the multiple 16-bit formats, just seems like asking for confusion.

          • #6
            Originally posted by AlDunsmuir View Post
            Thanks for the clarification!

I do wonder why they could not use a different acronym for this variation of the format. Overloading "BFP", especially given the multiple 16-bit formats, just seems like asking for confusion.
No one uses "BFP" (I've never heard of it before), so there's no issue. People write "binary floating-point" when the base needs to be specified; otherwise, in most current contexts, "floating-point" implies binary as the base.

            • #7
              Originally posted by phuclv View Post

No one uses "BFP" (I've never heard of it before), so there's no issue. People write "binary floating-point" when the base needs to be specified; otherwise, in most current contexts, "floating-point" implies binary as the base.
              Not quite no one - it all depends on the platforms and operating systems with which one has familiarity.

              I'm primarily a C programmer on z/OS.

              The z/Series hardware added support for IEEE 754 binary floating point in the mid 1990s - until then there was the traditional hexadecimal-based floating point format (see Wikipedia article "IBM hexadecimal floating-point"), and packed and zoned numbers (primarily used by COBOL).

              The compiler documentation (and options) use the acronyms BFP and HFP for the first two formats. I was in the IBM z/OS compiler group when BFP support was introduced, and even spent some time working on the AIX compiler's IEEE-754 emulation (used for constant folding).

The IEEE 754 standard was later expanded to include decimal floating point in IEEE 754-2008. The Wikipedia article "Decimal floating point" uses DFP as the acronym for that format.

              Al
              Last edited by AlDunsmuir; 21 August 2022, 01:34 PM.
