Intel Publishes Whitepaper On New BFloat16 Floating-Point Format For Future CPUs

  • Intel Publishes Whitepaper On New BFloat16 Floating-Point Format For Future CPUs

    Phoronix: Intel Publishes Whitepaper On New BFloat16 Floating-Point Format For Future CPUs

    Intel has published their initial whitepaper on BF16/BFloat16, a new floating point format to be supported by future Intel processors...

  • #2
    Meh. We don't need more specialized formats, we need one format to cover them all: https://posithub.org/about

  • #3
    Originally posted by pegasus
    Meh. We don't need more specialized formats, we need one format to cover them all: https://posithub.org/about
    There is a major benefit to this format. It is an IEEE 754 binary32 float with a truncated mantissa. The dynamic range is necessary for the proper functioning of certain algorithms, but the full precision of binary32 is not. This specialized format is uniquely suited to workloads like neural-network training, and no generic format can match that trade-off from an information-theoretic perspective.
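    To make the "truncated mantissa" point concrete, here is a minimal C sketch (my own illustration, not anything from Intel's whitepaper): converting binary32 to bfloat16 just keeps the top 16 bits, and widening back is a shift. The round-to-nearest-even step is an assumption for the sketch; hardware could truncate or handle NaNs differently.

      #include <stdint.h>
      #include <string.h>

      /* Round an IEEE 754 binary32 value to bfloat16 by keeping the sign,
       * the 8-bit exponent and the top 7 mantissa bits. Round-to-nearest-even
       * is assumed; NaN/Inf edge cases are ignored for brevity. */
      static uint16_t float_to_bf16(float f)
      {
          uint32_t bits;
          memcpy(&bits, &f, sizeof bits);                  /* reinterpret the float's bits */
          uint32_t round = 0x7FFFu + ((bits >> 16) & 1u);  /* ties-to-even bias */
          return (uint16_t)((bits + round) >> 16);
      }

      /* Widening back to binary32 is just a 16-bit shift. */
      static float bf16_to_float(uint16_t h)
      {
          uint32_t bits = (uint32_t)h << 16;
          float f;
          memcpy(&f, &bits, sizeof f);
          return f;
      }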

  • #4
    Will AMD CPUs be legally able to support this?

  • #5
    Originally posted by microcode
    There is a major benefit to this format. It is an IEEE 754 binary32 float with a truncated mantissa. The dynamic range is necessary for the proper functioning of certain algorithms, but the full precision of binary32 is not. This specialized format is uniquely suited to workloads like neural-network training, and no generic format can match that trade-off from an information-theoretic perspective.
    And it will look good in benchmarks: "Almost FP32 precision at FP16 perf!". And then the "almost" will someday be silently dropped.

  • #6
    Originally posted by jacob
    Will AMD CPUs be legally able to support this?
    Would Intel be legally able to use the amd64 architecture in their CPUs? They do cross-license CPU-related patents.

  • #7
    You could basically introduce that today: if the compiler can assume the extra precision is not important (not required to be supported), it can optimize much better.
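    In the meantime the format can already be emulated in software: store values as 16 bits, widen to binary32 for the arithmetic, and round back when storing. A rough C sketch along those lines (the bf16 typedef and helper are hypothetical names, not an existing API):

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      typedef uint16_t bf16;                  /* storage-only type for the sketch */

      static float bf16_to_f32(bf16 h)
      {
          uint32_t bits = (uint32_t)h << 16;  /* bfloat16 is the top half of binary32 */
          float f;
          memcpy(&f, &bits, sizeof f);
          return f;
      }

      /* Dot product with bfloat16 storage and binary32 accumulation --
       * the usual pattern for these workloads. */
      static float bf16_dot(const bf16 *a, const bf16 *b, size_t n)
      {
          float acc = 0.0f;
          for (size_t i = 0; i < n; i++)
              acc += bf16_to_f32(a[i]) * bf16_to_f32(b[i]);
          return acc;
      }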

  • #8
    Originally posted by jacob
    Will AMD CPUs be legally able to support this?
    I thought I read it came from Google. Anyway, it's what they use in their TPUv2s.

    I'm skeptical it's really any faster than half-precision floats, other than conversion to/from normal fp32. IMO, half-precision is generally more useful.

    Without denormals, even fp32 isn't very usable for many applications (hence the popularity of fp64 for GPU compute).
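    For reference, the trade-off being questioned here: binary16 has 5 exponent bits and 10 mantissa bits, so it tops out at 65504 but resolves finer steps; bfloat16 keeps the full 8-bit binary32 exponent (range up to roughly 3.4e38) but only 7 mantissa bits. A tiny C illustration of both sides, using a simple truncating round trip (a real conversion would round):

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Truncate a binary32 value to bfloat16 and widen it back, for illustration. */
      static float bf16_roundtrip(float f)
      {
          uint32_t bits;
          memcpy(&bits, &f, sizeof bits);
          bits &= 0xFFFF0000u;               /* drop the low 16 bits of the mantissa */
          memcpy(&f, &bits, sizeof f);
          return f;
      }

      int main(void)
      {
          /* Range: the binary32 exponent survives, where binary16 would overflow. */
          printf("%g\n", bf16_roundtrip(1.0e38f));   /* still on the order of 1e38 */

          /* Precision: only 8 significant bits, so 257 collapses to 256. */
          printf("%g\n", bf16_roundtrip(257.0f));    /* prints 256 */
          return 0;
      }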

  • #9
    This seems to be a long way off. What I'd like to know is why Intel or AMD hasn't defined a specialized processor core for these workloads, the way Apple and other ARM developers have done with their ML accelerators.

  • #10
    Originally posted by mlau
    And it will look good in benchmarks: "Almost FP32 precision at FP16 perf!". And then the "almost" will someday be silently dropped.
    Well, if they do that, you could easily sue them over any product they market that way. It is not FP32 precision, it is FP32 range; it's a specialized format. I highly doubt this will be misrepresented egregiously, and I don't really see why people think this is such a big deal. It is incredibly simple to do in hardware, and it has major benefits for these and some other workloads.
