Originally posted by microcode
IEEE 754 half-precision was designed as a balance between range and precision, whereas BFloat16 gives up precision (only 7 mantissa bits versus half-precision's 10) to keep float32's full exponent range. IMO, that limits BFloat16's potential for a great many uses. It's fine for deep learning, but not a whole lot else. I'd rather the industry stuck with existing half-precision.
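To make the trade-off concrete, here's a minimal Python sketch. The `to_bfloat16` helper is a hypothetical simulation (not a library API) that mimics BFloat16 by truncating a float32 to its top 16 bits (1 sign, 8 exponent, 7 mantissa bits); IEEE half-precision is taken from numpy's float16 (1 sign, 5 exponent, 10 mantissa bits).

```python
import struct
import numpy as np

def to_bfloat16(x: float) -> float:
    """Hypothetical bfloat16 simulation: keep only the upper 16 bits of the
    float32 encoding (truncation, not round-to-nearest)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

# Precision: half-precision (10 mantissa bits) can distinguish 1.001 from 1.0;
# bfloat16 (7 mantissa bits) cannot.
print(np.float16(1.001))     # ~1.001
print(to_bfloat16(1.001))    # 1.0 (precision lost)

# Range: 70000 overflows half-precision (max ~65504) but is finite in bfloat16,
# which shares float32's 8-bit exponent.
print(np.float16(70000.0))   # inf
print(to_bfloat16(70000.0))  # 69632.0 (coarse, but finite)
```

In other words, BFloat16 buys its float32-like dynamic range by dropping three mantissa bits relative to IEEE half-precision, which is exactly the trade-off being criticized here.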