
Intel, Arm, and NVIDIA have proposed a special number format for working with AI

Intel, Arm, and NVIDIA have published a draft specification for FP8, an 8-bit floating-point number format. The companies intend it to become a common representation for the numbers used in AI workloads, both when training neural networks and when running them (inference).


According to the companies, using 8-bit floating-point numbers for neural network weights makes more efficient use of hardware resources. Such values take up less memory and are cheaper to process, which increases the throughput of hardware accelerators on AI workloads.
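
To put the memory savings in rough numbers, here is an illustrative sketch; the 1-billion-parameter model size and the omission of activations and optimizer state are assumptions for illustration, not figures from the proposal:

```python
# Illustrative arithmetic only: memory needed to store the weights of a
# hypothetical 1-billion-parameter model at different floating-point widths.
PARAMS = 1_000_000_000  # assumed model size, for illustration

for name, bytes_per_value in [("FP32", 4), ("FP16", 2), ("FP8", 1)]:
    gib = PARAMS * bytes_per_value / 2**30
    print(f"{name}: {gib:.1f} GiB")

# Prints roughly: FP32: 3.7 GiB, FP16: 1.9 GiB, FP8: 0.9 GiB
```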

The floating-point formats traditionally in use are FP32 (single precision) and FP16 (half precision), with the latter now dominant in machine learning. According to Intel, Arm, and NVIDIA, however, an even shorter format, despite its lower precision, is perfectly usable for AI tasks and can be processed faster and with less energy.
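
As a rough illustration of what such a short format looks like, the sketch below decodes a byte in the E4M3 layout (1 sign bit, 4 exponent bits, 3 mantissa bits) that the joint proposal describes alongside a second E5M2 variant; special-value handling is simplified here and the function name is ours, not part of the specification:

```python
# Minimal sketch of decoding an FP8 value in the E4M3 layout
# (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits).
# NaN handling and other special cases are intentionally simplified.

def decode_e4m3(byte: int) -> float:
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> 3) & 0xF      # 4 exponent bits
    mantissa = byte & 0x7             # 3 mantissa bits
    if exponent == 0:                 # subnormal numbers
        return sign * (mantissa / 8) * 2.0 ** (1 - 7)
    return sign * (1 + mantissa / 8) * 2.0 ** (exponent - 7)

# Example: 0x40 -> exponent field 8, mantissa 0 -> +2.0
print(decode_e4m3(0x40))  # 2.0
```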

In a blog post, NVIDIA's Shar Narasimhan notes, for example, that the FP8 format exhibits "comparable fidelity" to 16-bit precision in applications such as computer vision and imaging systems, while delivering a significant speedup.

The FP8 format will be available to everyone in an open, license-free form. The specification will later be submitted to the IEEE, the industry standards body covering a range of technical fields. "We believe that a common interchange format will ensure rapid progress and the compatibility of hardware and software platforms, advancing the development of information technology," Narasimhan said.

It is worth noting that support for FP8 numbers is already implemented in NVIDIA's Hopper-based GH100 GPU, as well as in Intel's Gaudi2 AI accelerators.

A unified FP8 format will benefit not only the three companies proposing the standard but also other vendors of AI accelerators. In one form or another, they all support their own variants of low-precision floating-point numbers, and a single open standard in place of several competing formats will simplify the development of both hardware and software libraries.
