
The Top 500 list is measured in 64-bit rather than 32- or 16-bit precision because the list serves as a proxy for scientific computing applications, and those applications continue to rely primarily on 64-bit accuracy in their calculations, says Addison Snell, CEO of Intersect 360, a market research firm specializing in HPC and supercomputing.
“There’s always going to be conversations about could we make things faster by reducing precision in some areas, or are you doing an awful lot of calculation in areas where it doesn’t matter, but 64-bit is still the de rigueur standard in scientific computing,” Snell said.
FP64 is used in just about every field of scientific computing, Snell said. On the research side, that includes weather simulation and ocean modeling, both of which involve fluid dynamics and require a high degree of accuracy to model.
FP64 also applies to a wide range of commercial applications, such as car crash simulations, aerodynamic analysis of an airplane wing, seismic analysis to determine where to drill for oil, and molecular modeling for drug design. All of these applications require a high degree of numerical precision to calculate, Snell said.
The next step down is FP32, or single-precision floating point. It appears in life sciences simulations and financial modeling, typically when the demands of the model are less stringent and the person running it can get away with the lower precision.
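As a rough illustration, not drawn from the article, the gap between these formats comes down to how many significant digits each can carry. The short NumPy sketch below, with values chosen purely for demonstration, prints the machine epsilon of FP64 and FP32 and shows how the stored value of 0.1 drifts as precision drops.

```python
import numpy as np

# Machine epsilon: the smallest relative spacing between adjacent
# representable numbers, which roughly sets how many digits you can trust.
print(np.finfo(np.float64).eps)   # ~2.2e-16 -> roughly 15-16 significant decimal digits
print(np.finfo(np.float32).eps)   # ~1.2e-07 -> roughly 7 significant decimal digits

# The same decimal constant, 0.1, as stored at each precision.
print(float(np.float64(0.1)))     # 0.1 (accurate to about 1e-17)
print(float(np.float32(0.1)))     # 0.10000000149011612
```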
FP16 has found regular use in AI inferencing, whereas AI training typically relies on higher precision, such as FP32. The reason is simple: say you are using AI to learn to recognize images of dogs and cats. You want the finer precision during training to learn the features that constitute a dog or a cat. Once the model is trained, classification is essentially pattern matching, and the less demanding FP16 is sufficient to decide whether an image shows a dog or a cat, Snell said.
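A hedged sketch of that intuition, using made-up NumPy arrays rather than anything from the article: inference largely reduces to matrix multiplies followed by picking the highest score, and the winning class usually survives a cast from FP32 to FP16 even though the individual scores lose some precision.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((10, 512)).astype(np.float32)   # toy "trained" classifier layer
features = rng.standard_normal(512).astype(np.float32)        # toy input features

scores32 = weights @ features                                         # single-precision scores
scores16 = weights.astype(np.float16) @ features.astype(np.float16)   # half-precision scores

# The predicted class (index of the highest score) is typically unchanged,
# even though the raw scores differ slightly at FP16.
print("FP32 prediction:", int(np.argmax(scores32)))
print("FP16 prediction:", int(np.argmax(scores16)))
print("largest score difference:", float(np.abs(scores32 - scores16.astype(np.float32)).max()))
```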
