The Leonardo system will contain nearly 14,000 A100 GPUs and, when completed, could help with drug discovery, weather modeling and space exploration.
Nvidia has embarked on a large-scale project together with Cineca, an Italian inter-university consortium and computing center: according to their announcement, they will build the world's fastest artificial intelligence supercomputer. The Leonardo system will feature nearly 14,000 Nvidia A100 GPUs, with a peak performance of 10 FP16 exaFLOPS. FLOPS stands for "floating-point operations per second" and measures how many arithmetic operations on floating-point numbers a machine performs each second, while FP16 denotes half-precision, 16-bit floating-point operations.

The supercomputer is built from Atos BullSequana XH2000 nodes, each with an Intel Xeon processor, four Nvidia A100 GPUs, and a Mellanox HDR 200 Gb/s InfiniBand network card. The nodes are liquid-cooled, and four of them are placed in each HPC (high-performance computing) cabinet. The BullSequana XH2000 architecture is very flexible: it can accommodate almost any CPU or GPU, which also eases later scaling and conversion.

Researchers at Italian universities want to use Leonardo for drug research, weather modeling, and space exploration. Such applications traditionally require FP64 (double-precision) accuracy, but Nvidia argues that today's HPC workloads increasingly rely on AI and machine learning, for which FP16 accuracy is sufficient.
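The FP16-versus-FP64 trade-off mentioned above can be illustrated with a small NumPy sketch (NumPy and the specific values chosen are our own illustration, not part of the announcement): half precision keeps only about 3 to 4 significant decimal digits, versus roughly 15 to 16 for double precision.

```python
import numpy as np

# FP64 (double precision): ~15-16 significant decimal digits.
pi64 = np.float64(np.pi)

# FP16 (half precision): only a 10-bit mantissa, ~3-4 decimal digits,
# so pi is rounded to 3.140625.
pi16 = np.float16(np.pi)

print(f"FP64: {pi64:.15f}")
print(f"FP16: {float(pi16):.15f}")
print(f"FP16 absolute error: {abs(float(pi16) - pi64):.2e}")
```

An error on the order of 1e-3 is often tolerable for neural-network training and inference, which is why Nvidia counts AI performance in FP16 FLOPS, while classic simulation codes (weather, molecular dynamics) typically accumulate too much rounding error at that precision and stay with FP64.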