High-performance computing (HPC) simulations are providing unparalleled insights into new scientific discoveries and are essential tools for industrial product design. HPC technology development over the last decades has been fueled by the scientific and engineering communities’ unquenchable thirst for ever more computing power. Exascale has been on everyone’s mind ever since the first Petaflop system was deployed in 2008. The target was clear: 1 exaflops within a 20-megawatt (MW) power envelope by 2020. As of today, a first Exascale system has been installed in the USA, with more coming around the globe, while post-Exascale supercomputers are already planned. The focus is clearly on meaningful application performance (Exascale = exaflops delivered to HPC applications), and this still entails multiple challenges well beyond raw hardware performance (exaflops).
HPC applications have evolved to deliver more performance through unprecedented levels of parallelism, but also with new techniques. Notably, (big) data analysis was introduced to refine computer models through the mining of real-life physical observations. More recently, artificial intelligence (AI) frameworks have made possible the use of surrogate models, drastically accelerating a significant range of HPC applications and considerably improving the quality of the simulations.
The diversity challenge
Concurrently with the evolution of HPC application software, HPC hardware architecture has also changed significantly. Ten years ago, the HPC ecosystem looked quite uniform, with most supercomputers based on x86 CPUs. By contrast, today’s supercomputer architectures are quite diverse. HPC systems are now commonly composed of several partitions, each featuring different types of computing/processing nodes. This unprecedented wave of innovation in processor technology presents developers with the opportunity to boost HPC application performance, while at the same time tackling the challenge of such heterogeneous environments.
Energy efficiency at Exascale
Even though each new generation of computing elements delivers more performance per watt thanks to new architectures and advances in electronics manufacturing, the overall consumption of Exascale systems is nevertheless reaching costly levels. The supercomputer and datacenter utilities, and most importantly the cooling system, must be carefully optimized. Additionally, GPU and CPU power consumption has been growing steadily. Improving on previous generations, the newly introduced BullSequana XH3000 platform greatly expands the power supply and cooling capacity of each rack. As a result, a higher inlet temperature is admissible, and the datacenter free-cooling range is further extended. On average, direct liquid cooling (DLC) reduces HPC datacenters’ overall electricity bill by 40%.
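To give a feel for the stakes, a back-of-the-envelope calculation can relate the 20 MW Exascale power envelope to an annual electricity bill via PUE (power usage effectiveness, the ratio of total facility power to IT power). The electricity price and the PUE values below are purely illustrative assumptions, not Atos figures:

```python
# Back-of-the-envelope datacenter energy cost, using illustrative assumptions:
# a 20 MW IT load (the Exascale target from the article), a hypothetical
# electricity price, and example PUE values for air cooling vs liquid cooling.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost_millions(it_load_mw, pue, price_per_mwh):
    """Annual facility electricity cost in millions, given IT load and PUE."""
    facility_mw = it_load_mw * pue          # PUE = facility power / IT power
    mwh = facility_mw * HOURS_PER_YEAR      # energy drawn over one year
    return mwh * price_per_mwh / 1e6

IT_LOAD_MW = 20        # Exascale power envelope cited in the article
PRICE_PER_MWH = 100    # assumed price in $/MWh -- purely illustrative

air = annual_cost_millions(IT_LOAD_MW, pue=1.5, price_per_mwh=PRICE_PER_MWH)
dlc = annual_cost_millions(IT_LOAD_MW, pue=1.1, price_per_mwh=PRICE_PER_MWH)
print(f"air-cooled: ${air:.1f}M/yr, liquid-cooled: ${dlc:.1f}M/yr, "
      f"saving {100 * (1 - dlc / air):.0f}%")
```

Even with these rough numbers, lowering the PUE from 1.5 to 1.1 saves millions of dollars per year on a single Exascale-class machine, which is why cooling optimization features so prominently in platform design.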
The future of exascale is hybrid: the role of quantum and AI
Atos sees the importance of coupling HPC with quantum computing in the future. Within the framework of the EuroHPC HPCQS project, a first prototype will allow researchers to explore these possibilities. The Atos QLM (Quantum Learning Machine) software environment will ensure smooth integration of quantum computing with the HPC platform.
By Jean-Pierre Panziera, Chief Technology Officer for High Performance Computing, Atos
Posted on: November 30, 2022