Addressing the major supercomputing challenges

Posted on: Feb 02, 2015 by Xavier Vigouroux

As we look to take the next industrial, scientific and societal steps forward, supercomputers are playing an increasingly integral role. What are the issues we need to overcome to take supercomputing to the next level?

Power consumption

Managing energy consumption is a key priority for supercomputer engineers, because these machines have an insatiable appetite for energy. Today’s largest supercomputers consume over 10MW, which is equivalent to the power needed to supply 30,000 households with electricity! Electricity is also expensive across Europe: each MW of continuous consumption costs approximately €1m per year.

This is driving European supercomputer owners to look for more energy-efficient solutions; in the future, we’ll see machines able to process many more billions of operations per second while consuming the same amount of energy as they do today (10MW). To keep the costs of future supercomputers down, bodies such as the US Department of Energy are setting environmental targets: it has fixed an upper limit of 20MW for new Exascale system designs.

Internally, we are tackling this challenge with Technische Universität Dresden through a research project called High Definition Energy Efficiency Monitoring (HDEEM), which measures the power consumption of High Performance Computing (HPC) machines. A software-based measurement database records the power draw of each machine every two milliseconds, enabling researchers to analyse and improve the energy efficiency of parallel HPC user codes.
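To make the idea concrete, here is a minimal sketch (not the HDEEM API, and the numbers are purely illustrative) of how high-frequency power samples like these can be turned into an energy figure for a code region, using simple trapezoidal integration:

```python
def energy_joules(samples, interval_s=0.002):
    """Integrate power samples (in watts, taken every interval_s seconds)
    into an energy estimate in joules, using the trapezoidal rule."""
    total = 0.0
    for p0, p1 in zip(samples, samples[1:]):
        # Average the two neighbouring samples over one 2 ms interval.
        total += (p0 + p1) / 2 * interval_s
    return total

# A node drawing a steady 400 W for one second (501 samples at 2 ms
# spacing) uses roughly 400 J over that region.
watts = [400.0] * 501
print(round(energy_joules(watts)))  # 400
```

Sampling every two milliseconds is what makes this useful for tuning: it is fine-grained enough to attribute energy to individual phases of a parallel code, not just to a whole job.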

Improved cooling

Supercomputers require cooling systems to run correctly, and these also consume a great deal of energy: cooling often adds 50% to 75% to the electricity bill. As a result, tech giants are turning their attention to the Arctic Circle, with Google investing over $1bn in its Hamina data centre in Finland. Iceland, with its access to geothermal and hydroelectric energy, year-round free cooling and millisecond connectivity to both America and Europe, is also becoming an increasingly popular location for data centres.

We are using a warm-water cooling system, an approach the European Commission recommends as a target solution for future large-scale computing centres. Traditional cooling mechanisms require refrigeration units, whereas keeping the water at around 35°C makes it possible to use free cooling throughout the year.

Supercomputer skills gap

Another concern for the European supercomputing community is the shortage of people with the software skills required to program supercomputers. We have looked to address this by launching our Centre for Excellence in Parallel Programming (CEPP), the first European centre of industrial and technical excellence of its kind. As part of the CEPP, we’ve trained our 300-strong engineering team to help users improve the efficiency of their supercomputers.

Developing new microprocessors

Five years ago, it was easy to define a supercomputer: thousands of nodes tightly coupled together, each built around a standard Central Processing Unit (CPU). But these systems simply aren’t fast enough to cope with the billions of operations per second required by the most powerful supercomputers today, and as a result new microprocessors are being developed. Graphics processing units (GPUs), powerful but complex architectures normally used in video gaming, are becoming a popular choice for those looking to engineer higher-performance machines. Others are instead focusing on massive parallelism, which enables billions of calculations to be carried out simultaneously!
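The core idea behind that parallelism is decomposition: a big computation is split into independent chunks that run at the same time. The toy sketch below (illustrative only; a real supercomputer would distribute the chunks across thousands of cores or nodes, typically with MPI, rather than a handful of local workers) shows the splitting step on a simple sum:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum one independent chunk of the range."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Split [0, n) into one roughly equal chunk per worker; each chunk
    # can be computed without talking to the others.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combine the independent partial results at the end.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1000))  # 499500, the same as sum(range(1000))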

In our next post, we’ll be looking ahead to the future of supercomputing, questioning whether we’ll ever reach a limit on processing power.


About Xavier Vigouroux

Director of the Centre for Excellence in Parallel Programming, Distinguished Expert and member of the Scientific Community
After a PhD in distributed computing from the École Normale Supérieure de Lyon, Xavier Vigouroux worked for several major companies in a variety of positions. He has now been with Bull for nine years: he led the HPC benchmarking team for the first five, then took charge of the Education and Research market for HPC, and today manages the Centre for Excellence in Parallel Programming.