Drowning in Data: The Race for Exascale Computing


Posted on: September 1, 2015 by Philippe Vannier

When visualising the rapid progress of technology, nothing provides as vivid a picture as the advances in supercomputing.

Measured in flops – an acronym for FLoating-point Operations Per Second – computer performance has improved faster than we could ever have imagined. Back in 1961, hardware capable of delivering a single gigaflop (a billion operations per second) would have cost the equivalent of $8.3 trillion – today an iPhone 6, for example, delivers around 172 gigaflops!

Back in 2005 Bull built Tera10, a supercomputer cluster delivering over 60 teraflops – or sixty thousand billion operations per second. Until 2008 it was one of the fastest machines in the world. Right now, that honour is held by Tianhe-2, a machine capable of achieving 34 petaflops. To give you a sense of scale, even with each of the USA's 320 million inhabitants making calculations at a rate of one per second, it would take more than 3 years to work through what Tianhe-2 can crunch in less than a single second!
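As a rough sketch of that comparison – assuming the 34-petaflop figure and 320 million people each making one calculation per second, as above – the back-of-envelope arithmetic runs like this:

```python
# Back-of-envelope check of the humans-vs-Tianhe-2 comparison above.
# Assumptions: Tianhe-2 sustains 34 petaflops; the USA has 320 million
# inhabitants, each performing one calculation per second.

TIANHE2_FLOPS = 34e15    # 34 petaflops = 3.4 x 10^16 operations per second
US_POPULATION = 320e6    # 320 million people
HUMAN_RATE = 1.0         # one calculation per person per second

human_ops_per_second = US_POPULATION * HUMAN_RATE
seconds_needed = TIANHE2_FLOPS / human_ops_per_second
years_needed = seconds_needed / (365 * 24 * 3600)

print(f"{years_needed:.1f} years")   # roughly 3.4 years
```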

A Crucial Development

High performance computing (HPC) is vital for the modern age. With 40 trillion gigabytes of data expected to be generated by 2020, we're going to need bigger and faster tools to process, analyse and make use of this information.

Now, in partnership with the French Alternative Energies and Atomic Energy Commission (CEA), we are developing Tera1000 – a range of supercomputers able to deliver an exaflop. It's hard to quantify the power of such a machine: at a quintillion operations per second, it would take over 5.8 million iPhones working together to match its speed.
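For a rough sense of where that 5.8 million figure comes from – a sketch, reusing the 172-gigaflop iPhone 6 number quoted earlier:

```python
# Rough check of the exaflop-vs-iPhone comparison above.
# Assumptions: an exaflop machine delivers 10^18 operations per second and
# an iPhone 6 delivers around 172 gigaflops, as quoted earlier in the post.

EXAFLOP = 1e18           # one quintillion operations per second
IPHONE6_FLOPS = 172e9    # ~172 gigaflops

iphones_needed = EXAFLOP / IPHONE6_FLOPS
print(f"{iphones_needed / 1e6:.1f} million iPhones")   # roughly 5.8 million
```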


However, when it comes to building supercomputers we need to think about more than simply making them go faster. Electricity consumption and power demands must also be front of mind. With a deadline of 2020 for the exaflop machine, our first prototype is set to be up and running in just two years; it will have a computing capacity of 25 petaflops and, relative to that capacity, an electricity consumption 20 times lower than our previous Tera100 model.

Its potential impact is immeasurable. It would revolutionise the test and development process for any number of industries. The automobile industry, for example, already relies widely on digital processes to design its products. For the price of one physical crash test, researchers can perform hundreds of digital crashes – and these are far more accurate, as every aspect can be measured and accounted for. Elsewhere, the power of Tera1000 could signal the end of animal testing or even offer insight into the way our brains work.

Ultimately, the need for significantly more processing power and speed is a perennial issue for IT. With our latest drive towards exascale computing, it could someday be a thing of the past.



About Philippe Vannier

Executive Vice President Big Data & Security Solutions and Group CTO
Philippe Vannier is Executive Vice President Big Data & Security and CTO of the Atos Group. He was CEO of the Bull Group until it was acquired by Atos in August 2014. He is also chairman of Crescendo Industries, which he founded in 2004 and which was Bull's largest shareholder. Philippe Vannier is a graduate of ESPCI ParisTech and of the INSEAD AMP, and holds a DEA in Génie électrique et Instrumentation from Université Paris IV. He began his career at Michelin North America, before moving to the Cobham Group and Alcatel.
