HPC is like Black Gold to the Oil and Gas Industry


Posted on: November 18, 2015 by Xavier Vigouroux

Despite its importance to everyday life, and even with the high involvement of politics in the energy transition, there is still no short-term replacement for our use of fossil fuels. Oil and gas remain crucial to maintaining our current way of life: for transport and heating, but also through the products derived from the raw material. Such is their importance that the finances of every citizen are directly affected by the price of a barrel. That price depends on many parameters, but perhaps most crucially on the cost of producing each barrel.

To help keep margins healthy, the extraction of deposits has to be optimized. Worryingly, the return on energy invested is now decreasing for fossil fuels. Sinking a well costs millions of dollars, so you had better be sure you are drilling in the right place! And despite intense industry activity, there are still new deposits of fossil fuel yet to be detected.

Both of these issues are addressed by a scientific domain known as seismic imaging. Now a common practice, it is an area where organizations find competitive differentiation by developing, refining and finessing the algorithms used to characterize the subsurface: the more precise and efficient the algorithms, the more cost-effective the organization's drilling.

One such form of seismic imaging is Full Waveform Inversion (FWI), a powerful, data-intensive method that can now be adopted across the industry thanks to developments in computational capacity and seismic data acquisition. Fifteen years ago, when FWI was considered by very few scientists, the Seiscope consortium was already pioneering its development. The method is based on comparing simulated seismic waves against those measured by receivers, often placed at the surface. Measurements are acquired during huge campaigns in which thousands of seismic waves are triggered. Simulating a seismic wave in 3D is computationally intensive, yet to obtain an accurate image of the subsurface the process has to be performed thousands of times. High Performance Computing is therefore essential to the process.
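The core idea of comparing simulated and measured waves can be illustrated with a minimal sketch. This is not Seiscope's code: the names `misfit`, `fwi_objective` and the `simulate` callback are hypothetical, and the least-squares misfit shown here is only the textbook form of the FWI objective; a real 3D wave simulation is what makes each evaluation so expensive.

```python
import numpy as np

def misfit(d_sim, d_obs):
    """Least-squares misfit between simulated and observed shot gathers.

    d_sim, d_obs: arrays of shape (n_receivers, n_time_samples).
    """
    residual = d_sim - d_obs
    return 0.5 * np.sum(residual ** 2)

def fwi_objective(model, shots, simulate):
    """Total FWI objective: the sum of misfits over all shots.

    `simulate(model, source)` stands in for the expensive 3D wave
    simulation; each entry of `shots` is a (source, observed_data) pair,
    so one evaluation runs the simulation once per triggered shot.
    """
    return sum(misfit(simulate(model, src), d_obs) for src, d_obs in shots)

# Toy usage: the "simulation" just scales a fixed wavelet by the model.
wavelet = np.ones((2, 3))                       # 2 receivers, 3 time samples
simulate = lambda m, src: m * src * wavelet
shots = [(1.0, 2.0 * wavelet), (2.0, 4.0 * wavelet)]
print(fwi_objective(2.0, shots, simulate))      # model matches the data
```

Inversion then means adjusting `model` until this objective is minimized, which requires re-running the simulations at every iteration, hence the need for HPC.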

Seiscope is managed by three of France’s public laboratories and sponsored by 13 Oil and Gas companies working together in quantitative seismic imaging with the aim of working on this common R&D endeavour to enhance their operations. 

Bull, the Atos brand for its technology products and software, with the help of its technology partner Intel, is helping the Seiscope consortium to optimize and modernize their codes of seismic imaging in preparation for the next HPC platforms (Bull Sequana) and solutions.

[Image: Bull Sequana, fastest HPC supercomputer]

Some technology innovations will only benefit particular kinds of method, for instance I/O-intensive codes, while others will be exploited differently depending on the numerical method.

Thus, the Centre for Excellence in Parallel Programming (CEPP) in Grenoble is working on emerging oil and gas algorithms and sharing its knowledge on computing platforms and solutions with Seiscope developers.

 

Credits go to Benjamin Pajot, who contributed the content for this post. Benjamin Pajot is a Senior HPC Applications & Performance Expert at Bull and a specialist in the development of numerical methods for high-resolution imaging of the Earth, applied at different scales. These methods use seismic and/or electromagnetic waves that propagate in the subsurface and are measured during exploration experiments.



About Xavier Vigouroux

Director of the Center for Excellence in Parallel Programming, Distinguished Expert and member of the Scientific Community
After a PhD in distributed computing from the Ecole Normale Supérieure de Lyon, Xavier Vigouroux worked for several major companies in different positions. He has now been working for Bull for nine years: he led the HPC benchmarking team for the first five years, was then in charge of the "Education and Research" market for HPC, and now manages the Center for Excellence in Parallel Programming.