HPC is everywhere
HPC has become an integral part of our daily lives, even if we are not always aware of it. HPC is now used in every sector: for our medicines, for our mobile phones, for the film industry, for our athletes' equipment, for car manufacturing and fuel optimization. HPC has a direct impact on our quality of life, making our lives safer with more reliable and accurate weather forecasts and better anticipation of natural disasters. Every sector of industry and research relies on high-performance computing. Discover here a few examples of what BullSequana supercomputers can bring to their users – and perhaps how your research or your business could benefit from HPC.
Deep learning is the capacity of a computer to recognize specific representations (images, texts, videos, sounds) after being shown many examples of these representations. For example, after being shown thousands of cat pictures, the program “discovers” by itself what the specific features of a cat are and can then distinguish a cat from a dog or any other picture.
This learning technology, based on artificial neural networks, has revolutionized Artificial Intelligence (AI) in the last five years. So much so that today, hundreds of millions of people rely on services powered by deep learning for speech or face recognition, real-time voice translation or video discovery. It is used for example by Siri, Cortana and Google Now.
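The learning-by-example loop described above can be illustrated with a minimal sketch: a tiny neural network trained by gradient descent on synthetic 2D data (standing in for image features), which learns to separate two classes purely from labeled examples. The data, network size and hyperparameters here are illustrative assumptions, not part of any Atos product or real deep-learning deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D "features": class 0 clusters near (-2, -2), class 1 near (+2, +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# One hidden layer, initialized with small random weights.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass: hidden activations, then predicted probability of class 1.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass: binary cross-entropy gradients, averaged over examples.
    grad_out = (p - y)[:, None] / len(y)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)   # tanh derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    # Gradient-descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
accuracy = (preds == y).mean()
```

The network is never told what distinguishes the two classes; it infers a decision boundary from the examples alone, which is the same principle that, at vastly larger scale on GPU-accelerated systems, underlies image and speech recognition.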
Data Science Services
The High Performance Data Analytics competency centre aims to assist customers starting their cognitive projects, providing recommendations on innovative methods and algorithms for their use cases, guidance on application performance expectations, and insight into upcoming hardware trends.
Competency centre staff includes High Performance Computing engineers and data scientists, such as deep-learning and NLP experts.
Customers can leverage the existing competency centre infrastructures, based on proven reference architectures, to test their applications while getting advice on how to make the most of them. The competency centre supports customers during their proof-of-concept stages, relying on trainings, webinars, workshops and dedicated services.
Atos and Deep Learning
The Atos and Bull teams are actively working, together with technological partner NVIDIA®, to provide the Deep Learning expertise, the hardware and software solutions, and the services you need throughout the development stage and the production stage of your Deep Learning project.
BullSequana X1125 blade
BullSequana X1000 – the open supercomputer that is ready to take on the major challenges of the 21st century – supports different types of compute blades, including the GPU-accelerated BullSequana X1125 blade. With this compute blade, BullSequana leverages the NVIDIA Volta architecture, with Tesla® V100 and NVLink, to deliver up to 50x performance boost for applications and drive new possibilities in Deep Learning applications.
Supercomputing and meteorology
Without supercomputers, weather forecasting as we know it today would not be possible. And as the computing power available to meteorological agencies increases, weather forecasts improve in many ways.
With supercomputers, climate researchers are able to perform climate simulations at a higher resolution, to include additional processes in earth system models, or to reduce uncertainties in climate projections.
Relying on fine-grain weather forecasts to anticipate severe phenomena
Between 1992, when Météo-France invested in their first supercomputer, and today, compute capacity has increased by a factor of 500,000 – and Météo-France expects this trend to continue. Weather forecasting agencies worldwide need to:
► issue forecasts every hour;
► use a finer mesh size for finer and more reliable predictions;
► predict the exact location and timing of severe weather phenomena.
These objectives require increased model resolution and the incorporation of a greater quantity of data and observations in the forecasting process. This means more computing resources and the capacity to handle massive data efficiently.
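To see why finer mesh sizes translate into much larger compute needs, consider the usual back-of-the-envelope scaling: refining the horizontal grid spacing by a factor k multiplies the number of grid points by k in each horizontal dimension, and the CFL stability condition forces the timestep to shrink by roughly the same factor, so total cost grows about as k³. This is a textbook approximation, not a Météo-France figure; the sketch below just makes the arithmetic explicit.

```python
def relative_cost(refinement, horizontal_dims=2, refine_time=True, refine_vertical=False):
    """Approximate cost multiplier when grid spacing shrinks by `refinement`.

    Each refined spatial dimension multiplies the number of grid points by
    `refinement`; if `refine_time` is set, the CFL condition forces
    proportionally smaller timesteps, hence proportionally more of them.
    """
    dims = horizontal_dims + (1 if refine_vertical else 0)
    cost = refinement ** dims
    if refine_time:
        cost *= refinement  # smaller timestep -> more steps to cover the same period
    return cost

# Halving the mesh size (refinement factor 2) in the horizontal:
cost_halved = relative_cost(2)      # 2 * 2 * 2 = 8x the compute
# Going from a 10 km to a 1 km grid (refinement factor 10):
cost_10x = relative_cost(10)        # 10 * 10 * 10 = 1000x
```

This cubic growth, before even counting extra data assimilation and ensemble members, is why each step in forecast resolution demands a new generation of supercomputer.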
HPC to face the challenge of climate change
Valérie Masson-Delmotte is a French climate scientist and Research Director at the French Alternative Energies and Atomic Energy Commission, where she works in the Climate and Environment Sciences Laboratory (LSCE). She explains here how High Performance Simulation can help face the challenges of climate change.
CNAG: Unravelling the mysteries of DNA
Ivo G. Gut, Director of the Spanish Genomics Center (CNAG), explains how they analyze more than ten human genomes every day, with the help of Bull supercomputers. Their ultimate purpose is to turn the sequences into valuable insight, so as to improve people’s health and quality of life.
Pirbright Institute: advancing viral research and animal health
When a deadly virus emerges, scientists must respond rapidly to characterize the virus, track its spread, and stop it from devastating livestock and possibly infecting humans. As a global leader in this work, The Pirbright Institute in the UK needs flexible high-performance computing (HPC) resources that can handle a wide variety of workloads.
Pirbright deployed a Bull supercomputer from Atos. With a unified environment running its diverse applications, Pirbright enhances scientific productivity and helps policymakers respond effectively when a viral outbreak threatens.
Boost the translation of Omics to the clinic environment
To face the demands of an ageing population, the current healthcare system needs to evolve towards a sustainable model focused on patient wellness. This revolution will only be possible by transferring research breakthroughs – from genomic research in particular – to everyday healthcare.
CIPF: Leveraging genomics for better diagnosis and treatment
Genome sequencing and analysis are complex tasks that demand powerful analytics platforms. The compute time needed for sequencing has been reduced considerably in recent years, making it possible to drastically increase the amount of genomic data collected on large study populations. This opens the way to a new genomic-based healthcare service, leveraging in-depth and comprehensive genomic analyses for a predictive and personalized medicine. The challenge is to achieve:
► Better and predictive diagnosis
► More efficient treatments
► Customized dosing
To implement such a promising project, sequence analysis must be available on an industrial scale, and complex analytics must be supported. This requires computational power on an unprecedented scale.