AI Explainability: making the complex comprehensible

José Esteban Lauzan

Head of Innovation at Atos Iberia, founding member of the Atos Scientific Community, Atos Distinguished Expert

Amélie Groud

Senior Data Scientist and member of the Scientific Community

Posted on: 21 April 2020

This article is part of the Atos Digital Vision: Ethics opinion paper, which explores how embedding ethical reflection into the design of digital technologies can deliver genuine benefits for customers and citizens by addressing their legitimate concerns about the wider impact of those technologies, today and into the future.

AI Explainability is the name given to the approaches, techniques and efforts that aim to make Artificial Intelligence (AI) algorithms explainable to humans.

In the case of some AI algorithms, especially machine learning (ML) ones, the result cannot be understood by a human expert in the relevant subject matter, and even the designer of the solution cannot explain why the AI arrived at that specific result. This lack of explainability raises concerns around safety, ethics, fairness and reliability, and ultimately around trust in the proposed solution.

AI Explainability is complicated. ML algorithms detect patterns in input data and derive insight from them, but this process cannot be captured simply by listing rules or instructions in a human-readable format. Nor can machine learning be understood by analogy with human learning: an ML model can integrate thousands of dimensions into its learning process, whereas a human being can barely work with more than a handful simultaneously, and ML algorithms usually require a large amount of input data, whereas humans need only a few examples to start making accurate decisions.

Responsibility for automated decision-making

To make things more complicated, it turns out that we apply different standards to humans and to familiar algorithms (such as rule-based ones) than we apply to more innovative algorithms such as ML ones.

Can technology help with rational decisions?

Humans are known to display bias in judgement and decision-making. Studies have shown that in hiring processes, for example, if photos and certain demographic information are removed from application forms, people often arrive at different selection decisions. Provided they are well designed with ethical considerations built in from the outset, digital applications could in principle vet applications with greater impartiality.

Nevertheless, humans consider themselves explainable because most of us can articulate why we made a particular decision. And there is an incentive to be able to explain: humans can be held legally responsible for the consequences of their decisions.

By contrast, algorithms are not (yet) responsible for their decisions, which means that determining liability in automated decision-making is still an open legal question. Because of this gap in liability and because many AI algorithms are new to humans and business applications, there is a natural lack of trust in them and a strong desire for AI Explainability.

Making a tangible difference to citizens and society

Achieving AI Explainability requires understanding and insight into both the socio-economic and the scientific-technical dimensions of the problem.

Societies will probably progressively trust AI algorithms as their use becomes more widespread and as legal frameworks refine the allocation of liabilities. Of course, cultural differences greatly affect how countries and regulatory regions approach AI. In countries such as China, regulation is lax and the political system seemingly places little importance on the freedom of individuals; for example, China is implementing a social credit system, based on algorithms, which aims to provide a standardized assessment of the trustworthiness of its citizens.

This context makes Ethical AI and Explainable AI, as we see them in Europe, less applicable there. In the US, while the rights of individuals carry more weight, regulation is also lax (especially for business purposes), so the workability and benefits of AI solutions are valued more highly than their explainability. In other regulatory areas such as the EU, emphasis is placed on both individual rights and regulation (such as the General Data Protection Regulation), with the result that explainability is often more important than workability, especially in heavily regulated sectors such as energy and finance.

From a scientific and technical perspective, methods and techniques are being researched and developed with the objective of increasing the interpretability of algorithms. Some of these methods are model-agnostic and can provide meaningful insights from any trained ML model (e.g. Shapley values, a method originating in cooperative game theory, where payouts are assigned to players according to their contribution to the total payout; applied to ML, each input feature receives a share of the credit for a prediction). Other techniques are specific to a given family of algorithms.
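To make this concrete, here is a minimal sketch of a Shapley-value explanation using the open-source shap library. The dataset and model below are illustrative stand-ins, not anything from this article; because the explainer treats the model as a black box, any trained model with a prediction function could be substituted.

```python
# A minimal, illustrative sketch: Shapley-value explanations for a black-box
# model with the open-source "shap" library (pip install shap). The dataset
# (scikit-learn's diabetes data) and the random-forest model are stand-ins.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(data.data, data.target)

# KernelExplainer is model-agnostic: it only ever calls model.predict.
# The background sample stands in for "absent" features when the method
# evaluates coalitions of features.
background = shap.sample(data.data, 50)
explainer = shap.KernelExplainer(model.predict, background)

# Each value is one feature's contribution to this single prediction,
# relative to the average prediction over the background sample.
shap_values = explainer.shap_values(data.data[:1])

for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Features with large absolute values pushed this particular prediction furthest from the average; that list of signed, per-feature contributions is one concrete form an explanation can take.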

Depending on the use-case, client, sector, market, regulation, political environment, culture and so on, using a specific AI-ML model may or may not make sense, be legal or be ethical. We should identify the nuances and highlight the need for case-by-case analysis and decision-making. In healthcare, finance or energy, most scenarios are heavily regulated, which means more explainability is needed. In other scenarios we might favor benefits over explainability: for example, an AI-ML model that monitors the manufacturing of non-critical parts and flags up when problems are likely to appear in the process.

What all these scenarios have in common is a need for focused consideration of the level of transparency that is required and how it can be achieved.

For more information and to read other experts’ insights on the topic:

Download Atos Digital Vision: Ethics

About Jose Esteban Lauzán
Head of Innovation at Atos Iberia, founding member of the Atos Scientific Community, Atos Distinguished Expert
José is the Head of Innovation at Atos Iberia, Editor-in-Chief of Journey 2020, founding member of the Atos Scientific Community and an Atos Distinguished Expert. He is passionate about innovation and how it can transform business. José leads innovation activities in Spain and Portugal, including Innovation Workshops, pilots, proofs of concept, events and the Employee Start-Ups initiative. He leads the Systems & Solutions domain at a global level. He also collaborates with the Executive Committee on corporate initiatives, with R&D and Markets on the transfer of successful results to the Atos portfolio, and with Legal departments on IP-related initiatives. José started his career as a researcher at university (simulation, DSP) and as an innovator at the Spanish Medicines Agency (leading its transformation into an electronic organization). He joined Atos in 2000 as chief engineer, manager of R&D teams and coordinator of large international projects in eHealth, natural risk management, video and human language technologies.

About Amélie Groud
Senior Data Scientist and member of the Atos Scientific Community
Senior Data Scientist Amélie Groud at Atos North America is responsible for designing artificial intelligence (AI) solutions for enterprise customers through the implementation of advanced analytics, big data tools and machine learning techniques. She translates emerging data technologies into tangible, deliverable IT services for various vertical applications, including industry, defense and aerospace. Groud also serves as a member of the Atos Scientific Community, a global network of some 150 of the top scientists, engineers and forward thinkers from across the Group. She earned her master's degree in computer science with an emphasis in machine learning from the University of Technology of Compiègne in France.
