Ethics by design: Taking responsibility in the age of machines
Autonomous systems are advancing by the day. Physical robots (such as driverless cars) and pure software (such as diagnostic systems, predictive systems and personal assistants) now leverage advanced algorithms, machine learning and Artificial Intelligence (AI). As they gain capabilities, we trust them to make more decisions on our behalf, increasingly without any human intervention.
But machines aren’t humans. They lack something fundamental to our makeup – a conscience.
As human beings, our conscience influences our decision-making. We work to ensure our decisions respect human rights, comply with regulations, avoid risk and more.
When a machine makes a decision on our behalf, it can’t judge whether that decision is ethically good or bad. This can lead to unintended consequences.
The potential for bias
Bias is probably the best-known unintended consequence. When Amazon's facial recognition system misidentified numerous members of the US Congress as criminals, for instance, racial bias was thought to be the cause.
Amazon also recently scrapped its AI recruiting tool because of bias – this time toward men. Amazon trained the tool, which reviewed applicants’ CVs, to learn patterns in CVs submitted over the previous decade. The tool picked up on bias in the data reflecting the industry’s male dominance.
Two factors can be behind the bias: the data and the methods. Any bias in the data used to train the algorithms is amplified in the machine's decision-making. "Bias in – bias out," as we say. But human beings select the data; machines cannot judge whether that data will lead to a biased outcome.
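The "bias in – bias out" effect can be made concrete with a minimal sketch. All data, names and numbers below are hypothetical, invented purely for illustration: a toy model learns hire rates from a biased history and then faithfully reproduces that bias in its predictions.

```python
# Minimal sketch of "bias in, bias out": a toy model trained on biased
# historical hiring data reproduces the disparity in its decisions.
# The data and groups are hypothetical, for illustration only.

def train_hire_rates(records):
    """Learn a per-group hire rate from (group, hired) pairs."""
    counts = {}
    for group, hired in records:
        total, hires = counts.get(group, (0, 0))
        counts[group] = (total + 1, hires + (1 if hired else 0))
    return {g: hires / total for g, (total, hires) in counts.items()}

def predict_hire(model, group, threshold=0.5):
    """Decide based on the learned rate -- the historical bias carries over."""
    return model.get(group, 0.0) >= threshold

# Biased history: group A was hired 80% of the time, group B only 20%.
history = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 2 + [("B", False)] * 8

model = train_hire_rates(history)
print(model)                     # {'A': 0.8, 'B': 0.2}
print(predict_hire(model, "A"))  # True
print(predict_hire(model, "B"))  # False
```

Nothing in the model "decides" to discriminate; it simply encodes the disparity it was given, which is why scrutinizing the training data is a human responsibility.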
Bias can also come from the design of the autonomous system itself. No design is ever completely ethically neutral, so the methods it produces carry that bias with them, and it is revealed in the machine's results.
From privacy to ethics
Ethics is climbing the agenda. You only have to look at the Montreal Declaration for a Responsible Development of AI, the AI principles developed by the Future of Life Institute and the World Economic Forum's AI Board Toolkit, along with the OpenAI, IEEE and EU discussions on autonomous systems and ethics. In an enterprise context, ethical questions are playing an increasingly strategic role in the governance and interplay of social and sustainable responsibilities. But all too often questions focus on privacy.
Gartner has included 'digital ethics and privacy' as a strategic trend for 2019, as TechCrunch reports: "Shifting from privacy to ethics moves the conversation beyond 'are we compliant' toward 'are we doing the right thing.'" This change in direction may reflect a growing understanding of a second potential unintended consequence of autonomous systems: longer-term changes in human behavior.
Algorithms scoring our every move bring with them unintended consequences: we risk building a culture of conformity, risk aversion and social rigidity, as Social Cooling describes. On top of that, this scoring also enables new surveillance mechanisms: the New York Times highlights how China is using its thriving technology industry to identify and track 1.4 billion people.
For Atos, the ethics challenge is much broader than privacy. It’s about us, as human beings, taking the final responsibility for the outcomes of the decisions made by our machines. It’s about us not hiding behind the technology.
It also spans several dimensions: technological, managerial, regulatory and methodological.
The regulatory dimension is largely down to governments, with the pace dictated in part by public awareness and – potentially – a public backlash. Regulations addressing ethics will develop differently and at different speeds across the diverse regions of the world, in the same way privacy regulations have.
In the EU, for instance, advisory groups and regulatory boards are developing guidelines and best practices to address the ethics challenge. But regulations will take time to emerge – and organizations need to act now.
An ‘ethics by design’ approach
There are, however, practical steps organizations can take to address the other three dimensions in an approach we call ‘ethics by design.’ An extension of the ‘privacy by design’ approach, the ‘ethics by design’ approach combines ‘Methodology + Tooling + Organizational change.’
The methodology aspect of 'ethics by design' embeds an ICT ethics review process into the data science methodologies, potentially through an independent ethics advisory board. Tooling then helps ensure that the data used for training autonomous machines are free from bias and that humans can easily understand the algorithms used by machines. A system of checks can be put in place in which humans check for machine bias, and machines help detect human bias.
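One such tooling check could be sketched as follows. This is a hedged illustration, not an Atos product or a standard API: the function names, the parity metric and the 0.2 tolerance are all assumptions chosen for the example. It measures how far selection rates diverge between groups in a training set and flags the data for a human ethics review when the gap is too wide.

```python
# Illustrative sketch of a pre-training bias check: compare positive-outcome
# rates across groups and flag wide gaps for human review.
# Function names, data and the tolerance threshold are assumptions.

def selection_rates(rows):
    """Positive-outcome rate per group for (group, outcome) rows."""
    totals, positives = {}, {}
    for group, outcome in rows:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if outcome else 0)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rows):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(rows).values()
    return max(rates) - min(rates)

def flag_for_review(rows, tolerance=0.2):
    """Escalate to a human ethics review if the gap exceeds the tolerance."""
    return parity_gap(rows) > tolerance

# Hypothetical training data: (group, positive outcome)
data = [("A", True)] * 7 + [("A", False)] * 3 + \
       [("B", True)] * 3 + [("B", False)] * 7

print(parity_gap(data))       # ~0.4 (within floating-point error)
print(flag_for_review(data))  # True
```

The point of the sketch is the division of labor the section describes: the machine surfaces a measurable warning sign, and a human makes the ethical judgment about what to do with it.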
When it comes to organizational change, the 'ethics by design' approach requires top management buy-in and appropriately empowered internal bodies – most likely evolving the bodies currently responsible for addressing the privacy challenge. After all, ethical considerations span all levels of the organization: from board commitment to the ethical behavior of developers, and from its culture to its business models.
Here at Atos, we are already building those tools and methodologies. We want to offer our clients the capabilities and best practices needed for adopting the ‘ethics by design’ approach.
Ethics will be crucial to the success of autonomous systems, and the ethics challenge will only grow more pressing as machines advance and awareness of their capabilities grows. Organizations across all industries – both private and public – need to act today. After all, embedding human values in machines will only become more complex the more deeply machines are woven into our lives!
This blog has been edited by the Atos Scientific Community members of the Ethics of Autonomous Systems Track: Celestino Güemes, Claire Le Floch, Kai Geese and Olivier Maas.