The value of a diverse workforce in inclusive AI development
Head of Accessibility & Digital Inclusion and member of the Scientific Community
Global Chief Diversity Officer
Posted on: 7 May 2020
This article is part of the Atos Digital Vision: Ethics opinion paper, which explores how embedding ethical reflection into the design of digital technologies can deliver genuine benefits for customers and citizens by addressing their legitimate concerns about those technologies’ wider impact, today and into the future.
Given the critical role of data in the legal, education, finance and transportation sectors, enormous damage could result almost instantly if artificial intelligence (AI) systems are not carefully constructed to prevent the data from becoming tainted by discrimination.
The data sets we use to train AI may also reflect pre-existing societal biases, and the algorithms applied to them may amplify those biases. We must determine, and operate within, the “boundary of acceptability” when developing algorithms so that technology doesn’t go rogue and create a situation like the recruiting tools reported in the media to be inadvertently biased against women. Those systems had been trained to rate applicants by observing patterns in resumes submitted over a 10-year period, most of which came from men. The tool is no longer in use.
Atos does not discriminate on the basis of race, religion, color, gender, age, disability, sexual orientation, or any distinctive traits, and we cannot allow bad AI to negatively influence our recruiting practices. We are taking this to the next level by applying the concept of ‘Design for Good’ championed by our Scientific Community member John Hall. Design for Good aims to prioritize the design of responsible digital applications. We are aiming for this moral compass — the Design for Good mentality — to be part and parcel of every decision made in the AI arena and across our other digital technologies.
Accessibility and disability factors
Corporations the world over tend to leave people with disabilities out of the Design for Good phase, and rarely include them in beta software testing or on ethics boards. In fact, ethics boards rarely include the people affected by the technology. It is critical to include people with disabilities at every stage of the development lifecycle: observing and recognizing bias is one thing, but truly understanding it from a personal perspective, having experienced it throughout your life, is quite another.
AI-driven technologies also hold great potential for solving the challenges faced by people with disabilities. For example, the accessibility of our media-rich, hyper-connected world is being improved by algorithms that deliver automatic subtitle captions and audio image descriptions to include people who are deaf or blind.
Creating AI solutions requires more than a diverse team. For the Design for Good approach to produce unbiased results in practice, team members need to feel they are contributing, that their opinions are valued, and that all perspectives and suggestions are taken seriously.
Key takeaways for action now!
• Compose teams of people from all walks of life to allow for innovative thinking.
• Put checks and balances in place at all stages of the development lifecycle to ensure that employee initiatives are inclusive, not exclusive.
• Make ethics boards representative of society to avoid “groupthink”.
• Examine data for gaps and pre-existing biases.
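The last takeaway, examining data for gaps and pre-existing biases, can be sketched as a minimal representation check. The helper below is a hypothetical illustration, not an Atos tool: it flags any categorical field in a dataset where a single value dominates or where many records are missing the field, echoing the skewed-resume example discussed earlier in the article.

```python
from collections import Counter

def audit_field(records, field, threshold=0.8):
    """Flag a categorical field whose most common value dominates the data.

    A crude first-pass check: if one value (e.g. one gender in a resume
    set) accounts for more than `threshold` of the non-missing records,
    the field is flagged for human review.
    """
    counts = Counter(r[field] for r in records if r.get(field) is not None)
    total = sum(counts.values())
    missing = len(records) - total          # records with no value at all
    top_value, top_count = counts.most_common(1)[0]
    share = top_count / total
    return {
        "field": field,
        "missing": missing,
        "dominant_value": top_value,
        "dominant_share": share,
        "flagged": share > threshold,
    }

# Example: a resume data set heavily skewed toward one gender
resumes = [{"gender": "M"}] * 90 + [{"gender": "F"}] * 10
report = audit_field(resumes, "gender")
print(report["dominant_value"], report["dominant_share"], report["flagged"])
# M 0.9 True
```

A check like this does not remove bias by itself; it only surfaces imbalances so a diverse review team can decide whether the training data needs rebalancing before any model is built.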
For more information, and to read other experts’ insights on the topic, download Atos Digital Vision: Ethics.