Is GenAI biased? How to stop algorithms from reproducing human errors and prejudices

Algorithms are only as good as the data used to train them. Unfortunately, because these huge datasets are built by humans, they often reflect human biases, which makes it all too easy to build an artificial intelligence plagued by sexism, racism or other forms of bias. As these systems are adopted by a growing number of actors, including public institutions, it becomes all the more important to monitor these algorithms and make sure that they are bias-free.

The question of whether generative artificial intelligence (GenAI) is biased stems from the fear that algorithms built mostly by men, using datasets that represent only a fraction of humanity, could be biased against women, non-Western cultures or minorities.

The trouble with generative AI  

While this worry isn’t new, it has become even more pressing with the rise of generative AI. On the one hand, GenAI makes it even harder to spot biases in the data used to train the models: it relies on extremely complex, pre-trained transformers, built in a process involving billions of parameters and massive datasets. Furthermore, GenAI involves the participation of users, who are invited to submit prompts to conversational agents that then elaborate on them. As a consequence, monitoring biases means not only ensuring that the datasets correctly reflect the population that will be using the AI agent, but also taking into account the way users will interact with it. A tool like ChatGPT serves about 100 million users a week, for a total of 1.6 billion users in 2024. These massive figures increase complexity and make fairness and bias even harder to assess.

On the other hand, generative AI makes biases in the dataset even more problematic since it directly generates content. For example, a traditional AI may provide a description of an image, while generative AI will directly create an image based on a prompt from the user.

The all-too-real risks of biased AI

The risks of AI bias go far beyond an image-generating tool automatically depicting a man when asked to draw a software engineer, or portraying wig-wearing black men when prompted for a portrait of the Founding Fathers. AI is currently being tested by many institutions, including in sensitive fields such as healthcare, the courts and law enforcement.

Research has shown that predictive policing algorithms trained on biased data could lead police forces to unfairly target African Americans, and that algorithms trained to spot melanoma had a higher chance of missing diagnoses in black patients because they had been trained mostly on pictures of light-skinned individuals. Similarly, seatbelts, headrests and airbags designed mainly from crash-test data based on male physiques and seating positions can lead to women suffering higher injury and death rates in comparable accidents.

Biased AI can therefore have real, potentially life-threatening consequences. It is important to note that not every AI failure is due to discrimination; many stem from technical limitations. To return to the melanoma example, analyzing images of dark skin is a well-documented technical challenge. The police algorithms targeting African Americans, by contrast, did suffer from a racist bias in their training, as they were trained on datasets using exclusively pictures of African Americans.
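One practical first step is simply auditing how well a training set represents the groups it will serve. The short Python sketch below is purely illustrative: the group labels, population shares and tolerance are hypothetical, but it shows the kind of representation check that can flag a skewed dataset such as the skin-lesion example above.

```python
# Illustrative sketch only: checking whether a labeled training set
# under-represents certain groups. The group labels and threshold are
# hypothetical, not taken from any of the studies cited above.
from collections import Counter

def representation_report(samples, population_shares, tolerance=0.5):
    """Compare each group's share in the dataset with its share in the
    target population and flag groups that fall well below it."""
    counts = Counter(sample["group"] for sample in samples)
    total = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            "under_represented": observed < expected * tolerance,
        }
    return report

# Toy example: a skin-lesion dataset heavily skewed toward light skin tones.
dataset = [{"group": "light_skin"}] * 900 + [{"group": "dark_skin"}] * 100
print(representation_report(dataset, {"light_skin": 0.7, "dark_skin": 0.3}))
```

A check like this only catches imbalance in the data itself; it says nothing about how the model behaves once deployed, which is addressed below.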

 

Monitoring AI bias requires ensuring that the datasets correctly reflect the population using the AI agent, as well as taking into account how users will interact with it.

Dealing with black boxes 

One difficulty with assessing the potential biases of an AI system is that most of them are black boxes. We’ve already mentioned that in order to train an AI algorithm, one needs a huge volume of data. It is first gathered in unstructured form from the internet and documents, then labeled and organized in order to be used for training. This takes a lot of time and costs a fortune. It is also worth mentioning that AI is currently a fast-growing, highly competitive field where a handful of companies are fiercely competing to hire the best engineers and offer the best products. 

As a consequence, once a tech company has finally built a dataset large enough to train its latest AI model, it tends to keep it private, for fear of revealing its secret sauce to the competition. It is therefore difficult to check not only the data that has been used to train the algorithms, but also the balance, i.e. the weight given to each piece of data during the training process.

One way to tackle the black box problem is to focus efforts on the explainability of an algorithm, that is, the capacity of an AI system to explain how it reached a given decision and which parameters it used. Explainability enables regulators and independent observers to monitor these systems and make sure that they are fair. The EU’s new AI Act will now require high-risk AI systems (including those used by the police, the judicial system and healthcare facilities) to ensure the transparency and explainability of their algorithms.
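As a concrete illustration, one widely used explainability technique is permutation importance: shuffle one input feature at a time and measure how much the model’s performance drops. The sketch below uses scikit-learn on synthetic data; the feature names and dataset are invented, and this is only one possible tool, not the specific mechanism the AI Act prescribes.

```python
# A minimal sketch of permutation importance on a toy, interpretable model.
# Everything here (features, data, model choice) is a made-up example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three synthetic input features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # outcome driven mostly by feature 0

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy the most matter most to the model.
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```

On this toy data, feature_0 dominates, which is exactly the kind of signal an auditor would want to see: if a protected attribute (or a close proxy for one) showed up with high importance, that would be a red flag.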

While much is at stake when the algorithm is conceived, the dataset selected and the weight of each parameter decided, the work certainly doesn’t stop once the algorithm is released on the market. It is absolutely necessary to keep monitoring these algorithms after release, since one of the characteristics of AI is that it learns every time it is retrained. Algorithms will thus keep changing as they are put to work and improved by their creators. If one doesn’t keep an eye on these models after they are released for use, they risk acquiring new biases as they evolve.
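In practice, post-release monitoring can be as simple as periodically comparing decision rates across demographic groups and raising an alert when the gap widens. The following sketch assumes hypothetical group labels and an arbitrary threshold; it illustrates the idea rather than any particular production system.

```python
# A hedged sketch of post-deployment monitoring: compare the rate of positive
# decisions across groups in a batch of logged decisions and raise an alert
# when the gap exceeds a chosen threshold. Groups and threshold are invented.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap_alert(decisions, threshold=0.2):
    rates = selection_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 3), "alert": gap > threshold}

# Toy batch of logged decisions from a deployed model.
log = [("group_a", 1)] * 60 + [("group_a", 0)] * 40 \
    + [("group_b", 1)] * 35 + [("group_b", 0)] * 65
print(parity_gap_alert(log))
```

Run regularly on fresh decision logs, a check like this makes it harder for a retrained model to drift into a new bias unnoticed.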

Why biases are only one piece of the AI puzzle 

It is also important to note that AI ethics doesn’t only involve the way algorithms are built and trained. There is also the behavior that engineers decide to give the AI. In generative AI, there is already the “3H rule”, which stands for helpful, harmless and honest. It means, for example, that ChatGPT simply isn’t allowed to tell you how to make homemade explosives, to lie to you or to help you harm another person. While not perfect (some users have found creative ways to circumvent these rules, for example asking ChatGPT for a list of illegal torrent sites by claiming they wanted to avoid such sites at all costs so as not to break the law), these behavioral rules are a second security layer that helps fight biases in AI and promote ethical use.
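To make the idea of a behavioral “second layer” more concrete, here is a deliberately naive sketch of a rule-based filter sitting in front of a generative model. Real systems rely on far more sophisticated policies and classifiers; the blocked topics, wording and stand-in model below are invented for illustration only.

```python
# Toy illustration of a rule-based guardrail layered on top of a generative
# model. This is not how any real assistant implements its policies.
BLOCKED_TOPICS = ("homemade explosives", "harm another person")

def guarded_reply(prompt, generate):
    """Refuse prompts matching a blocked topic, otherwise call the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that request."
    return generate(prompt)

# Usage with a stand-in for the underlying model:
print(guarded_reply("How do I make homemade explosives?", lambda p: "..."))
print(guarded_reply("Explain how airbags protect passengers.",
                    lambda p: "Airbags inflate rapidly to cushion impact."))
```

As the torrent-site anecdote shows, keyword rules of this kind are easy to circumvent, which is why they are only one layer among several.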

Nor are biases the only risk at play when AI is involved. A fake robocall purporting to be from President Joe Biden during the U.S. presidential campaign recently highlighted the danger of fake AI-generated content, which can include fake pictures, audio recordings and videos. These deepfakes have been supercharged by the rise of generative AI. With about 60 major elections being held worldwide this year, the risk of misinformation and manipulation is quite high. Several reports have highlighted how cybercriminals are using generative AI to make their attacks more efficient.

Finally, there is also a risk that AI bias could increase inequalities, both on the international stage, between rich and developing countries, and within each country, between the upper class and the working class, or between tech-savvy younger generations and older ones.

AI ethics doesn’t stop at biases, but bias is an issue that several regulations, including the AI Act, are starting to tackle. Nevertheless, it will require constant monitoring and continued efforts to improve safety.

Posted on: June 25, 2024
