Is ChatGPT your biggest fan? Avoiding confirmation bias in AI interactions
The topic of bias in generative AI has garnered significant attention, and rightly so. My colleague Samia Jnini recently wrote a very interesting article on the topic, focusing on the harm that biased AI can cause to certain groups. This is a great example of what we call “bias in development,” which occurs due to factors like training data, labeling practices and algorithmic structures.
However, there's another form — “bias in use” — over which we have much more control. Recently, I have noticed that the more I use ChatGPT instead of Google for searching and conducting research, the more I see confirmation bias creeping in.
Confirmation bias is the tendency to seek out, interpret and remember information in ways that confirm our pre-existing beliefs while choosing to ignore contradictory evidence. It's a universal human trait, though we might not readily admit it. Generative AI (GenAI) can amplify this bias, acting like an additional voice in our heads that not only praises us for being "right" but also supplies us with supporting evidence.
This isn’t a flaw of the GenAI itself but a result of how we interact with it. As users, we can sometimes steer the conversation to echo our existing beliefs and overlook counterarguments. Recognizing and mitigating this bias is crucial for getting the most out of GenAI. Below, I have outlined some strategies to avoid bias and improve how you interact with generative AI tools.
Five strategies to avoid biased AI results
1. Avoid leading the witness
Be careful not to guide the AI to a particular answer and pay close attention to how you are phrasing your queries. The most significant new capability that GenAI brings is its ability to understand the user’s intent. In other words, it’s intelligent enough to understand what you’re asking and what sort of answer you expect to receive.
For example, if you ask it “Tell me why Diego Maradona was a better footballer than Lionel Messi,” it will not contradict you. Its goal is to provide information that satisfies the parameters of your query, so it will try to provide data that supports your implied conclusion and may ignore evidence to the contrary.
This sort of questioning is, let’s say, questionable in many courts of law, and it’s certainly not going to help you arrive at an impartial, fact-based conclusion with AI.
2. Ask open-ended questions
When writing an AI query, it’s important to allow room for multiple perspectives. If you’re looking for real insight, don’t ask a simple “yes” or “no” question. Give the AI the context it needs in order to provide relevant responses, but leave it room to be creative. It has been trained on an immense universe of information, so don’t box it in. You can always narrow down the results with a more probing follow-up question that will get you closer to the insight you’re seeking.
3. Use neutral language
Equally important as not leading the witness is framing your questions neutrally so that they avoid suggesting a specific outcome. Large language models (LLMs) have been trained so well that they are highly sensitive to the subtle biases on display when we speak and write.
This can be especially important for people using AI in a language that employs gendered nouns, like French, Spanish, German, Italian and others. For example, let’s say you ask ChatGPT in German, “When looking for a Python developer, what would his ideal qualifications be?”
In this context, the AI might understandably assume that you are only looking for male candidates and skew its response based on that assumption. Better to use a gender-neutral term like “their” — or better yet, phrase the query in a way that avoids a pronoun altogether.
The same goes for more subtle examples like loaded or coded language. Some words (like regime vs. government, eradicate vs. remove, superior vs. better, etc.) have embedded emotional or political connotations that may steer you towards an undesirable or biased response from the AI.
4. Seek out diverse sources
One way to increase your confidence in the information that GenAI provides is to request information from a range of different viewpoints. For example, you can specify what type of sources to consider in your initial query. Try adding a phrase like “include sources such as academic articles, trade publications and analyst reports” to the end of your question.
Alternatively, you can construct a follow-up question such as “Which of these opinions would be more likely to be expressed by a practitioner versus a researcher?” In the end, GenAI is great at finding a consensus in its responses, so if you want to ensure diversity, you may need to ask for it.
5. Look for contradictory evidence
Finally, a great practice for using GenAI to sharpen your understanding of a topic is to deliberately ask for opposing arguments. As explained earlier, generative AI is great at understanding intent and will quickly find a wealth of information that supports your initial assumptions. Once you have arrived at a likely answer, feed it back to the AI and ask it to poke holes in your argument. You may be surprised at how much you learn.
Generative AI is a powerful tool, but it can consciously or unconsciously be misled by us, the users. The strategies above will help ensure fairer, more accurate and less biased results from AI, so use them wisely.
When it comes to using GenAI to help advance your work, the best piece of advice I can offer is this: Don’t be too hard on yourself. We are human, after all.
Posted on: July 9, 2024
Adil Tahiri
Head of CTO and Client Advisory Group
Member, Atos Research Community