Tackling fake news without suppressing free speech

Limiting the impact of fake news is easier said than done, since not everyone agrees on what qualifies as such. Drifting away from the attention economy would be a good start.

Over the last few years, fake news has made headline after headline and found its way into our everyday vocabulary. Yet, fake news didn’t start with social media. It’s always been part of the information business. In 1898, as tensions were rising between the U.S. and Spain over the independence of Cuba, President McKinley sent the USS Maine to Havana to put some pressure on the Spanish crown. When the battleship exploded, American newspapers, without any proof, quickly blamed the Spaniards, leading to a declaration of war and an American invasion led by future U.S. President Theodore Roosevelt. It was later discovered that the explosion was actually accidental.

And yet, modern means of communication, new technologies and social media seem to have made fake news proliferate, giving us the feeling that we now live in a post-truth era, a term coined by the American author Ralph Keyes. What we now define as fake news is less blatant disinformation than alternative theories that blur the line between what is true and what isn’t. The New York Times and CNN don’t always tell the truth (an obvious example being the infamous weapons of mass destruction in Iraq), while even a conspiracy theorist like Alex Jones can sometimes say something true. We also have to take into account the fact that technology has enabled alternative ideas to propagate.

We need to talk about social media
These alternative explanations are now able to reach a mainstream audience, benefiting from modern, digital means of communication and information overload. All kinds of theories therefore suddenly become slightly plausible. QAnon is a great example of that. Social media has also helped fake news proliferate, not only by making it easier to share information but also because of algorithmic bias.

As Francesca Musiani, a CNRS sociologist who works on issues surrounding the internet and new technologies, puts it, “Many studies show that this kind of content has a higher chance of being read and shared by users on social media platforms. And these platforms have a business model focused on gathering data from users that they can then monetize with advertisers. Therefore, they want their users to stay on the platform as long as possible, be engaged and entertained. This attention economy ends up naturally favoring fake news.”

The effect is amplified by the fact that platforms like Facebook and Twitter, which were once mainly tools of socialization, have now become the main way for a large part of the population to access information.

Algorithms vs. algorithms
Facing a backlash over fake news, social media platforms have started to react, setting up huge teams of moderators to limit the sharing of disinformation. They doubled down on these efforts under public pressure during the U.S. election and its aftermath, which culminated in the assault on the Capitol by Trump supporters.

These efforts haven’t always been well received: some claimed they didn’t go far enough to stop the proliferation of fake news, while others saw a threat of censorship from big tech companies, with conspiracy theorists going so far as to claim the platforms were trying to silence everyone who disagreed with them.

“Modern means of communication, new technologies and social media seem to have made fake news proliferate, giving us the feeling that we now live in a post-truth era”

Yet, promising new technologies could soon help us limit the efficiency of fake news while also protecting freedom of speech. I believe that blockchain is one of them. It could be used to verify the provenance of a piece of content (article, picture, film…) and make sure that it has not been altered. The content would be registered on the blockchain with a fingerprint created for it; when someone reuses that content, anyone can check that the fingerprint matches the original one, showing that it’s authentic.
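The register-and-verify idea can be sketched in a few lines. This is only an illustration of the fingerprinting principle: it uses an ordinary SHA-256 hash, and a plain dictionary stands in for the blockchain ledger (the function names and the `article-001` identifier are invented for the example).

```python
import hashlib

# A plain dict stands in for the blockchain ledger in this sketch;
# a real system would record fingerprints on an immutable chain.
ledger = {}

def fingerprint(content: bytes) -> str:
    """Create a SHA-256 fingerprint of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def register(content_id: str, content: bytes) -> str:
    """Record the content's fingerprint in the ledger under an identifier."""
    fp = fingerprint(content)
    ledger[content_id] = fp
    return fp

def verify(content_id: str, content: bytes) -> bool:
    """Check that a reused copy still matches the registered fingerprint."""
    return ledger.get(content_id) == fingerprint(content)

original = b"Full text of the article as first published."
register("article-001", original)

print(verify("article-001", original))                     # unaltered copy
print(verify("article-001", b"Subtly edited version."))    # altered copy
```

Because the hash changes completely if even one character of the content changes, any alteration after registration is immediately detectable.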

Artificial intelligence is another promising area. It includes semantic technologies that can assess whether a text is authentic: algorithms are starting to be able to tell whether a text attributed to a person really comes from him or her, based on how the text is worded compared with that person’s other writings. Some neural networks are also trained to create fake news precisely in order to learn to spot it.
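A toy version of that authorship idea can be built with nothing more than character n-gram frequencies, a classic stylometry signal. This is a deliberately minimal sketch, not a real attribution system; the helper names and sample texts are made up for the example.

```python
import math
from collections import Counter

def ngrams(text: str, n: int = 3) -> Counter:
    """Character n-gram counts: a crude stylistic 'profile' of a text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram profiles (1.0 = identical style)."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def likely_author(disputed: str, samples: dict) -> str:
    """Attribute the disputed text to the author with the closest profile."""
    profile = ngrams(disputed)
    return max(samples, key=lambda name: cosine(profile, ngrams(samples[name])))

samples = {
    "Author A": "The quick brown fox jumps over the lazy dog while the lazy dog sleeps.",
    "Author B": "Quarterly revenue increased thanks to strong performance in cloud services.",
}
disputed = "A quick brown fox jumped over another lazy dog."

print(likely_author(disputed, samples))
```

Real stylometric systems use far richer features (word choice, syntax, punctuation habits) and much larger writing samples, but the principle is the same: compare the statistical fingerprint of a disputed text against an author’s known writings.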

Built by a team of researchers at the University of Washington, the program Grover follows this pattern. “Given a headline like ‘Link Found Between Vaccines and Autism,’ Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation. Developing robust verification techniques against generators like Grover is critical. We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators,” write the researchers.

Can’t we just build a new internet?
But according to Francesca Musiani, the most efficient way to tackle misinformation would be to challenge the monopoly of big tech by using alternative products that use a different business model. Musiani writes, “Alternatives to centralized internet services do exist, such as PeerTube, a decentralized version of YouTube, or Diaspora, a distributed social network.”

Some consider nonetheless that this kind of solution doesn’t go far enough, that the web as it exists now is broken and cannot be fixed. So, let’s build a new one! To that point, Musiani notes, “Tim Berners-Lee is currently working on Solid, a web decentralization project, with the goal of getting rid of the advertising economy that profits from users' data. Louis Pouzin is also seeking to build an entirely new internet infrastructure with his project RINA… There’s a whole dynamic around inventing brand-new models.”

To be able to use these new tools and more easily spot fake news, the average web user will still have to update their skills. “One has to get to know all the technical layers that compose information, which requires spending some time and energy,” concedes Francesca Musiani. Big tech companies could promote this shift by offering online courses and digital training, and movies and TV shows have started to tackle it with programs like Black Mirror, Mr Robot, or Netflix documentaries such as The Great Hack and The Social Dilemma.

Finally, carefully designed regulations can help us move forward. This was one of GDPR’s ambitions in Europe: to limit how social media leverages personal data to target very specific audiences. However, I agree with Francesca Musiani when she warns: “Regulations shouldn’t get stuck on a specific set of technologies, but should be able to adapt to technical progress as it evolves over time.”



About Paul Moore Olmstead
Director of Strategic Business Development for Global Media, Atos and member of the Scientific Community
Paul Moore Olmstead has been working on innovation in the media market for over 15 years. He is based in London, UK, has dual Canadian/Spanish citizenship, and holds degrees in Economics from the University of Toronto and in Computer Business Systems from Ryerson University. Previously he spent many years on the BBC account at Atos, where he was responsible for Innovation and Sustainability, and before that was head of Media in Atos Research & Innovation. With over 25 years’ experience in IT, Paul has worked in a wide variety of areas, including public procurement, accounting, mobility, Smart Cities, analytics and media, covering fields such as video streaming, 3D, digital preservation, social media, video analytics and recommender systems. He has been collaborating as an external expert for the European Commission for over 10 years and has been a member of the Atos Scientific Community since 2011, where he leads research in the Media area. Paul is also responsible for the Media Industry in the Atos Expert Community.
