Digital Society: How should we govern the online world?


Posted on: July 20, 2017 by Christopher Joynson

In February 2017, executives from Twitter, Google and Facebook came before the Home Affairs Select Committee of MPs and were criticised for their failure to censor content.

I couldn't help but sense the irony here.

Here was a group of MPs serving an electorate of circa 65 million people, lecturing executives of supranational organisations serving just under 3 billion people combined.

The rhetoric from the MPs was strong and critical, but in this digital society it was clear that the balance of power sat on the other side of the room.

A world with no borders

I think of social media as the equivalent of putting everyone, internet users from right across the world, into one great big room and seeing what happens. Everyone starts talking and sharing information, opinions and ideas with each other: some louder than others, some more popular than others.

What happens? Well, we see the best and the worst sides of our society. We see acts of kindness and love, and acts of boredom and procrastination; but we also see people lying and claiming an authority they do not have (fake news), people saying horrible things to others (trolling), and people sharing disturbing content.

In this environment, where different cultures, characters and perspectives come together, there must be some rules as to what is right and what is wrong.

In a room where there are no physical borders between people, and no formal hierarchy, who has the right to make these rules?

The answer, in the current state of social media, is the executives at YouTube, Twitter and Facebook: the people who built the room and who manage it daily. They create the rule book; they decide the censorship policy.

Freedom of expression: an ambiguous concept at the heart of the problem

In June 2016, the European Commission, in conjunction with Facebook, Microsoft, Twitter and YouTube, published a code of conduct on countering illegal hate speech online.

The first introductory paragraph states the following:

“The IT Companies share… a collective responsibility and pride in promoting and facilitating freedom of expression throughout the online world”

What does ‘freedom of expression’ (or speech) actually mean?

The code goes on to reference a ruling made by the European Court of Human Rights in the case of Handyside v. the United Kingdom in 1976.

In it, the Court held that freedom of expression “is applicable not only to information or ideas that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population”.

It is hard to think of a broader definition of what this concept entails, or of one more open to interpretation.

And yet, the second point in the code of conduct states that:

“(2) Upon receipt of a valid removal notification (for content on their platform), the IT Companies are to review such requests against their rules and community guidelines and where necessary national laws transposing the Framework Decision 2008/913/JHA (referencing laws against racism and xenophobia), with dedicated teams reviewing requests.” (Brackets added)

When did the EU, and any of our democratic legislatures for that matter, decide that precedence should be given to the “rules and community guidelines” of private companies in interpreting what does or does not come under the scope of freedom of expression?

The IT companies have been given the autonomy to allow conduct that “offends, shocks or disturbs the State or any sector of the population” on their platforms.

The reference to national laws transposing the Framework Decision helps to ensure that the black-and-white decisions are made properly, where conduct clearly offends against national law; but it is the grey areas on which our legislatures have so far given insufficient guidance.

“No matter where you draw the line there are always going to be some grey areas”

With this in mind, I found the recent leak of the Facebook censorship guidelines fascinating:

  • Videos depicting self-harm are allowed, as long as there exists an “opportunity to help the person”.
  • Videos of child and animal abuse (as long as it is non-sexual) can remain in an effort to raise awareness and possibly help those affected.
  • Violent threats are “most often not credible”, until specific statements make it clear that the threat is “no longer simply an expression of emotion but a transition to a plot or design”.

The response to the leaks from Monika Bickert, Facebook’s head of global policy management, was telling, and hinted at the challenge we face here:

“Facebook is not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it is used".

“We have a really diverse global community and people are going to have very different ideas about what is okay to share. No matter where you draw the line there are always going to be some grey areas”.

These "grey areas" relate to controversial topics, crossing legal, cultural and moral boundaries. Yet the censorship policy that governs them is not held in the manifesto of a democratically elected party, it is designed and implemented by one of the world’s largest international private sector organisations.

Is this right? Is it ethical and democratic? It seems hard to justify, but what is the alternative? Should we be holding elections for the next CEO of Facebook? Maybe we should.

I feel that democracy should prevail here. Censorship policies should be designed, developed and owned by a democratically elected body; we should have a say in how our online lives are governed.

One answer is for a global body, such as the EU, the UN or some alliance of democracies to produce legislation that more closely regulates conduct in the online world, according to shared principles.

Social reporting: a sustainable model for enforcement?

It is not only the rules themselves that raise ethical problems; their enforcement raises logistical problems too.

Social media organisations commonly adopt a ‘social reporting’ approach to content censorship: they invite their users to report specific content that violates the community rules and guidelines, so that moderators can assess the circumstances objectively and take the content down where necessary.

Facebook are increasing the size of their moderation force from 4,500 to 7,000, managing content for 1.59 billion people. With the larger team size, that is 227,142 Facebook users per moderator. To put that into context, there are 207,140 police workers in the UK, which equates to 314 UK citizens per police worker.
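
As a rough back-of-the-envelope check on those ratios, here is a minimal sketch; it assumes the circa 65 million UK population mentioned earlier in the post, and the user, moderator and police figures quoted above:

```python
# Back-of-envelope check of the moderation ratios quoted above.
# Assumes the circa 65 million UK population figure used earlier in this post.
facebook_users = 1_590_000_000    # 1.59 billion people whose content is moderated
facebook_moderators = 7_000       # the enlarged moderation force

uk_population = 65_000_000        # circa 65 million
uk_police_workers = 207_140

print(f"Facebook users per moderator: {facebook_users / facebook_moderators:,.0f}")   # roughly 227,000
print(f"UK citizens per police worker: {uk_population / uk_police_workers:,.0f}")     # roughly 314
```

Whichever way you round the figures, a single moderator is expected to cover several hundred times more people than a single police worker.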

Even with support from the Facebook community, this imbalance of moderators to users surely cannot be considered sustainable.

There is also a more fundamental flaw in the concept of social reporting: a post can only be found to violate the terms of use once it has offended someone and been reported. There is no pre-emptive approach to censoring content.

Indeed, what about the malicious content that never gets reported – that reaches the screens of likeminded people and succeeds in cultivating further hatred and animosity?

The solution? It may seem like a scary concept, but an artificial intelligence that can regulate content according to a predetermined rule book may be the way forward. Such a solution could (a simplified sketch follows the list below):

  • Recognise blacklisted words, images and sentiment before content is shared with the public;
  • Validate reports of content by assessing the reporting user’s behaviour, both immediate and historical, on that social media platform (and even on other websites and forms of social media), building a richer picture of the user’s online footprint;
  • Learn from experience, building an ever more mature and accurate view of what does and does not amount to a violation of the content guidelines.
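
To make the idea concrete, here is a deliberately simplified sketch of the first two capabilities. The blacklist, thresholds and data structures are hypothetical, invented for illustration; they are not any real platform’s rule book, guidelines or API.

```python
# A deliberately simplified, hypothetical sketch of a rule-book-driven content filter.
# The blacklist, thresholds and data structures are illustrative only; they are not
# any real platform's policy, guidelines or API.
from dataclasses import dataclass

BLACKLISTED_TERMS = {"blacklisted-term-1", "blacklisted-term-2"}  # placeholder rule book


@dataclass
class ReporterHistory:
    reports_made: int = 0     # how many reports this user has filed before
    reports_upheld: int = 0   # how many of those reports moderators upheld

    @property
    def credibility(self) -> float:
        """Crude credibility score: the share of past reports that were upheld."""
        if self.reports_made == 0:
            return 0.5        # no history, so treat the reporter as neutral
        return self.reports_upheld / self.reports_made


def screen_before_publishing(text: str) -> bool:
    """Pre-emptive check: refuse to publish a post containing a blacklisted term."""
    words = {word.strip(".,!?;:").lower() for word in text.split()}
    return words.isdisjoint(BLACKLISTED_TERMS)


def triage_report(history: ReporterHistory) -> str:
    """Weigh a report against the reporter's track record, not just at face value."""
    if history.credibility >= 0.7:
        return "escalate to a human moderator"
    if history.credibility <= 0.2:
        return "deprioritise: low-credibility reporter"
    return "queue for routine review"


if __name__ == "__main__":
    print(screen_before_publishing("A perfectly ordinary holiday photo"))     # True: publish
    print(screen_before_publishing("this post contains blacklisted-term-1"))  # False: block

    reliable_reporter = ReporterHistory(reports_made=20, reports_upheld=17)
    print(triage_report(reliable_reporter))  # escalate to a human moderator
```

The third capability, learning from experience, would sit on top of something like this: moderator decisions could feed back into the blacklist and the credibility thresholds so that the rule book matures over time.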

It would enable enforcement that is objective, fair and consistent at scale.

In Conclusion

How we govern the online world has become a political and public concern.

Going forward we have a decision to make: do we encourage freedom of expression and open the internet up to diversity of opinion, or do we adopt a protectionist approach, sheltering people from views and expressions that they disagree with or find insulting?

Whichever route we take, or however we plot a course through the middle, I feel that society should have some say in the matter.

If we take democratic control of our online governance policies and adopt a comprehensive approach to their administration, then we will go some way toward a sustainable model for digital governance of the online world.



About Christopher Joynson

Digital Transformation Consultant and member of the Scientific Community
Christopher is a Digital Transformation Consultant with experience of strategic projects across financial services and central government. As a member of the Scientific Community, Christopher brings his background in law and a fresh innovative mindset to offer holistic perspectives on the implications of technological change for our society. His main area of interest is the digital divide, and how we can ensure that sections of our society are not left behind by the digital world.
