The politics of AI


Posted on: February 25, 2019 by Will Tanner

The ethics of Artificial Intelligence (AI) is arguably the most interesting debate policymakers are yet to have. It is also one of the most urgent. The pace of algorithmic innovation and scale of deployment mean it is no longer sustainable for businesses to decide ethical dilemmas in isolation. Politicians and regulators must engage.

This is because, after decades of false starts, AI is finally approaching critical mass. The number of global machine learning patents is growing at a compound annual growth rate of 34%, big tech companies spent an estimated $20 billion on AI R&D and acquisitions in 2016, and 95% of business leaders recently told Forbes that they planned to boost spending on AI in the coming year. The next industrial revolution has finally arrived.

With it will come great benefits - this much we know. Autonomous vehicles will not only save thousands of lives but also free up millions of hours for more productive activity. Intelligent homes will cut energy use dramatically, not incrementally, as DeepMind has already shown with Google’s data centres, where machine learning cut cooling energy by around 40%. New drugs will be discovered and invisible diseases treated by spotting patterns in unimaginably large datasets.

But the ethical challenges are equally profound. If an autonomous car crashes into a human-driven vehicle, who is liable - driver or computer? If an automated legal decision leads to wrongful imprisonment, who is at fault? If algorithms systemically amplify existing bias, how is prejudice challenged and diversity encouraged? And if algorithms constantly iterate in order to improve, what failsafes can or should we design in to prevent unintended or harmful consequences?

The reality is that the policy and legal frameworks for these quandaries have not been written, leaving companies, so far, largely to set their own rules. To their credit, many industry leaders recognise the imperative: Elon Musk of Tesla, Demis Hassabis of DeepMind, Jaan Tallinn of Skype, and numerous other big players have all invested time and resources into developing common safety and ethical frameworks for AI. But self-regulation faces natural limits to its legitimacy.

Here, the Government’s new Office for AI and similar initiatives by Barack Obama’s White House are welcome steps towards filling the void. As with the growth of internet protocols and standards, if the UK and US can get these questions right, we have an opportunity to set the rules not just for our own markets, but globally. But this will require developing core principles and dynamic protocols that can at least keep up with, and at best stay one step ahead of, rapidly changing technologies. There is no value in regulation that is outdated before it reaches the statute book.

To my mind, this means focusing on transparency, user choice and soft power, rather than blunt rules that would only strangle industry. Tools like sandboxes - where algorithms are road-tested in a safe environment using trial data - could be used to assure products before they reach the market, as sketched below. For consumer products, users could be asked to set or approve the principles assumed by autonomous systems - such as how prominently different voices appear on social media, or how autonomous vehicles should behave in certain environments - to ensure that humans, not lines of code, remain accountable. Public procurement rules could be used to set core standards for technology used in public services without imposing heavy regulation across the entire economy.
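To make the sandbox idea concrete, here is a minimal sketch in Python of what a pre-market check might look like: a candidate algorithm is road-tested against trial data and must clear basic accuracy and bias thresholds before it is approved. Every name, threshold and metric here is an illustrative assumption rather than any real regulatory framework.

```python
# A minimal sketch of a regulatory "sandbox" check. All names, thresholds
# and the bias metric below are illustrative assumptions, not a real
# regulatory API.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class SandboxReport:
    accuracy: float        # overall accuracy on the trial data
    max_group_gap: float   # largest accuracy gap between demographic groups
    approved: bool         # whether the product clears both thresholds


def run_sandbox(
    predict: Callable[[Sequence[float]], int],
    trial_data: Sequence[tuple[Sequence[float], int, str]],  # (features, label, group)
    min_accuracy: float = 0.9,
    max_gap: float = 0.05,
) -> SandboxReport:
    """Road-test a candidate algorithm on trial data inside the sandbox."""
    totals: dict[str, int] = {}
    correct: dict[str, int] = {}
    for features, label, group in trial_data:
        totals[group] = totals.get(group, 0) + 1
        if predict(features) == label:
            correct[group] = correct.get(group, 0) + 1
    # Per-group accuracy, used as a crude proxy for amplified bias.
    rates = {g: correct.get(g, 0) / n for g, n in totals.items()}
    total = sum(totals.values())
    accuracy = sum(correct.values()) / total if total else 0.0
    gap = max(rates.values()) - min(rates.values()) if rates else 0.0
    return SandboxReport(
        accuracy=accuracy,
        max_group_gap=gap,
        approved=accuracy >= min_accuracy and gap <= max_gap,
    )


# Example: a simple threshold classifier tested on two groups of trial data.
report = run_sandbox(lambda x: int(x[0] > 0.5),
                     [([0.9], 1, "a"), ([0.1], 0, "b")])
print(report)  # SandboxReport(accuracy=1.0, max_group_gap=0.0, approved=True)
```

A real sandbox would of course involve far richer tests, but the design point stands: the assurance criteria are set publicly and checked before deployment, rather than left to each company after the fact.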

Most important, though, is that policymakers start a meaningful public debate about how much decision-making we are prepared to delegate to machines, in which domains, and which decisions should remain in human hands. As with other political debates, this will require weighing public opinion against public benefit: AI approaches to healthcare are unlikely to be popular, but they could save, and extend, many lives. Such a debate is essential if the inevitable growth of AI is to retain public legitimacy and if people are to feel they still have power and control over their own lives.

Digital Vision for AI

This article is part of the Atos Digital Vision for AI opinion paper. We explore the realities of AI and what lies ahead for organisations and society as artificial intelligence rapidly advances as an enterprise solution.

About Will Tanner

Director, Onward
Will Tanner advised Prime Minister Theresa May between 2013 and 2017, as a Special Adviser in the Home Office and as Deputy Head of Policy in 10 Downing Street. He previously worked for the leading communications firm Portland and for the independent think tank Reform.
