From AI Horror Story to Privacy Fairy Story
Between fiction and reality
Why is it that everyone loves a good horror story, whether on TV, in a book or in the news? We may sit and shudder when we hear one, but it also leaves us slightly excited. We enjoy horror stories because they are unpredictable, bringing a welcome jolt of uncertainty to the often-predictable world in which we live.
Is this why, for many years, people have been scared and skeptical about the use of artificial intelligence (AI)? When we used to think of AI, we pictured the rogue killer robots of films like The Terminator and Blade Runner, and feared that AI-powered machines could one day turn on humans.
Alternatively, could it be because we have read articles warning that AI will become too powerful in the future, creating mass unemployment by replacing the human workforce with robots — another potential horror story in the making?
On the flip side of this festival of fright are the real benefits AI can bring to society. Our global issues are extremely complex, and AI can provide enhanced tools that significantly amplify human efforts, helping to create solutions to deeply worrying problems.
The importance of data privacy
According to Forbes, AI powered by deep-learning algorithms is already in use in healthcare. Specifically, AI’s imaging capabilities are promising for cancer identification and screening, including breast cancer. AI is also used to predict the development of diseases across a healthcare network. However, the implementation of AI must remain within the bounds of compliance and data protection law, and organizations must ensure that technology doesn’t interfere with personal data. When planning on using AI technology, the privacy professional’s mantra should be: “Just because you can do that with data doesn’t mean that you should!”
Recently, the UK privacy watchdog, the Information Commissioner’s Office (ICO), ruled that the contract between the NHS and DeepMind failed to comply with data protection law. The ICO said that London’s Royal Free Hospital, which worked with DeepMind, was not transparent about how patient data would be used.
Data protection and privacy laws are essential to slaying the killer robots of the future, because they will restrict the proliferation of rogue AI programs that could be detrimental to humans, countries or organizations — while ensuring the technology is put to proper use. Data Protection Impact Assessments (DPIAs) are used to assess the risks associated with using personal data, and play an integral part in developing new systems that will use new technologies.
The golden goose
The AI industry is expected to generate over $1 trillion in business value this year and almost $4 trillion by 2022. Any serious doubts over its ethical implications could therefore have significant consequences for organizations considering putting AI to work.
Introducing a DPIA at the beginning of any AI project can mitigate risks and address ethical concerns during the design phase — turning a potential AI horror story into a privacy fairy tale where we all live happily ever after!
About the author

Deborah Dillon
Head of Protection and Privacy, Atos Consulting
Deborah Dillon is Lead Auditor, Business & Platform Solutions, for Atos UK&I. She specialises in information governance, including the application and implementation of data protection processes and procedures across a wide range of organisational areas. She has spent many years working in the field of data privacy in both the private and public sectors. Having learned key skills in setting up the privacy and records management function across a large UK bank, Deborah has trained teams of information champions to undertake key privacy work at a local level. She joined Atos Consulting to set up the GDPR practice in IGRC and has helped clients with GDPR compliance, from running bespoke GDPR readiness assessments to managing large-scale GDPR programmes. She is a BSI-accredited ISO 27001/2 Lead Auditor.