From AI Horror Story to Privacy Fairy Story

Between fiction and reality

Why is it that everyone loves a good horror story, whether on TV, in a book or in the media? We may sit and shudder when we hear one, yet it leaves us slightly excited at the same time. We like listening to horror stories because they are usually unpredictable and bring a level of uncertainty to the often-predictable world in which we live.

Is this why, for many years, people have been scared and skeptical about the use of artificial intelligence (AI)? When we used to think of AI, we pictured rogue killer robots in films like The Terminator and Blade Runner, feeding the fear that machines powered by AI could one day turn on humans.

Alternatively, could it be because we have read articles about AI becoming too powerful in the future and creating mass unemployment by replacing the human workforce with robots, another potential horror story in the making?

On the flip side of this festival of fright are the real benefits AI can bring to society. Our global issues are extremely complex, and AI can provide us with enhanced tools to significantly amplify human efforts, creating solutions to deeply worrying problems.

The importance of data privacy

According to Forbes, AI powered by deep-learning algorithms is already in use in healthcare. Specifically, AI’s imaging capabilities are promising for cancer identification and screening, including breast cancer. AI is also used to predict the development of diseases across a healthcare network. However, the implementation of AI must remain within the bounds of compliance and data protection law, and organizations must ensure the technology does not misuse personal data. When planning to use AI technology, the privacy professional’s mantra should be: “Just because you can do that with data doesn’t mean that you should!”

Recently, the UK privacy watchdog, the Information Commissioner’s Office (ICO), ruled that a data-sharing arrangement between the NHS and DeepMind failed to comply with data protection law. The ICO found that London’s Royal Free Hospital, which worked with DeepMind, had not been transparent about how patient data would be used.

Data protection and privacy laws are essential to slaying the killer robots of the future, because they will restrict the proliferation of rogue AI programs that could be detrimental to humans, countries or organizations — while ensuring the technology is put to proper use. Data Protection Impact Assessments (DPIAs) are used to assess the risks associated with using personal data, and play an integral part in developing new systems that will use new technologies.
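To make this concrete, below is a minimal, hypothetical sketch (in Python) of how a project team might pre-screen an AI initiative for DPIA triggers before design work begins. The screening questions loosely mirror common DPIA triggers such as special-category data, large-scale processing and automated decision-making; the class and function names are illustrative only, not an official checklist or Atos tooling.

# Illustrative only: a hypothetical pre-screening helper for deciding whether an
# AI project is likely to need a full Data Protection Impact Assessment (DPIA).
# The questions loosely follow common DPIA triggers; they are not an official checklist.

from dataclasses import dataclass


@dataclass
class DpiaScreening:
    """Answers to typical DPIA trigger questions for a proposed project."""
    uses_personal_data: bool
    uses_special_category_data: bool        # e.g. health or biometric data
    automated_decisions_with_legal_effect: bool
    large_scale_processing: bool
    systematic_monitoring: bool
    novel_technology: bool                  # e.g. new AI/ML techniques


def full_dpia_indicated(screening: DpiaScreening) -> bool:
    """A full DPIA is indicated if personal data is used and any trigger applies."""
    if not screening.uses_personal_data:
        return False
    return any([
        screening.uses_special_category_data,
        screening.automated_decisions_with_legal_effect,
        screening.large_scale_processing,
        screening.systematic_monitoring,
        screening.novel_technology,
    ])


if __name__ == "__main__":
    # Example: an imaging model trained on patient scans across a hospital network.
    project = DpiaScreening(
        uses_personal_data=True,
        uses_special_category_data=True,
        automated_decisions_with_legal_effect=False,
        large_scale_processing=True,
        systematic_monitoring=False,
        novel_technology=True,
    )
    print("Full DPIA indicated:", full_dpia_indicated(project))

In practice, the screening answers would feed a documented DPIA covering the nature, scope, context and purposes of the processing; the point of the sketch is simply that these questions can be asked, and recorded, before a single model is trained.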

The golden goose

The AI industry is expected to generate over $1 trillion in business value this year and almost $4 trillion by 2022. Any significant doubts over its ethical implications could have serious consequences for organizations considering putting AI to work.

Introducing a DPIA at the beginning of any AI project can mitigate risks and address ethical concerns during the design phase — turning a potential AI horror story into a privacy fairy tale where we all live happily ever after!

About the author

Deborah Dillon

Head of Protection and Privacy, Atos Consulting

Deborah Dillon is Lead Auditor, Business & Platform Solution for Atos UK&I. She specialises in information governance, including the application and implementation of data protection processes and procedures across a wide range of organisational areas. She has spent many years working in the field of data privacy in both the private and public sectors. Having learned key skills setting up the privacy and records management function across a large UK bank, Deborah has trained teams of information champions to undertake key privacy work at a local level. Deborah came to Atos Consulting, set up the GDPR practice in IGRC, and has helped clients with GDPR compliance, from running bespoke GDPR readiness assessments to managing large-scale GDPR programmes. She is a BSI-accredited ISO 27001/2 Lead Auditor.
