Artificial Intelligence (AI) and the GDPR - Part one

January 07, 2019

by Emily Sheen Manager, Data Protection Strategy, Legal and Compliance Services, PwC United Kingdom

Email +44 (0)7561 788941

by Ningxin Xie Senior Associate (Lawyer) - Data Protection Strategy, Legal and Compliance Services, PwC United Kingdom

Email +44 (0)7421 828154

‘Computer says no’!

Over a decade ago, the ‘Little Britain’ comedy sketch show highlighted the deep frustration individuals feel when faced with the inexplicable decision-making of a computer. A young child is denied an operation, a bank customer a premium account service, all with no explanation other than… ‘the computer says no’!

Although written for comic effect, the ‘computer says no’ syndrome presents a real risk for organisations developing Artificial Intelligence (AI) technologies to carry out activities such as profiling and automated decision-making. Organisations that bring their customers or service users along with them as they develop new technology will maintain trust and engagement, and ultimately build loyalty to their brand or service. Avoiding the scrutiny of the regulator is another very good reason to put the rights of the individual at the heart of any new development. In this blog we explore some key data protection issues that we believe organisations should consider when developing their AI technologies.

How is AI being used by organisations today?

Although there is no widely accepted, formal definition of AI (according to the recent Government report on AI in the UK), the following examples illustrate some of the ways in which AI technologies are being utilised as part of modern business practice.

  • Companies use algorithms and machine learning to increase the relevance of advertisements sent to individuals;
  • The insurance industry uses big data to make the application process easier, and price-optimisation algorithms to distribute risk more effectively;
  • AI is used in the employment sector to process CVs and select candidates;
  • Companies are helping governments build smart cities using AI; and
  • Local authorities are using big data to help identify potential victims of child abuse.

Further to these developments, the UK Government recently published an Industrial Strategy White Paper that aims to put the UK at the forefront of AI and the data revolution. Requirements such as transparency, intelligibility of decision-making and the need to prevent bias are identified as key areas for consideration when developing AI technologies.

AI and the GDPR: automated decision-making

The ICO outlines how algorithms can be used as a tool for automated decision-making, including profiling, to discover individual preferences, predict behaviours and make decisions that may affect individuals’ rights and interests. The General Data Protection Regulation (GDPR) puts control over how personal data is used firmly back in the hands of the individual.

Article 22 of the GDPR gives individuals the right not to be subject to a decision based solely on automated processing (without human intervention) that produces legal effects concerning them or similarly significantly affects them. There are some exemptions to this right: where the processing is necessary to enter into or perform a contract, where it is authorised by law, or where the data subject has given explicit consent.

However, even when relying on an exemption, organisations must still protect the rights, freedoms and interests of individuals, and be able to demonstrate how they do so. At the very least, they must provide the right to human intervention on request and ensure that individuals are not disadvantaged in the process.
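To make this concrete, here is a minimal sketch in Python of how these safeguards might be wired into a decision pipeline. It is an illustration only, not a description of any particular organisation's system: the scoring model, the 0.5 threshold and all names are hypothetical, and the Article 22(2) bases are simplified.

```python
from dataclasses import dataclass
from enum import Enum, auto

class LawfulBasis(Enum):
    """Simplified Article 22(2) bases permitting solely automated decisions."""
    CONTRACT_NECESSITY = auto()   # necessary to enter into or perform a contract
    AUTHORISED_BY_LAW = auto()    # authorised by Union or Member State law
    EXPLICIT_CONSENT = auto()     # explicit consent of the data subject

class Outcome(Enum):
    APPROVED = auto()
    DECLINED = auto()
    PENDING_HUMAN_REVIEW = auto()

@dataclass
class Decision:
    outcome: Outcome
    explanation: str  # "meaningful information about the logic involved"

def decide(application, model, basis, human_review_requested: bool) -> Decision:
    # Without an Article 22(2) basis, no solely automated decision with a
    # legal or similarly significant effect should be made at all.
    if basis is None:
        raise PermissionError("No Article 22(2) basis for a solely "
                              "automated decision.")
    # Even where an exemption applies, a request for human intervention
    # takes the case out of the purely automated path.
    if human_review_requested:
        return Decision(Outcome.PENDING_HUMAN_REVIEW,
                        "Referred to a human reviewer at the data "
                        "subject's request.")
    score = model.predict(application)  # hypothetical scoring model
    outcome = Outcome.APPROVED if score >= 0.5 else Outcome.DECLINED
    return Decision(outcome,
                    f"Automated decision: model score {score:.2f} against "
                    "a 0.5 approval threshold.")
```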

To ensure that any processing of personal data is lawful, fair and transparent, individuals should be provided with specific, clear and meaningful information about how automated decisions are being made about them. Organisations therefore need to communicate the following (a sketch of how this information might be recorded follows the list):

  • “(M)eaningful information about the logic involved” in relation to any automated decision-making (GDPR Articles 13 & 14, Recital 71);
  • The “envisaged consequences of such processing for the data subject” (GDPR Articles 13 & 14);
  • “(S)pecific information” about how decisions are made (GDPR Recital 71);
  • How individuals can exercise their “right to obtain human intervention” (unless a clear exception applies) (GDPR Recital 71); and
  • How individuals can express their point of view, obtain “an explanation of the decision reached” and “challenge that decision” (GDPR Recital 71).
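By way of illustration, the sketch below shows how these notices could be captured as a structured record attached to each automated decision. The field names and example values are entirely hypothetical and are not drawn from the GDPR text or any PwC tool.

```python
from dataclasses import dataclass

@dataclass
class TransparencyNotice:
    """Hypothetical record of the Articles 13/14 and Recital 71 information
    accompanying an automated decision about a data subject."""
    logic_summary: str           # meaningful information about the logic involved
    envisaged_consequences: str  # what the processing may mean for the data subject
    decision_explanation: str    # specific information about how this decision was made
    human_review_contact: str    # how to exercise the right to human intervention
    challenge_procedure: str     # how to express a view and challenge the decision

notice = TransparencyNotice(
    logic_summary="Applications are scored by a credit-risk model using "
                  "income and repayment history.",
    envisaged_consequences="A low score may result in the application "
                           "being declined.",
    decision_explanation="Declined: the model score of 0.31 fell below "
                         "the 0.5 approval threshold.",
    human_review_contact="Email reviews@example.com to request a human review.",
    challenge_procedure="Submit representations via your account portal "
                        "within 30 days.",
)
```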

In order to avoid the ‘computer says no’ effect and to meet their data protection requirements, organisations need to plan the implementation of new AI technologies carefully, with a specific focus on protecting individual rights. Through our ongoing AI and Data Protection blog series, PwC will be analysing compliance and rights issues in further detail and identifying some practical steps that organisations can take to ensure these considerations are built into their processes.

We are on the Journey to Code. If you would like to find out more about how PwC can help with your GDPR compliance when developing new AI technologies then please contact Emily Sheen, Ningxin Xie or Stewart Room. Please stay tuned for the second part of this blog post.
