Artificial Intelligence (AI) and the GDPR – Part Two
January 24, 2019
In Part One of this series we looked at some of the key considerations for organisations developing AI technologies to carry out automated decision-making that meets the requirements of the GDPR and maintains individual trust in a product or service. In Part Two we discuss some of the main requirements in practice and look at the key trends being discussed around the use of AI technologies.
‘Fairness’ and ‘transparency’
The requirement to process personal data in a way that is ‘fair’ and ‘transparent’ is one of the first principles of the GDPR. The complexity of algorithms, the use of big data analytics and the development of machine learning and deep learning can pose significant challenges for organisations in meeting their transparency obligations.
As indicated in a 2017 PwC survey, 76% of respondents expressed concern about the potential for bias and a lack of transparency in the deployment of AI. As well as promoting the benefits of their new technologies, organisations need to be open about their use of personal data, including who will have access to it and how it will be used to make decisions or to influence people. Imagine the impact when a job applicant is denied an opportunity and requests information about the decision-making process, the employer is unable to explain that process, and the applicant then discovers that the algorithm had taught itself to prefer male candidates over female ones. It is not only reputation that is at stake: organisations could also face legal challenges and regulatory fines over their processes.
AI and privacy by design
Article 25 of the GDPR requires all data controllers to build appropriate safeguards for the protection of personal data into their design processes. Designing data protection, privacy and rights considerations into all projects and AI development programmes, and defining risks and mitigating actions, is therefore a key discipline for organisations to follow. By assessing the current and potential future legal, human and organisational impact of any new AI technology, organisations can avoid investing in technology that is later found to be non-compliant with the law or that attracts complaints and criticism from individuals, regulators and the courts.
Some practical solutions
Listed below are some practical steps that organisations can take to support their regulatory and legal compliance and to maintain customer trust and loyalty by considering the rights of individuals throughout their design and development processes:
- Undertake a detailed Data Protection Impact Assessment at the very start of any project to ensure that all technical and organisational risks and mitigating actions are identified and addressed where possible. Maintain this assessment throughout the lifecycle of the project;
- Consult with individuals who may be affected by any new developments. How do they feel about your proposed use of technology? Do they raise any specific concerns or issues? Build these into your risk identification and mitigation processes;
- Consider your human rights commitments as outlined in any organisational code of ethics/company vision or strategy and how these could be impacted by any new AI technology development. The UK Government has outlined its expectations for business and human rights practice in its ‘Good Business: Implementing the UN Guiding Principles on Business and Human Rights’;
- Ensure you are able to identify and correct inaccuracies in the data being processed, and minimise potential risk by building privacy considerations into your design processes;
- Ensure you have the right level of security and technical safeguards in place to protect personal data against cyber-attacks, data loss, staff misuse and any other security risks you have identified through a risk assessment of your security requirements;
- Identify and correct any bias that may develop through the way in which an algorithm or machine collects and processes data;
- Design policies and tools to improve transparency and protect data subjects’ rights; and
- Communicate clearly with individuals about your use of new AI technology: promote the benefits, but also be honest about how the technology will use their personal data and how decisions will be made about them. Consider developing an AI strategy to help individuals understand how your organisation is deploying new technologies.
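One of the steps above, identifying bias in how an algorithm processes data, can be made concrete with a simple outcome audit. The sketch below is illustrative only (the function names, sample data and 0.8 threshold are assumptions, not a method prescribed by the GDPR): it compares selection rates for automated decisions across groups, a common first check for disparate outcomes such as the hiring example discussed earlier.

```python
# Hypothetical sketch of an outcome audit for automated decisions.
# Group labels, data and thresholds are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, accepted) pairs, accepted is bool.
    Returns {group: fraction of that group accepted}."""
    totals = defaultdict(int)
    accepted = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            accepted[group] += 1
    return {g: accepted[g] / totals[g] for g in totals}

def disparity_ratio(decisions):
    """Ratio of the lowest to the highest selection rate across groups.
    Values well below 1.0 flag a potential bias to investigate; the
    'four-fifths rule' of thumb uses 0.8 as a rough trigger."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Example: group A accepted 40/100 times, group B only 20/100 times.
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 20 + [("B", False)] * 80)
print(round(disparity_ratio(decisions), 2))  # 0.2 / 0.4 -> 0.5, investigate
```

A check like this only surfaces a symptom; correcting bias would still require examining the training data and model, as the bullet above suggests.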
Organisations should keep a watching brief on developments in this fast-moving area. Some current developments and discussions are outlined below:
- Elizabeth Denham, the Commissioner, is supportive of the recommendation that the Information Commissioner works with the Alan Turing Institute to standardise processes such as algorithmic transparency;
- Giovanni Buttarelli has highlighted the need for self-regulation and the establishment of ethical parameters in the AI era;
- Data ethics in AI could help improve efficiency in mitigating risk, as ethics sets the highest standards for transparency and accountability (Minister’s foreword to the UK Government’s Data Ethics Framework);
- New legislative initiatives may fill the gap around the use of non-personal data in AI technologies;
- Developing codes of conduct may be helpful in demonstrating compliance (e.g. code of conduct on AI and data-driven technology in healthcare); and
- The House of Lords Select Committee on AI has produced a report that positions the UK as a potential leader in developing AI technologies and outlines principles for using AI ethically, for the public good and in a way that upholds the data privacy rights of individuals.
We are on the Journey to Code. If you would like to find out more about how PwC can help with your GDPR compliance when developing new AI technologies then please contact Emily Sheen, Ningxin Xie or Stewart Room.