A journey towards responsible AI in financial services

The topic of ethical and responsible Artificial Intelligence (AI) in financial services has risen in prominence over recent years, partly due to media attention regarding bias and discrimination inherent in AI models, but also because it is a topic gaining increasing focus from regulators and national governments. I recently had the pleasure and privilege of participating in an expert panel debate at the AFME (Association for Financial Markets in Europe) Conference in Paris. It was a truly stimulating debate, covering a number of key themes and challenges around AI in financial services.

Explainable and Interpretable AI

Explainability was a central topic for the panel. It concerns how AI systems reach their outcomes, and how the rationale for those outcomes can be explained. Understanding how and why a decision is made is often as critical as the accuracy of the result, and it is something regulators increasingly expect firms in the industry to focus on. The FCA has signalled that firms should aim for ‘sufficient interpretability’ of decisions made by AI. This means firms must balance the effective use of AI with the need to articulate the main drivers of the decision-making process, particularly where the outcome affects consumers or financial stability.
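
To make the idea of articulating the ‘main drivers’ of a decision concrete, here is a minimal sketch of decision attribution for a simple linear scoring model. All feature names, weights and the decision threshold are hypothetical illustrations, not any real scoring model; real AI systems would typically use more sophisticated interpretability techniques, but the principle - decomposing an individual decision into ranked feature contributions - is the same.

```python
# Hypothetical linear credit-scoring model: the score is a bias plus a
# weighted sum of (normalised) applicant features. Weights are invented.
WEIGHTS = {
    "income": 0.4,
    "debt_ratio": -0.6,
    "missed_payments": -0.8,
    "years_employed": 0.2,
}
BIAS = 0.5
THRESHOLD = 0.0

def score(applicant: dict) -> float:
    """Overall decision score for one applicant."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> list:
    """Per-feature contributions to the score, largest magnitude first --
    the 'main drivers' of this particular decision."""
    contribs = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contribs, key=lambda kv: abs(kv[1]), reverse=True)

# One illustrative applicant (values assumed already normalised).
applicant = {"income": 1.2, "debt_ratio": 0.9,
             "missed_payments": 2.0, "years_employed": 0.5}
decision = "approve" if score(applicant) > THRESHOLD else "decline"
drivers = explain(applicant)
```

For this applicant the model declines, and the ranked contributions show that the history of missed payments dominates the outcome - exactly the kind of decision-level rationale a firm would need to be able to articulate.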


Accountability

The panel also acknowledged the need for AI accountability: identifying who can be held responsible for a decision, action or strategy determined by an AI model. The PRA has stated that, as firms continue to adopt AI, they will need to review how individual responsibilities for AI are allocated, including under the Senior Managers and Certification Regime (SM&CR). The FCA has also called for accountable individuals under the SM&CR to ensure they are able to explain and justify the use of AI systems. In practice, this is likely to mean that boards as a collective, as well as individuals in scope of the SM&CR, will have to show they understand their firm’s use of AI. This includes being able to evidence the ‘explainability’ of their firm’s use of AI and understanding its involvement within their own business units, where applicable.

Data quality

Data quality and completeness are crucial for “responsible” and “explainable” AI - without good data, conversations around model interpretability, bias and controls become moot. A key question could easily be: “was this mis-prediction caused by bias or algorithmic flaws, or by poor data?”. With an ever-increasing reliance on data, the need for adequate protection and security has never been greater. For AI systems in general, and for common applications such as machine learning (ML) models in particular, large, high-quality datasets for training and operation are vital to success. In the UK, the Information Commissioner’s Office (ICO) considers AI a priority area and has dedicated substantial resources to it, in the form of research and expert working groups.
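
As a minimal sketch of what “good data” gating can look like in practice, the snippet below runs basic completeness and validity checks over a training set before it reaches a model. The field names, the plausible-age range and the example records are all hypothetical; a production pipeline would use a dedicated data-validation framework, but the categories of check are representative.

```python
# Hypothetical data-quality gate: count completeness and validity issues
# in a batch of training records before they are used to fit a model.
def quality_report(rows: list) -> dict:
    """Return counts of missing values, out-of-range fields and duplicates."""
    issues = {"missing_values": 0, "out_of_range": 0, "duplicates": 0}
    seen = set()
    for row in rows:
        # Completeness: any field left empty makes the record suspect.
        if any(v is None for v in row.values()):
            issues["missing_values"] += 1
        # Validity: an assumed plausible range for a customer's age.
        age = row.get("age")
        if age is not None and not (18 <= age <= 120):
            issues["out_of_range"] += 1
        # Uniqueness: exact repeats can silently skew training.
        key = tuple(sorted(row.items()))
        if key in seen:
            issues["duplicates"] += 1
        seen.add(key)
    return issues

rows = [
    {"customer_id": 1, "age": 34, "income": 52_000},
    {"customer_id": 2, "age": None, "income": 48_000},  # incomplete record
    {"customer_id": 3, "age": 150, "income": 61_000},   # implausible age
    {"customer_id": 1, "age": 34, "income": 52_000},    # exact duplicate
]
report = quality_report(rows)
```

A report like this, produced and logged before every training run, gives a concrete answer to the “bad model or bad data?” question and forms part of the evidence trail regulators increasingly expect.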

Takeaway

Organisations need to be equipped to address these key themes associated with AI in financial services while demonstrating ongoing governance and regulatory compliance. At PwC we believe a holistic and integrated approach is key to successfully designing, building, embedding and scaling AI and ML into your business in an ethical, accountable and responsible manner. Our Responsible AI Toolkit addresses five dimensions of AI applications: AI solutions must be ethically sound and comply with regulations in all respects; they should be underpinned by a robust, holistic foundation of end-to-end governance that addresses the accountability of all stakeholders involved; AI models need to address issues of bias and fairness; AI systems should be interpretable and easily explainable, both by those who operate them and to those who are affected by them; and, lastly, robust performance and safe use of AI must be ensured.

To turn the conversation on explainability, accountability and data quality into action, our Responsible AI Diagnostic tool can answer questions on AI implementation and readiness. It takes into account the possible ethical implications of the use of AI in your organisation, the measures in place to fully evaluate the risks associated with AI, and your organisation’s ability to deploy AI at scale in a robust and secure manner. The Responsible AI Diagnostic provides a score for your organisation’s performance relative to your industry, together with a set of recommended actions.

For additional content on AI accountability and AI explainability please see our other two blogs: With great computing power comes great accountability and Trust me, I’m a robot - Explainable AI in financial services. We will shortly be publishing a report focusing on regulatory expectations for financial services firms utilising AI - so watch this space. 

Leigh Bates | UK FS Data Analytics Leader, PwC United Kingdom
Profile | Email | +44 (0)7711 562 381

Maria Axente | Artificial Intelligence Programme Driver and AI for Good Lead, PwC United Kingdom
Profile | Email | +44 (0)7711 562365
