Artificial Intelligence in financial services: An evolving regulatory focus
15 June 2021
Demand for Artificial Intelligence (AI) and Machine Learning (ML) has grown in recent years as the financial services sector undergoes a period of digital transformation. COVID-19 has accelerated this transformation, and half of UK banks see the importance of ML and data science increasing as a result of the pandemic.
UK and global regulators have been focused on the implications of AI adoption for some time. We’ve previously written about regulatory priorities when it comes to AI. The Kalifa Report also identified the need for guidance from the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) on the application of AI across several areas (accountability, governance, explainability and human oversight).
The topics called out in the Kalifa Report are consistent with the themes on which we’ve been supporting our financial services clients in recent years. They also align with key focus areas of the Bank of England and FCA-convened AI Public Private Forum (AIPPF), in particular model risk management, data and governance, with a number of recommendations expected from the AIPPF by the end of this year.
The European Commission has also published a proposal for an overarching regulatory approach towards AI, which will impact financial services and many other sectors. In the US, regulators are also starting to shape their thinking and approach towards AI, with five federal regulators issuing a request for information from firms and the Federal Trade Commission issuing guidance on ‘aiming for truth, fairness, and equity in your company’s use of AI’.
So there’s a lot going on in the regulatory space when it comes to AI. While it is understandable that different jurisdictions are each undertaking their own initiatives, a fragmented regulatory approach across the globe would increase costs and impede the effective implementation of AI. International standard setters clearly have an important role to play in driving consistency, and as such the Basel Committee’s and IOSCO’s focus on AI and ML this year is welcome.
The regulatory and ethical implications of AI adoption
The use of AI brings with it a number of unique regulatory, and indeed ethical, challenges. What is the right level of explainability for decisions supported by AI and ML? Is there a higher bar for decisions which impact consumers? How much understanding should a firm’s board have of the tools being deployed? How can accountability be ensured and bias mitigated? Many firms are grappling with these challenges and looking for guidance from policymakers.
The increased regulatory focus on AI suggests the pressure on firms to get these decisions right will only grow. This means that as firms embark on AI adoption they need to factor these considerations in, even while the regulatory framework is still developing. Firms need the right capabilities to identify, manage and mitigate the unique risks that AI poses, for example in model risk management. The current regulatory focus on AI is likely to result in more guidance, but firms will still need to judge carefully the ethical and reputational implications of using these exciting tools and ensure they deliver the outcomes that regulators expect to see.