With great computing power comes great accountability
25 June 2019
Scroll through a newsfeed nowadays and it’s difficult to avoid the latest take on innovative technologies such as Artificial Intelligence (AI), machine learning and advanced data analytics. These technological developments are beginning to disrupt the way in which financial services firms operate. A recent report by PwC shows that, while firms are at varying degrees of maturity in adoption, many are now embracing these technologies to transform activities such as risk management, fraud detection and post-trade processes. While these innovations are likely to be adopted by many within financial services, debate is growing around the disruptive power of new technologies and who is ultimately accountable for ensuring they are used responsibly.
It is therefore unsurprising that the UK’s regulators are starting to consider their response to these trends. Bank of England (BoE) Governor Mark Carney has observed that the banking industry is already the second largest spender on AI systems globally, and in his Mansion House speech noted that its growing deployment is likely to change the way in which UK regulators conduct supervisory activity. As has been the case for other emerging priority issues such as climate change and operational resilience, the PRA and FCA are likely to place governance and accountability at the heart of their approach to AI.
This message was reinforced in a speech earlier this month by PRA Executive Director James Proudman, who reflected that the blurring of decision-making processes between humans and AI could make it difficult to identify the root causes of problems, and by extension to trace accountability to individuals. This will require firms to review how they allocate individual responsibilities, including under the Senior Managers and Certification Regime (SM&CR). The FCA has also called for accountable individuals to ensure they are able to explain and justify the use of AI systems; the topic of AI explainability is something we focus on here. It is likely that many firms will have work to do to meet regulatory expectations in this area, as data suggests few firms are currently implementing governance changes to support AI implementation.
As AI becomes more embedded in business processes, it will become increasingly important for firms to evidence that they have taken reasonable steps and acted with due skill, care and diligence. In a recent PwC report on operational resilience (produced with TheCityUK), we found that while the financial services sector has always embraced innovation, the current speed of innovation is not matched by the depth of understanding at management level. Governance will need to evolve alongside technology, rather than being retrofitted to it. The individual(s) accountable for AI will vary from firm to firm: it may be a Chief Technology Officer, a Chief Data Officer or a new role altogether. As they have with operational resilience and climate change, the PRA and FCA may introduce a new prescribed responsibility into the SM&CR to reflect the growing importance of smart technologies. The increasing importance of tools such as AI also means firms’ boards will need to show that they have carefully considered how these developments affect their organisation, counterparties and customers, and in turn who will be accountable for any new responsibilities this creates. Fully scrutinising management on the use of technology such as AI will require a robust understanding of that technology at board level - something which may prove challenging.
Accountability for, and governance of, decisions taken by machines is just one part of an increasing focus on technology by regulators in the UK and globally. This focus includes the role of BigTech in financial services, the use of Big Data, and the impact of cloud computing and technologies such as Distributed Ledger Technology. In the UK, the BoE is due to report soon on a review of the future of the financial system (including the impact of technology). The first intergovernmental standard on AI was also recently adopted by the Organisation for Economic Co-operation and Development, placing responsible stewardship of AI at the heart of its mission.
Developments in AI are happening at breakneck pace, and for financial services firms and regulators alike, keeping up will prove to be a challenge. Prior to the financial crisis, boards and senior executives failed to get a handle on the management of complex risks, but ignorance can no longer be an excuse. While innovation in AI offers the potential for huge rewards, firms need to make sure they understand, and adequately manage, the risks. It is also important to remember that accountability is just one element of a much wider technological debate, ranging from cybersecurity and data privacy to process optimisation, transparency and everything in between. What ties these threads together, though, is an expectation that firms will set the bar high and adhere not just to the letter, but the spirit of a principles-based regulatory approach - one that puts accountability at the centre of their AI programme.