Liability and AI: how to overcome the risks in financial services
16 July 2020
Who is to blame when your Artificial Intelligence (AI) system goes wrong? You thought you did everything right when setting it up. You established robust systems and controls, and set out who was accountable for its oversight. But now the regulators are knocking at the door and clients are demanding compensation. Is your firm at fault, as the deployer of the system? Is it the individual overseeing it, or the manufacturer of the system? Perhaps, having evolved over time to make autonomous decisions that nobody could have predicted at the design stage, it could be the system itself.
With no specific legal framework in place for allocating responsibility for new technologies, firms are struggling to navigate regulatory expectations. In May, the European Parliament proposed a risk-based approach to AI liability, which categorises certain systems as ‘high risk’. It also recommended a strict liability regime for the deployers of these systems, under which liability arises even in the absence of fault. Under these proposals, a company using a high-risk system would be held fully liable, irrespective of the level of due diligence it had carried out. While the report does not currently identify any financial services use cases as ‘high risk’, the list is likely to evolve as new systems, and new sectors in which they are deployed, emerge.
In the UK context, James Proudman, Senior Advisor at the Bank of England (BoE), noted last year that firms should be considering how to allocate responsibility for AI at the individual level, including under the Senior Managers and Certification Regime (SM&CR). Whatever approach is taken, there are two key issues that firms should engage with when considering liability in the context of their governance frameworks.
First, firms need to be able to explain how their AI systems work, so that they can trace the decision-making process and identify where and how it has led to poor outcomes. The level of detail required will depend on the use case and the stakeholders concerned. For example, the inability to explain a system used to determine insurance coverage or loan approvals is more likely to trigger regulatory fines, given that such systems operate in a retail environment. In these instances, evidencing that effective controls were put in place around the algorithm will be essential in limiting liability.
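Traceability of this kind ultimately rests on recording what the system saw and decided at the time. The sketch below is purely illustrative - the function names, fields, and in-memory store are assumptions, not any regulator's prescribed format - but it shows the kind of per-decision audit record that makes it possible to reconstruct how a given outcome was reached.

```python
import datetime
import json

def log_decision(record_store, model_version, inputs, decision, top_features):
    """Append an audit record for a single automated decision.

    All names here are illustrative; a real system would write to
    durable, access-controlled storage rather than an in-memory list.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced the outcome
        "inputs": inputs,                 # the features the model saw
        "decision": decision,             # e.g. "approve" / "decline"
        "top_features": top_features,     # main drivers of the decision
    }
    record_store.append(json.dumps(record))
    return record

# Usage: record a hypothetical loan decision so it can be traced later.
audit_log = []
log_decision(
    audit_log,
    model_version="credit-model-1.4",     # hypothetical version label
    inputs={"income": 42000, "loan_amount": 10000},
    decision="approve",
    top_features=["income", "loan_amount"],
)
```

Capturing the model version alongside the inputs matters: when a system evolves over time, it allows a firm to show which version of the model produced a contested decision.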
Firms should consider deploying a ‘human in the loop’ approach to oversight, allowing AI-based decisions to be intervened in and overridden where necessary. Undertaking regular performance reviews, and maintaining accurate records of them, will also be important in evidencing that reasonable steps have been taken to reduce risk - a key obligation for senior managers under the SM&CR.
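One way such a gate can work in practice - and this is a minimal sketch under assumed names and thresholds, not a description of any particular firm's controls - is to route low-confidence or high-impact decisions to a human reviewer whose verdict overrides the model:

```python
# Hypothetical human-in-the-loop gate: automated decisions below a
# confidence threshold are escalated to a human reviewer, whose verdict
# overrides the model. Threshold and names are illustrative assumptions.

def review_decision(model_decision, confidence, human_reviewer, threshold=0.9):
    """Return (final_decision, route), escalating below the threshold."""
    if confidence >= threshold:
        return model_decision, "automated"
    # Below threshold: the human reviewer sees the model's proposal
    # and may confirm or override it.
    final = human_reviewer(model_decision)
    return final, "human-reviewed"

# Usage: a reviewer who overrides a low-confidence decline.
reviewer = lambda proposed: "approve" if proposed == "decline" else proposed

assert review_decision("approve", 0.95, reviewer) == ("approve", "automated")
assert review_decision("decline", 0.60, reviewer) == ("approve", "human-reviewed")
```

Recording which route each decision took (automated versus human-reviewed) feeds directly into the performance reviews described above, and into a senior manager's evidence of reasonable steps.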
Second, firms should ensure that an adequate data security framework is in place in order to meet their obligations under the GDPR - particularly when it comes to processing personal data. The use of AI adds a new, high-risk dimension to liability, given that an AI system is designed to execute decisions autonomously - and therefore potentially without the data subject’s explicit consent. The Information Commissioner’s Office’s (ICO) draft guidance on the AI Auditing Framework reminds firms that they should undertake Data Protection Impact Assessments (DPIAs) before deploying AI systems, to ensure that any processing of personal data complies with applicable privacy law. Carrying out a DPIA can also help firms identify ‘high risk’ use cases, and the potential liabilities associated with them.
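A DPIA is a governance exercise rather than a piece of software, but the “assess before deploying” discipline can be enforced technically. The sketch below is a hypothetical pre-deployment gate - the checklist items are illustrative shorthand, not the ICO’s official wording - that blocks release until the core assessment questions have been answered:

```python
# Hypothetical pre-deployment gate: block release of an AI system until
# the key DPIA questions have been answered. Checklist items are
# illustrative assumptions, not the ICO's prescribed criteria.

DPIA_CHECKLIST = [
    "processing_purposes_documented",
    "lawful_basis_identified",
    "high_risk_use_cases_assessed",
    "mitigations_for_identified_risks",
]

def ready_to_deploy(dpia_answers):
    """Return (ok, missing): ok is True only if every item is answered."""
    missing = [item for item in DPIA_CHECKLIST if not dpia_answers.get(item)]
    return (len(missing) == 0, missing)

# Usage: an incomplete assessment blocks deployment.
ok, missing = ready_to_deploy({
    "processing_purposes_documented": True,
    "lawful_basis_identified": True,
})
assert ok is False
assert "high_risk_use_cases_assessed" in missing
```

Keeping the gate’s output with the deployment record also gives firms contemporaneous evidence that the assessment was completed before the system went live.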
These are just two aspects of liability that firms need to consider when deploying an AI system, and we expect further regulatory guidance to evolve rapidly. In the meantime, firms using these systems are the main point of contact for customers, so it is incumbent on them to demonstrate to regulators that they are using AI responsibly and have adequate processes and procedures to mitigate risk. Those that don’t are in for a bumpy ride - and potentially some hefty fines and compensation claims.