Regulatory technology insights

02 July 2018


We were recently invited by one of our banking clients to participate in a panel discussion at their annual Fintech innovation week. We shared views on emerging technologies, the impact they’ve had on operations and compliance, and how they’re helping the sector provide valuable services to customers.

The main theme of our discussion was advanced data analytics, artificial intelligence and machine learning, and how regulatory developments and compliance costs are driving the rapid adoption of these technologies. We’ve summarised a few of the main points in part one of this two-part blog.

How can emerging technologies (e.g. machine learning (ML) and artificial intelligence (AI)) impact the implementation of regulation?

Interest in adopting AI remains high. However, careful consideration is needed to assess whether AI or ML is the right solution for a given problem. A more reasonable approach might be to view ML as one tool among several for bringing efficiencies to existing business processes.

Although ML techniques have been tested and proven for some time, there are some fundamental considerations that will impact results:

- Stationarity: most data in financial services (FS) is not stationary, so collecting more data is not always helpful;
- Signal-to-noise ratio: in most domains, noise is treated as a nuisance that can (at least in theory) be filtered out. In FS, however, noise is an integral part of what we need to model, and the signal-to-noise ratio is much lower. As a result, noise filtering is an important component of many financial ML models;
- Interpretability of results: much of the recent success in ML is due to the use of Deep Learning (DL), whose decision process is difficult to interpret and explain. The recent introduction of GDPR requires algorithmic decisions involving personal data to be explainable. For more information on explainable AI, read PwC’s white paper.
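The stationarity point can be made concrete with a short sketch using synthetic data (illustrative only, not from the panel): the window-to-window statistics of stationary white noise stay stable, while those of a random walk — a crude stand-in for many financial price series — drift.

```python
import random

random.seed(42)

# White noise: independent draws, so the mean of any window stays near zero.
noise = [random.gauss(0, 1) for _ in range(2000)]

# Random walk (cumulative sum of the noise): a classic non-stationary series.
walk = []
total = 0.0
for step in noise:
    total += step
    walk.append(total)

def window_means(series, size):
    """Mean of each consecutive window -- a crude stationarity probe."""
    return [sum(series[i:i + size]) / size for i in range(0, len(series), size)]

noise_means = window_means(noise, 500)
walk_means = window_means(walk, 500)

# For the stationary noise, window means stay tightly clustered;
# for the random walk, they wander far apart.
noise_spread = max(noise_means) - min(noise_means)
walk_spread = max(walk_means) - min(walk_means)
print(noise_spread < walk_spread)
```

This is why "just collect more data" fails for non-stationary series: older windows were generated by a different distribution, so adding them can mislead rather than help.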

Who takes on the risk when creating ML models? Compliance, IT, data scientists or the business functions?

Even though the FCA and other global regulators have promoted the use of innovative solutions, including AI and ML, the risk of deploying them ultimately lies with the business.

There are many validation steps data scientists should take to reduce risk: for example, checking for bias using precision-recall metrics, measuring model performance across different sub-categories rather than focusing on a single headline number, and monitoring for model drift in production.
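A minimal sketch of the sub-category check, using made-up toy predictions (the segments and figures are hypothetical, purely to illustrate the mechanic): a model can look acceptable overall while performing poorly for one segment.

```python
# Hypothetical binary-model predictions tagged with a made-up segment label.
records = [
    # (segment, actual, predicted)
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
]

def precision_recall(rows):
    """Precision and recall for the positive class over a set of rows."""
    tp = sum(1 for _, a, p in rows if a == 1 and p == 1)
    fp = sum(1 for _, a, p in rows if a == 0 and p == 1)
    fn = sum(1 for _, a, p in rows if a == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

overall = precision_recall(records)                       # (0.8, 0.667)
by_segment = {
    seg: precision_recall([r for r in records if r[0] == seg])
    for seg in ("A", "B")
}
# Segment A recall is 1.0, but segment B recall is only 0.33 --
# a gap the single overall number completely hides.
print(overall, by_segment)
```

The same per-segment comparison can be re-run on live predictions over time, which is one simple way of watching for model drift in production.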

However, there are risks inherent in the ML process itself which should be managed. These risks stem largely from the low interpretability of modern ML techniques and include:

- Bias: training ML models might introduce unintentional hidden biases, mainly emanating from the data provided to train the system;
- Lack of verifiability: traditional software systems are built on explicit logic and can therefore be rigorously tested against expected behaviour. ML models are based on statistical rather than literal truths, so it can be difficult to prove that a given system works in all cases;
- Difficulty of debugging: diagnosing and correcting an ML solution can be hard, mainly because its underlying reasoning may not be fully understood.

What is ML bias and can it be avoided?

The reality of using ML models is that the environment and the data collected are constantly changing. Algorithmic biases can creep into an ML model, mainly through the data it is trained on. These biases come in various types, including interaction bias, latent bias and selection bias, and can be tricky to isolate. Unintentional demographic bias in customer modelling is one example: algorithms may use a combination of demographic characteristics to determine product suitability, and the results could be discriminatory. Proper care must therefore be taken when designing ML algorithms to prevent such biases, considering the following:

- Adequate time and resources to prepare relevant data (of the right accuracy and completeness)
- Adoption of basic and advanced algorithms
- Iterative process design
- Evaluating models across data and categories
- Scalability
- Ensemble modelling
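The ensemble modelling point above can be sketched in a few lines. This is a deliberately minimal majority-vote example over three hypothetical threshold classifiers (the thresholds are invented for illustration, not fitted to any real data); in practice the constituent models would be diverse, independently trained models whose individual biases partially cancel.

```python
# Three hypothetical weak classifiers: simple threshold rules on a
# single feature. Thresholds are illustrative only.
def clf_low(x):  return 1 if x > 0.3 else 0
def clf_mid(x):  return 1 if x > 0.5 else 0
def clf_high(x): return 1 if x > 0.7 else 0

CLASSIFIERS = [clf_low, clf_mid, clf_high]

def ensemble_predict(x):
    """Majority vote: return the class chosen by at least two of three."""
    votes = sum(clf(x) for clf in CLASSIFIERS)
    return 1 if votes >= 2 else 0

# Votes for 0.2 -> 0, 0.4 -> 1, 0.6 -> 2, 0.9 -> 3, so the
# ensemble outputs [0, 0, 1, 1].
print([ensemble_predict(x) for x in (0.2, 0.4, 0.6, 0.9)])
```

With genuinely diverse models, a vote (or an average of predicted probabilities) reduces the variance of any single model and softens the impact of one model's hidden bias, which is why ensembling appears on bias-mitigation checklists like the one above.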

In part two of our regulatory technology insights we will discuss the view from the regulators and the central bank, their progress on adopting emerging technology, and what the next 12 months might hold in terms of implementing new techniques and the results we may expect.

Leigh Bates | Partner, FS Data & Analytics

Aysegul Kazdal | Data & Analytics Manager