
09 May 2018

Transaction monitoring: Why segments and thresholds are never enough

by Scott Samme, Partner, and Jeremy Davey, Financial Crime Analytics Director

In our last blog we talked about the difficulties banks face with transaction monitoring (TM) for anti-money laundering (AML) purposes. Rapid developments in data analytics and machine learning will make TM far more efficient in the very near future – but banks need to get their existing systems in order first.

TM is increasingly making use of machine learning and robotic process automation. This has allowed banks to reduce the number of false-positive alerts, better manage risk and run a more efficient investigative process. But this isn’t entirely about technology; the benefits won’t be fully felt – and regulators won’t be satisfied that risks are being managed well – unless the underlying system is solid. This was the thinking behind the New York Department of Financial Services’ Part 504 transaction monitoring regulation that came into effect in 2017.

There are two significant areas where a more robust approach to TM pays dividends. The first is the quality of data used. Many existing TM systems were put in place more than a decade ago – and were set up to collect data for the products that were offered at that point. Banks have added many new products over the years, but too often the systems haven’t been updated or re-designed to capture the data associated with them.

The second is the rules-based approach that most banks currently use for TM. Understanding the risk attached to each transaction – at a time when money laundering has become more organised, with ever more elaborate schemes – is a multidimensional issue, requiring nuance, recognition of the parties involved, and multiple layers of data. Too often these are missing from legacy monitoring systems, which rely on a combination of segmentation and threshold criteria; existing systems tend to set high-level thresholds based on volume and value.

Customer segmentation tends to be equally high-level – because the more segments you have, the more thresholds you need to maintain within the rules. But this allows for little meaningful distinction based on AML risk. There is a world of difference, for example, between the risk profile of a multinational retailer and that of a local pharmacy – and in fact, the small pharmacy could even be the riskier business in terms of AML. We’ll talk about risk rating customers in more detail in a future blog.
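To make the limitation concrete, here is a minimal sketch (in Python) of how such a high-level rule typically operates. The segment names and threshold values are our own illustrative assumptions, not taken from any real system:

```python
# Illustrative only: one value threshold per broad customer segment,
# as in many legacy rules-based TM systems. Names and numbers are made up.
SEGMENT_THRESHOLDS = {
    "retail_business": 50_000,  # same rule for a multinational and a pharmacy
    "personal": 10_000,
}

def alert_on_transaction(segment: str, amount: float) -> bool:
    """Flag a transaction if it exceeds its segment's value threshold."""
    return amount > SEGMENT_THRESHOLDS.get(segment, 10_000)

# Very different businesses fall into the same broad segment, so they
# share one threshold despite very different AML risk profiles.
print(alert_on_transaction("retail_business", 60_000))  # True
print(alert_on_transaction("retail_business", 45_000))  # False
```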

These rules are important, but they’re not enough. Determined money launderers know only too well how they work and are easily able to circumvent them. Simply splitting a transaction into several amounts below the trigger threshold, directed through different accounts (a technique so common that those using it have their own label – ‘smurfs’), can be enough to avoid suspicion.
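To see why aggregation matters, the sketch below (illustrative Python again; the threshold, the look-back window and the data shape are all assumptions) flags a party whose individual payments each stay under the trigger but whose rolling total does not:

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10_000          # illustrative single-transaction trigger
WINDOW = timedelta(days=7)  # illustrative look-back window

def detect_structuring(transactions):
    """Flag parties whose individually sub-threshold payments add up to
    more than the trigger within the window. Each transaction is a
    (party, timestamp, amount) tuple."""
    by_party = defaultdict(list)
    for party, ts, amount in transactions:
        if amount < THRESHOLD:             # each leg evades the simple rule
            by_party[party].append((ts, amount))
    flagged = set()
    for party, legs in by_party.items():
        legs.sort()                        # order by timestamp
        total, start = 0, 0
        for ts, amount in legs:
            total += amount
            while ts - legs[start][0] > WINDOW:  # drop legs outside window
                total -= legs[start][1]
                start += 1
            if total > THRESHOLD:
                flagged.add(party)
                break
    return flagged

txns = [
    ("acct_42", datetime(2018, 5, 1), 4_000),
    ("acct_42", datetime(2018, 5, 3), 4_500),
    ("acct_42", datetime(2018, 5, 5), 3_000),  # rolling total now 11,500
]
print(detect_structuring(txns))  # {'acct_42'}
```

A real system would also have to follow the money across the different accounts it is routed through, which is where the network analysis described below comes in.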

Truly effective TM looks much deeper, examining the people behind transactions, interconnected relationships and unusual behaviour. The best TM approach has four clear elements:


  • Rules, in the form of thresholds and segmentation;
  • Network analysis, to examine relationships between parties, including the use of third party data;
  • Behavioural analytics, to identify unusual patterns of transactions and compare customers through dynamic peer groupings (see the sketch after this list); and
  • The use of feedback and machine learning to constantly improve the system.
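As a flavour of the behavioural element, the sketch below scores each customer’s monthly activity against a peer-group norm using a simple z-score, computed leave-one-out so a customer cannot drag their own peer statistics. The grouping, the metric and the cutoff are simplifying assumptions on our part; in practice peer groups would be derived dynamically from many behavioural features:

```python
import statistics

def peer_group_outliers(monthly_totals, peer_groups, z_cutoff=3.0):
    """Flag customers whose monthly total sits far from their peer group's
    norm. `monthly_totals` maps customer -> total; `peer_groups` maps
    group name -> list of customers. All names are illustrative."""
    flagged = []
    for group, members in peer_groups.items():
        for customer in members:
            peers = [monthly_totals[p] for p in members if p != customer]
            if len(peers) < 3:
                continue                   # too few peers to define a norm
            mean = statistics.mean(peers)
            stdev = statistics.stdev(peers)
            if stdev == 0:
                continue
            z = (monthly_totals[customer] - mean) / stdev
            if abs(z) > z_cutoff:
                flagged.append((customer, group, round(z, 1)))
    return flagged

groups = {"pharmacies": ["ph_a", "ph_b", "ph_c", "ph_d", "ph_e"]}
totals = {"ph_a": 30_000, "ph_b": 35_000, "ph_c": 32_000,
          "ph_d": 28_000, "ph_e": 400_000}
print(peer_group_outliers(totals, groups))  # [('ph_e', 'pharmacies', 123.5)]
```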

Critically, these four elements can work together to create a powerful TM system. We’ll explain in more detail in our next blog how this structured approach to TM works in practice.

If you would like to discuss these issues, or the impact of emerging technology or data and analytics on your industry, then contact our Data & Analytics team.


02 May 2018

Managing AI and machine learning model risk – PwC’s AI Quality Assurance Framework

Artificial Intelligence (AI) and machine learning are increasingly touching our lives every day through medicine, social media and banking. At PwC, we’ve delivered several AI solutions that provide augmented decision making to help our clients act with speed and precision at scale.

While AI and machine learning are exciting and present many opportunities for businesses and society, it’s also important to be aware of potential areas of risk in your AI systems and to plan ahead of time.

As decision making becomes increasingly augmented or automated, it’s crucial that businesses understand the models that help them achieve these incremental efficiency gains. This means appreciating both the customer and the wider societal impact of a model, and what it means for the business when the model fails to perform. Model risk is an increasingly important area that needs to be addressed, especially when decisions involving financial risk, environmental impact or even human life are on the line. There’s a need to understand the various performance metrics of the models, including bias in the data, the relative strengths of the algorithms used, and the generalisability of the model to unseen data. These metrics are important signals of model quality and fit, which in turn point us towards model suitability.
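One of those signals, the gap between training and held-out performance, is straightforward to check. The sketch below uses scikit-learn on synthetic data; the choice of model, the accuracy metric and the 0.05 warning cutoff are arbitrary illustrations, not part of any formal framework:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice this would be the business's own dataset.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))

# A large train/test gap is one signal that the model may not generalise
# to unseen data; the 0.05 cutoff here is an arbitrary illustration.
gap = train_acc - test_acc
print(f"train={train_acc:.3f}  test={test_acc:.3f}  gap={gap:.3f}")
if gap > 0.05:
    print("Warning: possible overfitting - investigate before sign-off")
```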

While the governance structure used for standard statistical models can be used for machine learning, there are a number of additional elements of software development that must also be considered. The tests that machine learning models go through need to be significantly more robust.
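As an indication of what such tests might look like, here are two pytest-style checks written against a generic scikit-learn-like model interface. Both the interface and the stability tolerance are assumptions on our part, not part of any published standard:

```python
import numpy as np

def test_probabilities_are_valid(model, X):
    """Predicted probabilities must lie in [0, 1] and sum to 1 per row -
    a basic engineering check that a purely statistical review can miss."""
    proba = model.predict_proba(X)
    assert np.all((proba >= 0) & (proba <= 1))
    assert np.allclose(proba.sum(axis=1), 1.0)

def test_stability_under_tiny_noise(model, X):
    """Negligible input perturbations should rarely flip a prediction;
    widespread flips suggest a brittle decision boundary."""
    rng = np.random.default_rng(0)
    jittered = X + rng.normal(0.0, 1e-6, size=X.shape)
    agreement = (model.predict(X) == model.predict(jittered)).mean()
    assert agreement > 0.99  # illustrative tolerance
```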

A machine learning governance quality assurance (QA) framework helps us to be aware of both the statistical and the software engineering constructs that the model operates within.

To understand what the industry expects from a QA framework around engineering standards, we have partnered with BSI (the British Standards Institution). PwC’s data science and AI engineers acted as technical advisors for an industry consultation piece that included a quantitative survey on AI standardisation. This was to ensure that the highest-priority areas for standards were being addressed and that the right topics were covered. PwC’s AI experts also recently provided oral evidence to the All-Party Parliamentary Group (APPG) on the ethical use of AI. Both pieces of work have cemented our belief, formed while working with our clients, in the need for an AI-specific quality assurance framework.

We have addressed this through our new proprietary AI QA Framework, which focuses on the common issues we often come across when working with clients on AI projects. These include:

  • Problem formulation that either doesn’t address the business need or is not reflective of the statistical properties of the underlying dataset;
  • Choosing a technology stack that doesn’t suit both short- and long-term business needs;
  • Inappropriate performance metrics that don’t provide complete insight into possible underlying algorithmic issues or address the business requirement;
  • Insufficient testing before model deployment;
  • Unintended impact on customers and society.

Once these issues are addressed, the model is ready to be signed off and can then be moved into production, although it’s important to note that the QA process does not end there. The transition and roll-out must take place within a robust governance structure, and because of the adaptive characteristics of these systems it’s important that a model validation exercise is undertaken regularly once in production. Further review of the model also becomes critical if abnormal results are spotted or there is a continuous feed of erroneous results.
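One common way to trigger such a review is to monitor drift between the data the model was validated on and the data it now sees. The blog does not prescribe a method, so the sketch below uses the population stability index (PSI), a widely used drift signal; the bin count and the 0.25 alert level are conventional but arbitrary choices:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a feature's distribution at validation time and in
    production - a common signal for scheduling model re-validation."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log(0)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # distribution at sign-off
live = rng.normal(0.3, 1.2, 10_000)      # drifted production data
psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.3f}")  # > 0.25 is a common re-validation trigger
```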

In summary, a well-managed AI QA framework is essential to maximise the benefits and minimise the risks when developing and deploying this ground-breaking technology. It gives both us and our clients the confidence to deliver and deploy robust, reliable AI solutions.

If you would like to discuss these issues, or the impact of emerging technology on your industry, then please get in touch with Euan Cameron.