Transaction monitoring: Why segments and thresholds are never enough

09 May 2018

by Scott Samme, Partner, and Jeremy Davey, Financial Crime Analytics Director

In our last blog we talked about the difficulties banks face with transaction monitoring (TM) for anti-money laundering (AML) purposes. Rapid developments in data analytics and machine learning will make TM far more efficient in the very near future – but banks need to get their existing systems in order first.

TM is increasingly making use of machine learning and robotic process automation. This has allowed banks to reduce the number of false positive alerts, manage risk better and run a more efficient investigative process. But this isn’t entirely about technology; the benefits won’t be fully felt – and regulators won’t be satisfied that risks are being managed well – unless the underlying system is solid. This was the thinking behind the New York State Department of Financial Services’ Part 504 transaction monitoring regulation, which came into effect in 2017.

There are two significant areas where a more robust approach to TM pays dividends. The first is the quality of data used. Many existing TM systems were put in place more than a decade ago – and were set up to collect data for the products that were offered at that point. Banks have added many new products over the years, but too often the systems haven’t been updated or re-designed to capture the data associated with them.

The second is the rules-based approach that most banks currently use for TM. Understanding the risk attached to each transaction – at a time when money laundering has become more organised, with ever more elaborate schemes – is a multidimensional issue, requiring nuance, recognition of the parties involved, and multiple layers of data. Too often these are absent from legacy monitoring systems, which rely on a combination of segmentation and threshold criteria; existing systems tend to set high-level thresholds based on volume and value.
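To make the limitation concrete, a volume-and-value rule of this kind amounts to little more than the sketch below. This is a minimal illustration only – the thresholds, field names and `Transaction` type are invented for the example; real systems calibrate thresholds per segment and product:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    account_id: str

# Illustrative thresholds only; real values are tuned per bank and per segment.
VALUE_THRESHOLD = 10_000.0   # flag any single transaction above this value
VOLUME_THRESHOLD = 20        # flag accounts exceeding this many transactions per day

def flag_by_rules(daily_txns: list[Transaction]) -> bool:
    """High-level rules-based check: value and volume thresholds only.

    Note what it cannot see: who the counterparties are, how accounts
    are connected, or whether this behaviour is normal for the customer.
    """
    return (
        any(t.amount > VALUE_THRESHOLD for t in daily_txns)
        or len(daily_txns) > VOLUME_THRESHOLD
    )
```

Everything below both thresholds passes silently – which is exactly the blind spot a determined launderer exploits.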

Customer segmentation, too, tends to be equally high-level – because the more segments you have, the more thresholds you need to maintain within the rules. But this allows for little meaningful distinction based on AML risk. There is a world of difference, for example, between the risk profile of a multinational retailer and that of a local pharmacy – and in fact, the small pharmacy could even be the riskier business in terms of AML. We’ll talk about risk rating customers in more detail in a future blog.

These rules are important, but they’re not enough. Determined money launderers know only too well how they work and are easily able to circumvent them. Simply splitting a transaction into several amounts below the trigger threshold, directed through different accounts (a technique so common that those using it have their own label – ‘smurfs’), can be enough to avoid suspicion.
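Catching this kind of structuring requires aggregating sub-threshold transactions across accounts known to be linked to the same party – a first step towards the network analysis discussed below. The sketch that follows is illustrative only, assuming a simple account-to-party mapping that in practice would come from customer and third-party data:

```python
from collections import defaultdict

THRESHOLD = 10_000.0  # illustrative trigger threshold

def flag_structuring(txns, account_to_party):
    """Flag parties whose combined sub-threshold transactions breach the threshold.

    txns: list of (account_id, amount) pairs.
    account_to_party: mapping from account_id to the controlling party.
    Each transaction individually stays under the trigger, so a
    per-transaction rule would miss all of them.
    """
    totals = defaultdict(float)
    for account_id, amount in txns:
        if amount < THRESHOLD:          # only the deliberately small pieces
            totals[account_to_party[account_id]] += amount
    return {party for party, total in totals.items() if total >= THRESHOLD}
```

Three transfers of 4,000 through three accounts controlled by one party would each pass a 10,000 threshold rule, but together they exceed it and the party is flagged.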

Truly effective TM looks much deeper, examining the people behind transactions, interconnected relationships and unusual behaviour. The best TM approach has four clear elements:


  • Rules, in the form of thresholds and segmentation;
  • Network analysis, to examine relationships between parties, including the use of third party data;
  • Behavioural analytics, to identify unusual patterns of transactions and compare customers through dynamic peer groupings; and
  • The use of feedback and machine learning to constantly improve the system.
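The behavioural analytics element, for instance, can be as simple in principle as comparing a customer against a dynamic peer group and flagging sharp deviations. The sketch below uses a z-score against peers’ monthly transaction values; the cut-off and data shape are invented for illustration, and production systems would use far richer features and peer definitions:

```python
from statistics import mean, stdev

def behavioural_outliers(peer_values, z_cutoff=2.5):
    """Flag customers whose monthly transaction value deviates sharply from peers.

    peer_values: mapping of customer id -> monthly transaction value for one
    dynamically defined peer group. Returns the customers whose z-score
    against the group exceeds the (illustrative) cut-off.
    """
    values = list(peer_values.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:                      # identical peers: nothing stands out
        return set()
    return {c for c, v in peer_values.items() if abs(v - mu) / sigma > z_cutoff}
```

A customer transacting 10,000 a month in a peer group of pharmacies averaging around 100 would stand out immediately – regardless of whether any single transaction crossed a rules-based threshold.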

Critically, these four elements can work together to create a powerful TM system. We’ll explain in more detail in our next blog how this structured approach to TM works in practice.

If you would like to discuss these issues, or the impact of emerging technology or data and analytics on your industry, then contact our Data & Analytics team.
