No model, no model risk

20 July 2018

As large banks extend model governance to cover a wider range of business areas, counterintuitive and counterproductive incentives can be created. The problem arises where pure expert judgement processes could be used as alternatives to models.

In some business areas, notably Treasury, important metrics are often determined through sequences of expert judgements without any modelling components; such “expert judgement processes” would not typically be in-scope for model risk. If the business then attempts to increase the rigour of its calculation by introducing modelling components, the process becomes in-scope for model risk, and the business faces stricter controls and might even be penalised in the form of model risk capital charges. This can create disincentives to the very behaviour which Model Risk functions generally seek to promote (e.g. using data-driven models rather than pure expert judgement).

Background:

The high standards of model governance that have traditionally applied to pricing (and some capital) models are currently being rolled out to a wider set of model types by most large banks. This is largely driven by regulation, with SR 11-7 standards proliferating across the industry and with specific requirements for model validation appearing in several places (for example in the recent IRRBB Basel standards, or the PRA’s SS3/18).

This focus on model risk per se is quite natural in investment banking, where the role of a model is relatively clear (e.g. versus the role of a trader’s judgement). More broadly, there may be a concern that model outputs might be used by decision makers who do not necessarily understand the modelling nuances, and so would by themselves be unable to factor model uncertainty into their decision making. This would not be the case for expert judgements, which (presumably) the business decision makers should be able to appraise without assistance from a validation team.

The problem:

As noted above, there may be functions within a bank (for example Treasury) in which the role of a model is more ambiguous than it is in investment banking, and in which models and sequences of expert judgements may at times be alternative methodologies.

A business unit seeking to decrease “model/process risk” might add a quantitative modelling component as a comparative check to a process of expert judgements. However, this business unit would now find itself having to devote resources to Model Risk Management (e.g. for documentation or monitoring), and might even face capital charges for the very model uncertainty it was attempting to decrease. Such a business unit might perceive itself as being punished for “doing the right thing”.

This “penalisation” for an increase in rigour occurs because the introduction of a model moves the process from being an uncontrolled methodology (expert judgement) to being a controlled methodology (a model). If there is no model, there is no model risk.

Potential solutions:

This issue might be resolved by incorporating both models and expert judgement processes within a single control framework.

One possibility would be to extend the definition of a model to include expert judgement processes. Though this would put the two on an equal footing, it might cast too broad a net, over-expanding the model inventory and placing a heavy burden on both the business and the Model Risk function.

An alternative approach would be to formally define and recognise expert judgement processes (as distinct from models) within the model risk policy. Cases in which a model might otherwise have been used as an alternative to an expert judgement process could then be called out for approval by validation, subject to the usual standards of documentation, monitoring and so on.

Finally, the model risk policy might be embedded in a wider, more holistic risk framework with a focus on controlling the uncertainty around outputs more generally.

A wider issue:

So far we have focused only on the line between models and expert judgement processes; however, this may be seen as part of a broader issue. For example, a similar question arises when deciding whether a quantitative process should be classified as a “model” or a “tool”, the key difference being the reliance on assumptions. In general, the label applied to a process should not lead to a lack of appropriate controls.

 

Yousef Ghazi-Tabatabai
