Managing AI and machine learning model risk - PwC’s AI Quality Assurance Framework

02 May 2018

Artificial Intelligence (AI) and machine learning are increasingly touching our lives every day through medicine, social media and banking. At PwC, we’ve delivered several AI solutions that provide augmented decision making to help our clients act with speed and precision at scale.

While AI and machine learning are exciting and present many opportunities for businesses and society, it's also important to be aware of potential areas of risk in your AI systems and to plan for them ahead of time.

As decision making becomes increasingly augmented or automated, it's crucial that businesses understand the models that help them achieve these incremental efficiency gains. This means appreciating both the customer and wider societal impact of a model, and what it means for the business when the model fails to perform. Model risk is an increasingly important area that needs to be addressed, especially when decisions involving financial risk, environmental impact or even human life are on the line. There's a need to understand the various performance metrics of the models, including bias in the data, the relative strengths of the algorithms used, and the generalisability of the model to unseen data. These metrics are important signals for assessing model quality and fit, which in turn point us towards model suitability.
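One of those signals, generalisability, can be estimated by comparing a model's performance on the data it was trained on with its performance on held-out data it has never seen. The sketch below illustrates the idea with an entirely hypothetical dataset and a trivial threshold "model"; the data, the noise rate and the 0.5 boundary are all assumptions for illustration, not anything from a real engagement.

```python
# Minimal sketch (hypothetical data and model): comparing training accuracy
# with held-out accuracy as one signal of a model's generalisability.
import random

random.seed(0)

# Hypothetical dataset: one feature, class boundary at 0.5, with 10% of
# labels flipped to simulate noise in the underlying data.
data = [
    (x, int(x > 0.5) ^ int(random.random() < 0.1))
    for x in (random.random() for _ in range(1000))
]
train, test = data[:800], data[800:]

def accuracy(rows, threshold=0.5):
    """Score a trivial threshold 'model' against labelled rows."""
    return sum(int(x > threshold) == y for x, y in rows) / len(rows)

train_acc = accuracy(train)
test_acc = accuracy(test)

# A large gap between the two would suggest overfitting: the model has
# memorised the training sample rather than the underlying pattern.
gap = train_acc - test_acc
```

In practice the same comparison would be made with cross-validation and with metrics chosen for the business problem (for example, precision and recall rather than raw accuracy when classes are imbalanced).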

While the governance structure used for standard statistical models can be applied to machine learning, there are a number of additional elements of software development that must also be considered. The tests that machine learning models go through need to be significantly more robust.

A machine learning governance quality assurance (QA) framework helps us to be aware of both the statistical and the software engineering constructs that the model operates within.

To understand what the industry expects from a QA framework around engineering standards, we have partnered with the BSI (British Standards Institution). PwC's data science and AI engineers acted as technical advisors for an industry consultation piece that included a quantitative survey on AI standardisation, ensuring that the highest-priority areas for standards were addressed and that the right topics were covered. PwC's AI experts also recently provided oral evidence to the All Party Parliamentary Group (APPG) on the ethical use of AI. Both these pieces of work have cemented our belief in the need for an AI-specific quality assurance framework when working with our clients.

We have addressed this through our new proprietary AI QA Framework, which focuses on the common issues we often come across when working with clients on AI projects. These include:

  • Problem formulation that either doesn’t address the business need or is not reflective of the statistical properties of the underlying dataset;
  • A technology stack that is ill-suited to either short- or long-term business needs;
  • Inappropriate performance metrics that don’t provide complete insight into possible underlying algorithmic issues or address the business requirement;
  • Insufficient testing before model deployment;
  • Unintended impact on customer and society.

Once these issues are addressed, the model is ready to be signed off and can then be moved into production, although it's important to note that the QA process does not end there. The transition and roll-out must take place within a robust governance structure, and because of the adaptive characteristics of these systems it's important that a model validation exercise is undertaken regularly once in production. Further review of the model also becomes critical if abnormal results are spotted or there is a continuous feed of erroneous results.
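A common ingredient of such a regular validation exercise is a drift check: comparing the distribution of the model's inputs or scores in production against a reference sample from development. One widely used statistic for this is the population stability index (PSI). The sketch below is an illustrative implementation on synthetic data; the distributions and the 0.1/0.25 rule-of-thumb thresholds are assumptions for illustration, not prescribed by any framework.

```python
# Minimal sketch (hypothetical data): a population stability index (PSI)
# check, one common way to flag drift in a deployed model's input or
# score distribution during regular validation.
import math
import random

def psi(reference, current, bins=10):
    """PSI between two samples, bucketed on the reference distribution."""
    ref_sorted = sorted(reference)
    # Bin edges at reference quantiles, so each bucket holds roughly
    # equal reference mass.
    edges = [ref_sorted[int(len(ref_sorted) * i / bins)] for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for v in sample:
            b = sum(v > e for e in edges)  # bucket index for value v
            counts[b] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    p, q = proportions(reference), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # development sample
stable = [random.gauss(0.0, 1.0) for _ in range(5000)]    # production, no drift
shifted = [random.gauss(0.8, 1.0) for _ in range(5000)]   # production, drifted

# An illustrative rule of thumb: PSI below ~0.1 suggests stability, while
# PSI above ~0.25 would typically trigger a fuller model review.
```

Alerts from a check like this would feed back into the governance process described above, triggering the fuller review when abnormal results appear.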

In summary, a well-managed AI QA framework is essential to maximise the benefits and minimise the risks when developing and deploying this ground-breaking technology. It gives both us and our clients the confidence to deliver and deploy robust and reliable AI solutions.

Sudeshna Sen | AI Strategy

Aldous Birchall | Artificial Intelligence FS Consulting Lead
