Practical AI ethics considerations in the enterprise

May 11, 2018

by Aldous Birchall, Head of Financial Services AI / Responsible AI co-lead

I recently presented evidence in Parliament to the All-Party Parliamentary Group on AI (APPG AI) meeting on Ethics and Accountability. My evidence focussed on what I'm seeing first-hand in industry. I hope it can help bring some insight into the practical issues we're seeing around AI accountability and ethics, and also help develop some practical solutions.

Machine learning offers a fundamentally new way of developing software, one which moves technology to the heart of the enterprise. However, the ethical dimension has often taken a back seat as companies struggle to adapt to rapid change. To address these issues, I structured my evidence around the questions the committee put to speakers:

  1. How do we make ethics part of business decision-making processes?
  2. How do we assign responsibility around algorithms?
  3. What auditing bodies can monitor the ecosystem?

How do we make ethics part of business decision-making processes?

In my experience, engineers are focussed on delivering well-defined functional requirements, and business managers on business metrics and regulatory compliance. Concerns around algorithmic impact tend only to get attention when algorithms fail or have a negative impact on the bottom line.

Machine learning makes the development and deployment of decision-making software far easier than it used to be, so in some respects this problem will only grow. Because AI software is inherently more adaptive than traditional decision-making algorithms, problems can unfold more quickly and with greater impact. There needs to be far greater awareness amongst data scientists, machine learning engineers and business managers of the impact badly designed software can have on society. Insufficient governance and quality assurance around this technology are inherently unethical and need to be addressed at all levels of the organisation.

So what can be done?

  1. Machine learning and AI courses should include sections on both ethics and how algorithms interact with society.
  2. Business managers need to understand and be held accountable for the risks.
  3. All AI deployments that interact with the public should include a documented ethics review and impact analysis (a sketch of what such a record might capture follows this list).
  4. Where appropriate, a formal mechanism that aligns a company’s technology with its ethical policies and risk appetite may be necessary.
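
To make the third and fourth points concrete, here is a minimal sketch of what a documented ethics review record could capture for a single deployment. The field names and example values are assumptions for illustration only, not a prescribed standard; any real review would follow the organisation's own governance framework.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class EthicsReview:
    """Hypothetical record of an ethics review and impact analysis for one AI deployment."""
    system_name: str
    business_owner: str            # manager accountable for the risks
    technical_owner: str           # lead engineer or data scientist
    intended_use: str
    affected_groups: List[str]     # who the system interacts with or impacts
    identified_risks: List[str]
    mitigations: List[str]
    review_date: date
    approved: bool = False         # sign-off against the company's ethical policy and risk appetite

# Example entry (illustrative values only)
review = EthicsReview(
    system_name="retail-credit-scoring-v2",
    business_owner="Head of Retail Lending",
    technical_owner="Lead ML Engineer",
    intended_use="Rank loan applications for manual review",
    affected_groups=["loan applicants"],
    identified_risks=["indirect discrimination via proxy variables"],
    mitigations=["fairness testing across protected characteristics",
                 "human review of all declined applications"],
    review_date=date(2018, 5, 11),
)
```

A register of such records, owned by the business rather than the engineering team, is one simple way to give the review a formal place in the decision-making process.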

How do we assign responsibility around algorithms?

To assign responsibility for an adverse event caused by AI, we need to establish a chain of causality from the AI agent back to the person or organisation that could reasonably be held responsible for its actions. Depending on the nature of the adverse event, responsibility will sit with different actors within the causal chain that led to the problem. It could be the person who decided to deploy the AI for a task to which it was ill-suited, or it could rest with the original software developers who failed to build in sufficient safety controls.

In practice, what does this mean?

  1. To define 'reasonableness' we will need broad acceptance of norms and standards around AI engineering, monitoring and safety.
  2. To establish causality, we may need more formal methods of registering responsibility for the development, and ownership, of AI software that interacts with or can impact the public.
  3. Ownership may come with other use-case-specific responsibilities; self-driving cars, for example, may need regular software checks as well as the current mechanical checks.

What auditing bodies can monitor the ecosystem?

For now, the AI ecosystem is concerned with 'narrow AI', that is, software agents capable of handling a well-defined set of tasks. In commerce, these tasks usually sit within a particular industry. AI has the potential to generate negative externalities specific to that industry, which in many cases will involve quite complex, domain-specific dynamics. Moreover, what may be acceptable in a healthcare setting may be unacceptable in a bank (for example, the use of personal information will differ across medical and financial contexts). So, in my view, monitoring should occur at industry level, except where there are pressing health or security issues that need direct government supervision.

In practice, this means that existing regulatory and professional bodies need to take responsibility for monitoring the AI ecosystem in their own sectors. This of course means that many of these bodies will need to acquire the requisite expertise to effectively discharge this duty.

We should also not downplay the importance of professional and academic bodies representing the technology community (for example the ACM and IEEE), which have an important role to play as technical experts and in helping to guide policy. However, the current risks from AI come from its deployment in specific industrial contexts, and existing regulatory systems are generally far better placed to monitor and sanction their constituents.

As AI technology increases in sophistication, we may need a technology-focussed rather than industry-focussed body to manage the risks around 'general AI'. There are already a number of academic and industry groups focussing on these risks that could potentially form the nucleus of a dedicated regulatory body.

Summary from the evidence meeting report prepared for the All-Party Parliamentary Group on AI (APPG AI)
