The UK’s new world-leading AI Centre needs to work for people and the planet

29 January 2018

by Benjamin Combes, Assistant Director, Innovation and Sustainability

The latest World Economic Forum annual meeting saw UK Prime Minister, Theresa May, put down a marker by announcing to the global business and political leaders in Davos that she wants the UK to lead the world in developing safe, ethical, and innovative Artificial Intelligence (AI). Her Davos speech built on the announcement of a new advisory body in the UK - the Centre for Data Ethics and Innovation - in last year’s Autumn Budget.

While experts differ on the extent of the potential downsides as today’s AI explosion gathers pace, there are clearly areas where, left unguided, the unintended consequences of AI could harm our economy and society. Impacts range from automation-driven job losses and eroded tax bases to technology-driven deflation and algorithmic biases - all areas we explore in our Enabling a Sustainable Fourth Industrial Revolution policy brief, written for the G20 meeting last year.

More broadly, grouping AI risks can be helpful. We have recently categorised these into six areas: Performance Risks (e.g. errors and biases); Security Risks (e.g. cyber intrusion, privacy); Control Risks (e.g. rogue and malevolent AI); Ethical Risks (e.g. lack of values, goal misalignment); Economic Risks (e.g. job displacement, liability and reputation); and Societal Risks (e.g. autonomous weapons and the intelligence divide).

There are various approaches the UK’s nascent centre can take to mitigate these risks. The first, and most important, will be to develop sophisticated governance structures for the new AI-enabled digital economy. These will include - but not be limited to - clear and comprehensive responsible technology policies, better data environments, regulation of “black box” AI models, algorithmic assurance, and measures to minimise systemic biases. We predict that the pressure for “responsible AI” will expand beyond tech companies alone as principles and self-regulatory solutions emerge for organisations, while regulators catch up.

The new Centre must therefore move quickly to manage and mitigate the full range of risks identified. It needs to be bolder, however, and seek to harness the broad range of opportunities that AI brings. Early economic reports on the impacts of AI focus on the potential upside for the macro economy. For example, the Chancellor, Philip Hammond, stated at Davos that AI has the potential to double economic growth in the UK and other advanced economies by 2035. At a global level, our research shows that global GDP could be as much as 14% higher in 2030 as a result of AI. While economic growth is important, we must also think harder about how to maximise the gains of AI for our society and our environment at large.

As the scale of the economic and human health impacts from our deteriorating natural environment grows, it is becoming increasingly important to extend the field of AI safety to incorporate “Earth-friendly” AI.

As the new report Harnessing Artificial Intelligence for the Earth, launched at Davos by PwC and the World Economic Forum shows, the AI opportunity for the Earth is significant. The report focuses on the use of AI in the context of six critical global challenges: climate change; biodiversity and conservation; healthy oceans; water security; clean air; weather and disaster resilience.

We have identified over 80 emerging AI applications for Earth challenges, which are explained and examined in the study. Across the six challenges, these include:

  • Climate change: smart agriculture, nutrition and food systems; optimised energy grids; autonomous and connected electric vehicles; climate and weather modelling.
  • Biodiversity and conservation: pollution control; plant species identification; precision monitoring of ecosystems; illegal trade monitoring and response.
  • Healthy oceans: robotic fish to fight pollution; real-time monitoring of ocean temperature and pH; coral reef mapping; illegal fishing monitoring and response.
  • Water security: simulations for drought planning; drones and AI for real-time monitoring of river quality; streamflow forecasting; decentralised water grids.
  • Clean air: pollution forecasting for transport management and early warning; air pollutant source detection; air pollutant filtration.
  • Weather and disaster resilience: improved early warning systems; automated mitigation of flood risk; real-time risk analytics for first responders.

It is, therefore, increasingly possible to tackle some of the world’s biggest challenges with emerging technologies such as AI. Governments can help by encouraging research and funding collaboration on “AI for good”, connecting industrial, academic and government research agencies. Innovative finance mechanisms and partnerships will also be needed. These could include government-backed innovation incubators, accelerators, patient (and concessional) capital, funds, and prizes to enable the scaling of tech solutions for the public - including environmental - good.

In announcing its goal to the world, I would challenge the UK Government to stretch its ambition: create a Centre for Data Ethics and Innovation that is a world leader in getting AI to work not only for people, but for the planet too.
