The future of AI through our children
03 May 2017
It’s becoming increasingly clear that the recent flood of technological breakthroughs, principally the growing maturity of AI, is going to fundamentally reshape our world and how we live and work in the future.
And how we collectively respond will have profound implications for future generations.
Some of the planet’s greatest minds are working tirelessly in an attempt to steer us through this labyrinthine challenge. It’s also high on the government’s agenda. The new All-Party Parliamentary Group (APPG) on AI has been set up to bring different people together to debate, educate and inform future policy. I’m privileged to sit on the advisory board of this group and consulted extensively with our experts whilst preparing our evidence for the launch meeting, which posed the question: What is AI to you?
But it occurred to me in all the consultation that we’re doing - have we actually spoken to our future generations who will be most impacted by these changes?
A few weekends ago I turned to my 7-year-old daughter and 5-year-old son and asked them: “What is AI to you?” After a few seconds of utter bewilderment and serious concern for Daddy’s welfare, my daughter said, “But what is AI?” The best that I could come up with was to describe it as ‘the brains of a robot’.
This subsequently led to one of the most inspiring conversations I’ve ever had on the ethical considerations of autonomous systems. I was forced to think hard about how to make the subject accessible, inclusive, and engaging.
The following morning they handed me a piece of paper that took my breath away and led me to produce the following tweet:
I’ve been stunned by the online debate that has since ensued. It’s attracted glowing praise and gentle challenge from hundreds of people around the world.
This has taught me an important lesson. The language that experts use to talk about AI is often indecipherable to the general public, and this presents a risk of disenfranchisement.
There are vital questions to be asked and to be answered:
- How do we ensure that the benefits reaped through AI are spread equally across all parts of our society?
- How do we effectively measure and plan for the likely significant changes to our future job market?
- How do we equip our children with the skills that give them the best chance of success in an uncertain world?
- And - of most immediate importance - how do we build society’s trust in AI?
We need to ignite a big and inclusive conversation about the fuel that powers the AI engines: data. How do we reconcile data privacy with the huge benefits we might accrue from liberating big data? Who owns our data? How do we square important data regulations, such as GDPR, with commercial pressures and IT architectural shortcomings? Of course, alongside these data considerations, we are also moving into a phase of trying to make sense of a raft of broader ethical quandaries. At PwC UK we launched our Responsible Technology approach this week to make sure we’re taking these bigger questions into consideration when looking at technology adoption across the firm.
My experience here has taught me one thing: When trying to answer complicated questions, seek the advice of your children.
It is their pure untainted wisdom that distills clarity from complexity.
And with this clarity, we will do our best to build a future for them that matters.
In my role at PwC, I spend every day talking about AI with colleagues, major technology companies, start-ups, our clients in all industry sectors, policy-makers and the rich tapestry of interested stakeholders. I’d love to hear your views - please do get in touch.
Originally posted as part of techUK's AI week.