In AI we trust: Making the case for responsible artificial intelligence
It starts – and ends – with trust. To be successfully adopted, new technology must have the confidence of those it affects.
Artificial intelligence is no different. To engender trust in AI, organisations need to take a responsible approach. That means tackling bias, overcoming unfairness, resolving ethical dilemmas, and making AI explainable.
One of the highest-profile areas, and the first real test of ethics, fairness and trust in AI, will be autonomous vehicles.
AI will correlate and analyse vast quantities of real-time data from the vehicle and its surroundings to navigate smoothly and make critical decisions quickly. But how can people be reassured that AI will make the right decisions? And what is the right decision?
There is an ethical dilemma here. For those designing the algorithms that will run autonomous vehicles, the 1967 ‘trolley problem’ is the lodestar. How can a system be designed to weigh ethical decisions in the event of an unavoidable crash, especially when lives are at stake? And who is in control, or at fault: the data scientist, the car, or the ‘driver’?
Bias is another issue AI must overcome. Take the example of employers using AI to automatically analyse and filter CVs: a model trained on historical hiring decisions can entrench bias that reflects the make-up of the existing workforce, as the sketch below illustrates. This will fail most people’s test of fairness. It is also a complex issue, which is why we say making AI explainable and easily understood is crucial for building trust.
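To make the mechanism concrete, here is a minimal, hypothetical sketch in Python using scikit-learn and entirely synthetic data. It is an illustration of the dynamic described above, not a real recruitment system: a screening model trained on biased historical decisions reproduces that bias through a correlated proxy feature, even though the protected attribute itself is never given to the model.

```python
# Illustrative only: synthetic data and hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: a genuine skill score, plus a proxy feature
# (think postcode or hobby) that happens to correlate with group membership.
group = rng.integers(0, 2, n)               # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, n)
proxy = group + rng.normal(0.0, 0.5, n)     # correlated with the group

# Historical hiring decisions favoured group 0 at equal skill levels.
hired = (skill + (group == 0) + rng.normal(0.0, 0.5, n)) > 0.8

# Train WITHOUT the protected attribute: only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still selects group 0 far more often; the historical bias
# survives via the proxy even though 'group' was never a feature.
preds = model.predict(X)
for g in (0, 1):
    print(f"selection rate, group {g}: {preds[group == g].mean():.2f}")
```

The point of the sketch is that simply removing the protected attribute does not remove the bias, which is one reason explainability matters: you need to see what the model is actually relying on.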
It is not possible to remove bias entirely. But it is possible to be aware of unintended bias and mitigate it accordingly. For example, a team of data scientists that better represents the cultural, ethnic and gender make-up of society at large is likely to make more inclusive decisions.
If the use of AI in recruitment raises fairness and ethical issues, then giving society confidence that the technology is capable of safely driving cars is a far greater challenge.
According to the AI Predictions survey, 47% of organisations test for bias in data, models and human use of algorithms. That’s a solid number, but it suggests over half still neglect – or are unaware of – the negative effects of bias. One simple form such a test can take is sketched below.
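As a hedged example of what a basic check might look like, the sketch below computes the disparate impact ratio: each group’s selection rate divided by the most-favoured group’s rate. The `decisions` and `group` arrays are hypothetical toy data; a ratio below roughly 0.8 is commonly treated as a warning sign (the ‘four-fifths rule’), though a real bias audit covers far more than one number.

```python
# A minimal sketch of one common bias test: the disparate impact ratio.
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> dict:
    """Each group's selection rate relative to the most-selected group."""
    rates = {str(g): float(decisions[group == g].mean()) for g in np.unique(group)}
    best = max(rates.values())
    return {g: round(rate / best, 3) for g, rate in rates.items()}

# Hypothetical toy data: group 'b' is selected at two-thirds the rate
# of group 'a', falling below the common 0.8 warning threshold.
decisions = np.array([1, 1, 0, 1, 0, 1, 0, 1])
group     = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(disparate_impact(decisions, group))   # {'a': 1.0, 'b': 0.667}
```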
One way to add substance to ethical decision-making is to align it with company values. Taking a responsible approach to AI is not only the right thing to do for clients, customers and society; it’s the right thing to do for business. Trust is transactional, a quid pro quo between company and customer. It’s easier to gain trust if the customer can see a benefit – an improved customer experience, for example. Transparency underpins trust. So does tackling bias and establishing an ethical code.
That’s why we are helping organisations not only understand and exploit technologies such as AI, but also align them with their business objectives and social and ethical responsibilities.
Our research estimates AI could contribute $15.7 trillion to the global economy by the end of the next decade. Organisations that earn trust will prosper; but to earn it, they must address complex ethical and cultural issues, build belief in the effectiveness of the technology, and provide society with clarity and confidence over who is in control.
You can explore these issues at ‘Driverless: Who is in control?’, a free exhibition at the Science Museum in London. PwC is sponsoring the exhibition as part of our focus on promoting the responsible, ethical use of AI in business and society.