
6 posts from October 2016

31 October 2016

Is there Art in Artificial Intelligence?

Picture the scene: the movie executives, producers and the director sit as the audience in a horseshoe. At the centre, a humanoid robot “pitches” an idea for a feature film. Within seconds one of the producers rejects it as “too fluffy” and the others agree. Rejected. The robot thinks again, and produces another plot on the spot, modified to take the feedback into account. By the end of the one-hour meeting, the director has three new scripts in development; the Artificial Intelligence (AI) engine driving the robot has already produced the 120-page drafts.

Sound fanciful? It is. But the bones of this kind of process are beginning to emerge.

The most effective stories follow a well-trodden path. Scholars have been classifying stories by their form since the Ancient Greeks, and the modern consensus seems to be that there are 6 or 7 basic plots. This consensus has been reinforced by various Big Data analyses, including the Hedonometer, which has analysed 1,700 literary classics and grouped them into 6 basic plots based on the patterns of sentiment progression it found in the words. And wherever patterns can be found, AI is one step closer to producing variations on those patterns.
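The idea of a sentiment-progression “arc” is simple enough to sketch in a few lines. The toy Python below is purely illustrative — the word valences and the tiny lexicon are invented, and this is nothing like the Hedonometer’s actual instrument — but it shows the principle: score each word, smooth over a window, and a plot shape emerges.

```python
# Toy sentiment arc: score words against a tiny hand-made valence
# lexicon (hypothetical values), then average over a sliding window
# to trace the emotional shape of a "story".

VALENCE = {"joy": 1.0, "love": 0.9, "win": 0.7,
           "loss": -0.7, "grief": -0.9, "death": -1.0}

def sentiment_arc(words, window=3):
    scores = [VALENCE.get(w.lower(), 0.0) for w in words]
    # Sliding-window mean smooths word-level noise into an arc.
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

story = "love and joy then loss and grief then win".split()
arc = sentiment_arc(story)
# Even this toy text yields a rise-fall-recovery shape.
```

Cluster many such arcs and groupings like “rags to riches” (steady rise) or “man in a hole” (fall then rise) fall out naturally.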

And yet, the idea of a computer producing a coherent story that isn’t simply a “photo-fit” of other stories instinctively feels impossible. Admittedly, based purely on logistics, film is arguably one of the more “complicated” forms of art, so we can assume that AI will struggle to compete in this arena for at least a few years. But the other form of story, the novel, does not have this excuse; it can, after all, be produced by one person, a pen and a piece of paper.

AI Art does have some early runs on the board: poetry that has convinced literary critics, images that evoke the old masters. Google’s Magenta project is using image recognition capability in reverse to create images of a given object or theme. Think of all those times you’ve walked round a modern art gallery muttering “anyone could have produced that” – that “anyone” now includes computers.

The thing that differentiates these early successes from the daunting task of creating an AI “story” is perhaps the degree to which the value is in the “eye of the beholder”. Abstract or poetic works require the observer to make their own interpretation, whereas a story (and to some extent music) has to be much more rigorously structured and comprehensive to be convincing.

The jury is out on whether true creativity will ever emerge from AI. Some believe that AI will never have the “intent” which is, for them, a defining requirement of anything creative. Others say that as soon as a human gives an instruction to an AI tool, there is an “intent” and therefore this test is passed. In the end it hinges on whether creativity simply comes from stirring around a heap of influences, or whether there’s an element of creativity that will never be produced from source data, processing power and computational neural networks.

One thing that is almost certain is that AI, like the screenwriting bot above, will take market share. Successful AI that produces pleasing images or stories by mixing and matching elements from popular examples will no doubt proliferate. But every AI will need its sales team and programmer, who will live or die by their ability to select and present the raw artistic material. So perhaps a scriptwriter might become an editor, agent and brand for their AI system, spending less time sweating over a keyboard and more time developing and honing it. This may simply be another example of automation enabling us to concentrate on the truly creative activities.

26 October 2016

Do you have Management Information Confidence?

Many people talk about Management Information (MI) as if it’s a standard, uniform product. What’s easy to forget is that one organisation’s MI looks very different from another’s, even if they’re in the same sector. It’s rather like comparing hairstyles – it’s subtly different for each of us and how we choose to cut, colour and style it is uniquely personal (although I can only dream of those days sadly…)

It’s the same with MI. Every organisation has data – in fact, in today’s world of Big Data, organisations are positively overloaded with it – but how each chooses to organise, analyse and use it is unique. There’s no one-size-fits-all for developing MI; every organisation needs, and should have, its own tailored approach.

But in a world that’s awash with data, it’s easy to get to the stage where you can’t see the wood for the trees. Big Data, in particular, offers many opportunities to analyse and learn, but it can also waste a lot of time. Collecting and processing data can be a hugely inefficient operation, so it’s critical to be focused and targeted. Too many organisations try to collect as much data as possible, or overlook what they already have, rather than asking themselves the basic questions: What do we need this for? What do we need to report? Just because you can collect data doesn’t mean that you should.

This is why we talk about Management Information Confidence (MIC). It’s about exploiting your reporting to its full potential and creating actionable insight to enable the bold decisions to be made.

It starts by linking KPIs and reporting with business strategy. If the KPI tells you nothing that you need to know about the business and provides no obvious actions, you don’t need it. And you don’t need the data that feeds it. For the data you do need you can then focus on controlling the data supply chain, from the data sources through to reporting including the processes, systems and roles and responsibilities that can impact the quality of the output.

If, each time we received a report, it was relevant, we didn’t have to double-check the figures, and we could take immediate action on what it was telling us, we’d all be more confident – and waste less time – in our decision-making.


17 October 2016

Our Lives in Data: social media risk

Data is everywhere. And social media is just one of the ways we’re creating massive sets of data every day, every minute, every second. From posting a status to clicking ‘like’ – just how much are we sharing about our lives on Facebook? And how much do we want companies to know about us? 

In the first of a series of vlogs from the Science Museum's 'Our Lives in Data' exhibition, of which we are proud sponsors, Phil Mennie, PwC’s social media governance leader, tells us why it’s so important for businesses to embrace and correctly govern their use of social media. Watch the video below to find out more:



Machine learning and the opportunity to make better decisions, faster


Financial Services organisations make predictions all the time, in all parts of their business. Guessing what the future might hold and aligning decisions with those estimates is implicitly what most organisations do.

Organisations often take ad-hoc approaches to such predictions, leaving experts and professionals free to apply their judgement. Research has shown that this carries a hidden cost: inconsistent and sometimes suboptimal decision-making.

As a first step, organisations should become more aware of the important role predictions play and recognise the different categories that exist:

(1) Many important events are so unique that the past provides little guidance on how to deal with them. ‘Superforecasting’, an approach that breaks problems down into elements that can be forecast with greater accuracy, can help insurers prepare for and deal with new and unique events.

(2) Other prediction cases have past data but paradigms are shifting. The recent pension reform in the UK left many insurers grappling to predict client behaviour despite large amounts of pension data collected under the old regime. In these cases we have seen insurance companies apply simulation approaches.

(3) There are also situations where there is plenty of historical data and the prediction target paradigm is stable. In these situations there are a lot of helpful insights hidden in data that can be uncovered by machine learning.

Machine learning is applicable to a surprising number of day-to-day operational decisions in all business, ranging from logistical challenges in supply and demand, to decisions on next 'best actions' in customer interactions.

The benefits of machine learning for everyone

Digitally native companies have perfected the art of using data to improve customer interactions and experiences. So far, the type of powerful machine learning capability developed by these on-line giants has not been easily accessible to most companies.

This is about to change. The recent emergence of a new generation of machine learning platforms is set to make machine learning far more accessible to businesses of all types and sizes.

A platform that we work with contains a library of the strongest machine learning algorithms and has automated much of the specialised data preparation and technology work needed to run and deploy the algorithms in business as usual.

This has helped our clients to prove the business benefits of machine learning within a short time frame and at a fraction of the cost.

Human involvement in the collection and engineering of relevant data remains crucial to machine learning success, but this platform allows data scientists to engage better with the business by freeing up their time.

Better decisions in insurance

Insurance is a good example of an industry that relies heavily on expert judgement to assess and manage risk, employing underwriters, claims handlers and actuaries. This has worked well, but there is no doubt that there is significant scope to instil more data-driven decision-making across the insurance value chain.

Recently, my team assisted the customer service department of an insurance company facing rising cancellations in a profitable product line. Its anti-churn campaigns had dealt with cancelling customers in a reactive, ad-hoc way. By collecting historical data and using machine learning to predict the ‘likelihood of’ and ‘reason for’ cancellation for each customer, we were able to help the company intervene before a customer cancelled, and so improve retention.
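The shape of such a churn model is easy to sketch. The Python below is purely illustrative — the weights, field names and customers are invented, not the model from the engagement — but it shows the core move: turn each customer’s attributes into a probability of cancelling, then rank so the retention team contacts the riskiest first.

```python
import math

# Hypothetical, hand-set weights standing in for a trained model;
# in practice these would be learned from historical cancellation data.
WEIGHTS = {"months_since_last_contact": 0.08,
           "complaints_last_year": 0.9,
           "premium_increase_pct": 0.05}
BIAS = -3.0

def churn_probability(customer):
    # Logistic link: weighted sum of features squashed into (0, 1).
    z = BIAS + sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

customers = [
    {"id": "A", "months_since_last_contact": 2,
     "complaints_last_year": 0, "premium_increase_pct": 0},
    {"id": "B", "months_since_last_contact": 18,
     "complaints_last_year": 2, "premium_increase_pct": 15},
]

# Rank by risk so outreach intercepts the likeliest cancellers first.
at_risk = sorted(customers, key=churn_probability, reverse=True)
```

The ‘reason for’ prediction mentioned above would typically be a second, multi-class model over the same features, so the retention call can also address the likely cause.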

Even in a mature, data-rich industry there is still significant potential to improve the quality and speed of decision-making. Increased prediction awareness and the emergence of more accessible machine learning technology will make the difference.

This blog was originally published on World in Beta.


10 October 2016

What the ancient Greeks can teach us about investing in AI

Since Icarus and his dad looked at the birds in the sky to inspire their escape from Crete, humans have attempted to mimic the natural world. Two thousand years later, we have engineered ways to swim, fly and travel faster than nature, but are we finally ready to build a machine that can think faster than anything in nature?

Anyone who saw the news of Google DeepMind’s recent victory over the world champion of the fiendishly complex strategy game “Go” would be right in thinking that we are suddenly making significant progress towards this goal. A combination of vast amounts of data and Deep Learning techniques has created a perfect storm, driving exponential progress from the fragile neural networks operating in labs just a few years ago to the real-world systems that have already begun to infiltrate our daily lives. Every time you use a search engine or the voice control technology on your phone, you are benefitting from the explosion in Artificial Intelligence (AI).

Of course, the challenge of taking any product from a controlled environment (such as a lab, or a rule-based game like Go) into the real world is roughly proportional to the complexity of the interface. Whilst optimising a bucket to operate successfully in the real world is relatively straightforward, the list of testable attributes of a complex machine, let alone an infinitely complex structure like the brain, quickly becomes overwhelming. And, although AI systems can continuously learn and self-correct, the initial learning phase will have to be carefully managed to minimise damage along the way. The consequences of not doing this can range from the embarrassing to the potentially fatal.

Broadly, there are two ways that an AI system can improve through feedback and learning: “supervised” (learning from examples that are labelled with the correct outcome) and “unsupervised” (deducing patterns and structure from inputs without labelled answers). IBM’s AI approach seems to be focused on the former, seeking to solve business problems by studying “case histories”, whilst Google AI makes heavy use of the latter, finding structure in huge volumes of data. The potential of any system that combines both of these is difficult to imagine, but there is an even higher level: instead of human versus machine, consider human plus machine. We watch with interest the situations where humans and AI are working together, for example in the worlds of medicine (diagnostic imaging) and business (such as Deep Knowledge Ventures, which has appointed a machine learning program to its board of directors).
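The supervised/unsupervised distinction can be made concrete in a few lines. The toy Python below (data and numbers are invented, purely for illustration) shows the same four measurements handled both ways: with labels, we predict; without labels, we discover the grouping ourselves.

```python
# Supervised: labelled examples -> predict a label for a new input.
labelled = [(1.0, "low"), (1.2, "low"), (8.0, "high"), (8.3, "high")]

def predict(x):
    # 1-nearest-neighbour: copy the label of the closest known example.
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels -> discover structure. Here, one assignment
# pass of k-means with two fixed starting centres splits the data
# into two clusters without ever being told what they mean.
points = [1.0, 1.2, 8.0, 8.3]
centres = [0.0, 10.0]
clusters = {0: [], 1: []}
for p in points:
    nearest = min(range(2), key=lambda i: abs(centres[i] - p))
    clusters[nearest].append(p)
```

In the supervised case a human has already decided what counts as “low” and “high”; in the unsupervised case the two groups emerge from the data, and it is up to a human to interpret them.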

Such technology is already making its mark in Professional Services in AI tools such as ROSS and Kira, which are being used to perform advanced analysis at a fraction of the traditional cost. It is only a matter of time before M&A players start using AI: to improve target identification; to increase the depth and speed of diligence analysis; to mine post-deal opportunities.

And of course the AI industry itself continues to attract interest from investors – while Tech as a sector flat-lined in 2015, deal activity in AI continued to grow, quadrupling between 2010 and 2015. However, whilst AI will no doubt take us into a new realm of capability, when investing we need to ensure, as in the case of Icarus, that we do not allow hubris to overtake us.


04 October 2016

Data: Are we speaking the same language?

Britain and America, the saying goes, are two countries divided by a common language. There have been numerous occasions during my working life in data analytics when a similar thought has crossed my mind. We have extraordinary power to collect, collate and analyse data and yet we often fail to ask ourselves the most basic of questions: Does this word mean the same thing to everyone?

Take, for example, data aggregation. At PwC we’ve developed the Intelligent Data Framework – an approach to integrating data from multiple sources. This isn’t, of course, a new idea – many organisations are juggling huge amounts of data from many sources and it makes sense that if you can bring it all together, your decisions will be based on richer evidence.

But where our approach is different is in putting data at the centre of the Framework, rather than letting technology lead. It’s often assumed that the best data analytics comes out of the best (meaning the most expensive) systems. But that’s not the case at all. Good analytics begins and ends with good data; if you get the data right, you can capitalise on it and focus your technology spend where it’s actually needed. The best and most expensive technology in the world is worth nothing if the data it uses is inconsistent or of poor quality. Good data, on the other hand, will make your existing systems all the more valuable.

Getting the data right means it is well-governed and, most importantly, standardised across the organisation. And that means everyone must speak the same language – in data terms, a common taxonomy.

In the accounting world, the profession has been collaborating for years to produce a standardised taxonomy under which data can be managed, shared and compared – the chart of accounts. In the HR world, though, there’s no equivalent initiative and, so far, no sign of one appearing. So people are making the language up for themselves.

And that can be a real problem within organisations, particularly in siloed companies. It’s not unusual to find two people within the same company using a different definition of ‘head count’, for example. If the definitions aren’t consistent, data analysis will make absolutely no sense.
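To see how quickly definitions diverge, consider a toy sketch (the data, field names and definitions below are invented for illustration): the same workforce, two perfectly reasonable definitions of ‘head count’, two different answers.

```python
# The same workforce viewed through two departments' definitions.
staff = [
    {"name": "A", "type": "permanent",  "fte": 1.0},
    {"name": "B", "type": "permanent",  "fte": 0.5},  # part-time
    {"name": "C", "type": "contractor", "fte": 1.0},
]

def headcount_people(staff):
    # One plausible definition: every individual, any contract type.
    return len(staff)

def headcount_permanent_fte(staff):
    # Another plausible definition: full-time-equivalent permanent staff.
    return sum(p["fte"] for p in staff if p["type"] == "permanent")

# headcount_people(staff) and headcount_permanent_fte(staff) disagree
# (3 people vs 1.5 FTE) -- neither is "wrong"; they answer different
# questions, which is exactly why a shared taxonomy must pick one.
```

Any report that joins the two departments’ figures without reconciling the definitions will quietly compare 3 with 1.5 and draw the wrong conclusion.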

So under the Intelligent Data Framework we begin with the taxonomy. There’s no short cut – everyone has to sit down together and create a definition of terms that works for their organisation. It’s an iterative, collaborative process, but one that’s often forgotten or avoided altogether because it seems like a drag. Combined with the visualisation tools now available to play back outputs in near real time, this process gets everyone bought in to the issues and wanting to fix them.

This is a new way of aggregating data, putting the business and data at the heart. And there are quick gains to be won if you get it right. The result is more consistent data and the confidence that everyone is on the same page, providing the platform for bolder and more insightful decision-making. But most importantly, it makes everyone think far more carefully about data and what it means; the organisation becomes more data conscious. Life’s a lot simpler when we all understand each other.