Nudging better security
21 October 2016
By Max Klugerman
Recently my colleagues Daisy McCartney and Edward Starkie in our Cyber Security Culture Team posted blogs about the important role that culture and employee behaviours play in supporting security objectives, and about the importance of storytelling in security engagement (How to create a security conscious workforce, why it is about more than just awareness… and How to use storytelling to create a culture of cyber security engagement). In this post I'd like to talk about how behavioural economics could help maintain security in your organisation - how it could help build your ‘human firewall’. Behavioural economics has been shown to change behaviour through small (and therefore low cost) changes to the decision environment. Could we influence secure behaviour in the moments that matter through subtle changes to the employee environment, rather than by restricting free choice - could we nudge people in the right direction?
What is ‘behavioural nudging’?
Psychologists Daniel Kahneman and Amos Tversky famously noted how the framing of decisions can affect choice. For example, individuals will choose differently between receiving £50 outright and receiving £100 and then having £50 taken away, despite the identical outcome – we are loss averse. Beyond framing and loss aversion, there are many mental shortcuts (known as heuristics) which affect decision making. For example, we often interpret or focus on data which confirms our preconceptions - this is known as confirmation bias; or we may do or believe something because many others do - this is known as the bandwagon effect.
This understanding of heuristics has had many applications, one being the use of behavioural ‘nudges’ - small changes to the environment. A nudge works by using the heuristics described above to influence behaviour: it aims to make a certain decision more attractive or easier to make, without compromising freedom of choice. One commonly cited example is the use of default options – making pension saving an opt-out arrangement rather than opt-in can drastically increase participation, because even though individuals remain free not to contribute, status quo bias deters them from changing the default.
So what might the implication be for security? Enterprise security is an area where desired behaviours are not always seen. Despite awareness programmes, how many employees fail to challenge tailgaters, lower their security standards when working from home, or neglect to report a security concern? Nudge theory can suggest changes to the choice architecture which could encourage secure behaviour without a large cost.
Let's take reporting a security incident or concern, such as a suspicious email or an instance of tailgating – often, reporting of such things is much lower than we would like. If we ask why, we will probably find certain frictions in many organisations – it’s too difficult to find the reporting form (an ability issue), or individuals downplay the importance of reporting (a motivation issue).
- One behavioural intervention to address difficulty could be to add a permanent button linking to the reporting form on each individual’s desktop. Alternatively, the organisation's IT Helpline could be reconfigured so that reporting a security breach is the first option presented.
- A nudge may appeal to employees' social nature and biases - for example, campaigns which emphasise how frequently others are reporting security incidents can trigger a bandwagon effect which encourages reporting.
- Another option our clients are exploring is gamification - desired security behaviours could be encouraged by making them fun or competitive, for example by feeding legitimate security incident reports into leader boards or points systems. We’ll be discussing gamification in an upcoming blog in this series.
Of course it’s not always going to be simple to nudge security. The approach we use involves trialling nudges in specific business areas and comparing their impact against control groups who are left untouched. This allows for any tweaks or re-designs which may be required, depending on the results. For instance, the example above which communicates how frequently others are reporting security incidents could have a perverse effect if individuals conclude that they don’t have to report security incidents because others will do it for them. Testing allows such effects to be observed, and where a nudge succeeds it can be replicated across your organisation.
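To make the trial-versus-control idea concrete, here is a minimal sketch of how the impact of a nudge on incident reporting might be assessed. The figures and function names are hypothetical, and the source does not prescribe any particular statistical method; a standard two-proportion z-test is used here purely as an illustration.

```python
import math

def two_proportion_z(trial_reports, trial_n, control_reports, control_n):
    """Two-proportion z-test: is the reporting rate in the nudged (trial)
    group higher than in the untouched control group?

    Returns the z statistic and a one-sided p-value.
    """
    p_trial = trial_reports / trial_n
    p_control = control_reports / control_n
    # Pooled proportion under the null hypothesis of no difference
    pooled = (trial_reports + control_reports) / (trial_n + control_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / trial_n + 1 / control_n))
    z = (p_trial - p_control) / se
    # One-sided p-value from the standard normal CDF
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical figures: 40 reports from 500 employees who saw the nudge,
# versus 20 reports from 500 employees in the control group.
z, p = two_proportion_z(40, 500, 20, 500)
```

With these illustrative numbers the difference would be statistically significant at conventional thresholds; in practice the same comparison would also be used to catch the perverse effect described above, which would show up as a *lower* reporting rate in the trial group.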
Security nudging is one tool for encouraging good security behaviour, and should sit alongside efforts to challenge beliefs and underlying assumptions. These assumptions are key to changing security behaviours ("I wouldn't get phished", "I don't need to stop that person I don't recognise"). As one tool in the box, nudges can be simple and cheap, and can have significant, fast-acting impact when they effectively tap into the decision environment.
Can you think of small changes to choice architecture which could encourage the secure behaviours your organisation needs?