It was around 12 months ago that UK airspace, and that of most of Western Europe, was closed due to the eruption of an unpronounceable volcano in Iceland. You would be forgiven for thinking that someone or something is not happy with Iceland, as a further eruption, this time at a different volcano, has enlivened the debate as to whether it could all happen again – or, from the other side of the fence, whether it poses no threat and the last response was in fact an over-reaction. Who should we believe?
I think it is worth pausing for a moment to consider the role of scientific advice in a crisis. How can we ensure our strategic decision makers are able to make decisions based on the latest science without becoming overwhelmed by the range of hypotheses and data available?
The key issue that strikes me, having seen how scientific advice helps to inform a crisis response at the national level of UK Government, is that scientists operate in a world of probability, whereas senior decision makers want definitive information upon which to base their decisions. Of course there is flexibility: what makes a good strategic decision maker is the ability to give direction based on the best information available at the time. But a crisis forces an acceleration of the normal decision-making process, so there will be less time to consider different scenarios before a decision is required.
The pressure on those providing scientific advice can be immense. Often, the crisis is the first time their views will have been sought by decision makers, and the pressure to provide advice that conforms to the views of senior management must be significant. Is it right that we ask these individuals to provide advice in a way that may be diametrically opposed to their normal approach to scientific study? During a crisis there is no time to robustly test a hypothesis, or to seek peer review from the wider scientific community – why then do we expect these individuals to be definitive with their advice?
I don’t have any immediate answer to these questions. However, perhaps part of the solution would be to involve the scientific community much earlier in the lifecycle of a crisis. By bringing decision makers and scientists together before an event to help define plausible worst-case scenarios, risk assessments are likely to be more realistic in their description of probability and impact, and better aligned with an organisation’s risk tolerance. More crucially, by following this approach both parties could share expectations and requirements, making the response a great deal more efficient by removing the time required for a discourse on defining scenarios. Of course a decision will always need to be made with the best information available at the time – the challenge in many crises for decision makers is interpreting the myriad of information available to understand which option presents the least-worst outcome.
By the way, we will be at the CIR Awards this Wednesday (25 May) supporting our colleague Faye Whitmarsh, who has been short-listed for Business Continuity Management Consultant of the Year. We hope to see you there.