Over the past ten years I have observed a disconnect that I continue to find fascinating.
Many organisations determine their business continuity risk exposures based largely on predicted financial impacts. These calculations may include reputation and other facets, which are in turn estimated as a USD or GBP value. The results help the organisation prioritise its response and often drive investment choices. These are all predictions.
So, if an organisation has gone to the trouble of predicting costs, why doesn't it validate those predictions by calculating the actual cost of disruptions when they occur? It is like setting a sales target but never adding up what was sold - you just wouldn't do that.
I have a few ideas as to why this information is not collected. Even though big incidents get attention, management are usually so relieved to have recovered, and so caught up in heralding their success, that the cost of the impact is at best forgotten and at worst deliberately avoided. Smaller but frequent incidents are often ignored, passed over as part of the background challenges that businesses face. Stressed teams assume that calculating the costs will be a mammoth task, so it is put on the "too difficult" pile.
It does not have to be a difficult task. The right approach depends entirely on the organisation, but for some, simple calculations are all that is needed because the order of magnitude matters more than exact figures. For some it may be enough to use: number of employees affected * hours of interruption * average hourly salary = financial impact. For others, average sales during the period of disruption may be the better measure. Or why not reuse the same calculation used to derive the risk assessment rating (assuming it was more than a guess)?
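As a rough sketch of how simple this can be, the back-of-envelope formula above might look like the following (all figures are hypothetical, purely for illustration):

```python
# Back-of-envelope cost of a disruption, using the formula from the text:
# employees affected * hours of interruption * average hourly salary.
# All figures are hypothetical examples, not real incident data.

def disruption_cost(employees: int, hours: float, avg_hourly_salary: float) -> float:
    """Order-of-magnitude financial impact of an outage."""
    return employees * hours * avg_hourly_salary

# Example: 200 employees idle for 3 hours at an average of $40/hour.
impact = disruption_cost(employees=200, hours=3, avg_hourly_salary=40)
print(f"Estimated impact: ${impact:,.0f}")  # Estimated impact: $24,000
```

The point is not precision; a calculation this crude is still enough to reveal whether an incident cost thousands or millions.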
There are good reasons why I advocate this calculation. In my experience, one company that faced frequent but short power outages began adding up their cost and was able to compare the total to the cost of commissioning a new substation to provide a more stable supply. Another was able to examine frequent IT system failures (admittedly using a slightly more complex calculation that applied a percentage productivity impact, since they didn't lose all functionality) and determined that the cost of new equipment to prevent the failures was a fraction of the annual lost productivity. Building resilience and adding preventative measures are basic tools of successful Business Continuity / Disaster Recovery Management, and this information allows the organisation to better recognise the savings they deliver.
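The second example above extends the basic formula with a productivity-loss factor and multiplies by incident frequency, so the annual figure can be set against a one-off investment. A minimal sketch, with entirely hypothetical numbers:

```python
# Comparing accumulated disruption costs against a preventative
# investment. A productivity_loss factor covers partial outages
# where not all functionality is lost. All figures are hypothetical.

def partial_outage_cost(employees: int, hours: float,
                        avg_hourly_salary: float,
                        productivity_loss: float) -> float:
    """Cost of an incident where only a fraction of productivity is lost."""
    return employees * hours * avg_hourly_salary * productivity_loss

# Say a system fails about 25 times a year, affecting 150 staff for
# 2 hours each time at 30% reduced productivity and $35/hour.
annual_cost = 25 * partial_outage_cost(150, 2, 35, 0.30)
new_equipment_cost = 40_000  # hypothetical one-off investment

print(f"Annual lost productivity: ${annual_cost:,.0f}")   # $78,750
print(f"Preventative investment:  ${new_equipment_cost:,.0f}")  # $40,000
```

With numbers like these, the investment pays for itself in well under a year - exactly the kind of comparison the calculation makes possible.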
I realise that this suggestion won't change the world, but it should lead to a better understanding of how organisations are affected by real incidents, help inform the risk analysis process, and direct investment to the types of incidents where the cost/benefit analysis yields the best results. I once suggested this approach to a head of BC, and his initial reaction was to recite a long list of big companies he had worked at that didn't do the calculation - as if that were reason enough to keep his head in the sand, too scared to look. Others accept the idea more readily. I recall the first time I presented executive management with a calculation showing lost productivity from a major incident in excess of $1 million - the executive's expression was priceless.