Your organization has a clear mission and a solid strategy, plus a comprehensive risk management program. However, people are constantly reacting to the effects of uncertainty, and even after years of diligent assessment, risk treatment decisions seem arbitrary. Why?
One reason may be a reliance on qualitative analysis.
Global standards such as ISO 31000 and frameworks developed in the public and private sectors provide guidance and techniques. Identifying risk can be time-consuming and complicated, in part because of the sheer volume of business processes and their interdependencies.
However, given time and attention, a typical organization can do a reasonably good job of identifying its risks.
Analyzing the risks that we identify is a different matter. Where identification is a mature process, analysis is a discipline in transition. It is widely understood that what gets measured gets managed; less emphasis is placed on how measurement is done, yet it matters greatly. Across the major standards and publications, we are told that qualitative rankings (such as high, moderate, and low) are at least acceptable as quantification and may even be preferable. Why take time to puzzle out the dollar value of a process or information asset when you can rate it "high"? Why calculate the true probability of a hurricane or a ransomware attack when you can call it "moderate"? That is the approach many organizations take, and it goes a long way toward explaining why all the time, money, and effort put into risk management doesn't seem to keep risk at acceptable levels.
Qualitative assessments are highly effective when the goal is to rank-order something within a context. An expert in a given department can reliably say which assets are of high importance and which are less important, and we can trust that judgment. The problem is not a lack of expertise. The problem is what happens when the context shifts. The high-value assets in one business function may not be as important to the organization as those in another. Even if the loss of the assets in each case would cause their respective processes to fail, this does not mean that they have equal value. The value of what those processes accomplish may be different, and rank-ordering needs to adjust for that.
Of course, higher-level experts might rank-order the processes, and those rankings can then provide a context in which to view the internal qualitative rankings. But this creates its own problems. An asset of high importance to a moderately important process has less value than an asset of high importance in a critical process. What about a moderately important asset in a critical process? What would the real impact be if one were affected over the other?
The ambiguity of qualitative assessment is acceptable in narrow contexts, but it expands as qualitative values are aggregated and brought forward in ways that strip away their origins. Expertise removed from its context becomes biased opinion, making aggregate ratings difficult to interpret even at small scale and reducing them to noise as scale expands across large, complex enterprises. By the time that these values are used to create organizational “stoplight” charts and other strategic management indicators, the odds are in favor of significant risks being overshadowed by the effects of aggregation.
So-called "semi-quantitative" analysis is frequently touted as a compromise. However, assigning numeric values to the assessments does nothing to improve the underlying reality. Ten is higher than one, but does a ranking of ten mean "ten times higher," or are the values merely ordinal? And if ten is meant to be a tenfold factor, has every organizational unit and business team agreed on the scale? Semi-quantitative assessment cloaks opinions in numeric form. For comparison within a context, this can add precision. Stripped of context, the appearance of a mathematical basis promotes false credibility.
To make risk analysis work, we need to value assets the way organizations truly value them. That means thinking in terms of dollars, not qualitative rankings. Most business functions are not direct producers of revenue, but every business function serves a purpose whose loss would carry an annualized dollar impact. Some information assets are essential to those functions, and their value is the value of the functions themselves. Other assets are fractional contributors.
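The valuation logic above can be sketched in a few lines of Python. The function name, the payroll example, and the dollar figures here are hypothetical illustrations, assuming a business function's annualized value is known and each asset contributes either fully or fractionally to it:

```python
# Hypothetical sketch: deriving dollar values for information assets
# from the annualized value of the business function they support.

def asset_value(function_annual_value: float, contribution: float) -> float:
    """Value an asset as a fraction of its function's annual value.

    contribution is 1.0 for an essential asset (its loss halts the
    function) and a fraction for a partial contributor.
    """
    return function_annual_value * contribution

# Example: a payroll function valued at $2M per year (invented figure).
payroll_value = 2_000_000
essential_db = asset_value(payroll_value, 1.0)     # essential asset
reporting_tool = asset_value(payroll_value, 0.15)  # fractional contributor

print(essential_db)    # 2000000.0
print(reporting_tool)  # 300000.0
```

The point of the sketch is that both assets might be rated "high" within their own teams, yet a dollar basis makes their relative weight explicit across contexts.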
Determining the true value of organizational functions and assets may be time-consuming and complicated, but it can be done. It is worth doing because dollar-value quantification provides a basis for real mathematical calculations. When combined with genuine probability estimates from effective vulnerability assessment and threat intelligence programs, true valuation of processes and assets allows organizations to quantify the long-term loss potential for a range of impacts. It also gives them an understanding of both their short-run maximums and key factors that create exposure. The effort put into getting to this point makes risk treatment decisions simple, weighing localized and overall risk appetites against both potential and likely costs in a given period. Proposals for mitigation strategies or new risk-sharing arrangements can be weighed in direct cost-benefit terms.
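One common way to express the "likely costs in a given period" described above is an annualized loss expectancy: the dollar impact of an event multiplied by its estimated annual rate of occurrence. A minimal sketch of the resulting cost-benefit test follows; the scenario, probabilities, and control cost are invented for illustration:

```python
# Hypothetical sketch: annualized loss expectancy (ALE) and a simple
# cost-benefit test for a proposed mitigation.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected annual loss = impact per event x expected events per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Ransomware scenario: $500k impact per event, estimated 0.2 events/year.
current_ale = ale(500_000, 0.2)               # 100000.0

# Proposed control: $40k/year, cuts occurrence rate to 0.05 events/year.
mitigated_ale = ale(500_000, 0.05)            # 25000.0
annual_benefit = current_ale - mitigated_ale  # 75000.0
control_cost = 40_000

# The control is worth considering when its benefit exceeds its cost.
print(annual_benefit > control_cost)          # True
```

This is the direct cost-benefit weighing the paragraph describes: once impacts and probabilities are in dollars and rates, a mitigation proposal reduces to arithmetic rather than a debate over colored labels.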
Be forewarned that a move to quantitative analysis is not a “one and done” approach. Assets, processes, vulnerabilities, and the overall threat picture all change regularly.
What we need is a commitment to continuous value quantification to complement existing programs for continuous monitoring. It will not be simple or easy to do—but it is the path that leads us to the best outcomes for managing risk.