How risk management turned into a ritual

In 1946, mathematicians at Los Alamos developed Monte Carlo simulation to model nuclear reactions under uncertainty. By the 1990s, sophisticated mathematical approaches to risk were winning Nobel Prizes and generating billions in financial markets. In 1998, the collapse of Long-Term Capital Management (LTCM) exposed the limitations of even those models. Today? Many organizations have reduced risk management to colors on a matrix. This isn’t just academic: it costs businesses real money through poorer decisions. I’ve spent a decade helping companies move from ritual back to results, and today I’m sharing what actually works.

From profit calculation to parallel universe

So, how did we get from sharp mathematical tools used to place better bets and safeguard fortunes on risky sea voyages, tools designed explicitly to improve financial outcomes, to a situation where risk management often feels disconnected from the core business of making decisions? The origins were intensely practical. Think about the earliest forms of probability theory applied in gambling houses or by maritime insurers navigating treacherous trade routes. There were no compliance departments demanding risk registers; there was simply the cold, hard calculation of odds and potential losses to make a better wager or set an accurate insurance premium. It wasn’t about documenting risks for posterity; it was about survival and profit, using the best available quantitative methods to understand and navigate uncertainty.

Take those early maritime insurers, for instance. They didn’t just vaguely acknowledge that storms posed a risk to shipping. They actively sought data, however imperfect, on shipping lanes, seasons, vessel types, and historical losses. They used this information to calculate the probability of a ship encountering a catastrophic storm and estimated the potential financial loss if it did. This wasn’t an abstract exercise. The result of these calculations directly determined the premium charged for insuring the voyage. A higher calculated risk meant a higher premium, directly influencing the profitability of the insurer and the cost for the merchant. The quantification wasn’t a sidebar; it *was* the mechanism for making the core business decision – how to price the insurance policy to cover potential losses while remaining competitive. The mathematics served the decision, which in turn served the goal of financial success.
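The insurers’ core calculation can be sketched in a few lines. This is a deliberately simplified illustration, not a reconstruction of any historical pricing method, and all figures (probabilities, cargo values, the 25% loading) are invented for the example:

```python
# Illustrative marine-insurance premium: expected loss plus a loading
# for expenses and profit. All figures are invented for the example.

def premium(p_loss: float, insured_value: float, loading: float = 0.25) -> float:
    """Price a policy as expected loss times (1 + loading)."""
    expected_loss = p_loss * insured_value
    return expected_loss * (1 + loading)

# A riskier route commands a higher premium for the same cargo value:
safe_route = premium(p_loss=0.02, insured_value=10_000)   # calm-season passage
storm_route = premium(p_loss=0.08, insured_value=10_000)  # storm-season passage

print(f"Safe route premium:  {safe_route:,.0f}")
print(f"Storm route premium: {storm_route:,.0f}")
```

The point of the sketch is the direct coupling the paragraph describes: the probability estimate is not filed in a register, it flows straight into the price, which is the business decision.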

This pragmatic, decision-focused approach naturally found fertile ground in the financial sector. Institutions dealing with investments and loans saw the clear value in using sophisticated mathematical tools to improve their own high-stakes choices. When Harry Markowitz developed Modern Portfolio Theory in the 1950s, later extended by William Sharpe into the Capital Asset Pricing Model (CAPM) – work for which Markowitz, Sharpe, and Merton Miller shared the 1990 Nobel Prize in Economics – the objective was clear. These models weren’t theoretical playthings; they provided a quantitative framework for understanding the relationship between risk and expected return, directly informing investment selection and asset allocation decisions. They allowed portfolio managers to make more informed choices about which assets to hold, how to balance risk against potential reward, and how to price financial instruments. It wasn’t perfect – the Long-Term Capital Management crisis in 1998 showed the limitations – but the fundamental principle held: sophisticated quantification was adopted *because* it led to demonstrably better, more profitable strategies. Risk analysis was deeply embedded in the process of making money.
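Markowitz’s central insight can be shown with a two-asset toy example: portfolio risk depends on how assets move together, not just on each asset’s own risk. The numbers below are invented purely to illustrate the standard two-asset formula:

```python
# Two-asset Markowitz illustration: diversification lowers portfolio
# volatility when correlation is imperfect. Figures are invented.
import math

def portfolio_stats(w1, mu1, mu2, sigma1, sigma2, rho):
    """Expected return and standard deviation of a two-asset portfolio."""
    w2 = 1 - w1
    mu = w1 * mu1 + w2 * mu2
    var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * sigma1 * sigma2 * rho
    return mu, math.sqrt(var)

# Two assets, each with 10% expected return and 20% volatility.
# A 50/50 mix keeps the 10% return but cuts volatility below 20%:
mu, sigma = portfolio_stats(0.5, 0.10, 0.10, 0.20, 0.20, rho=0.2)
print(f"Expected return {mu:.1%}, volatility {sigma:.1%}")
```

That drop in volatility at unchanged expected return is exactly the kind of quantified, decision-relevant output the paragraph describes: it tells a manager *what to hold*, not merely that markets are risky.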


But then, as the concepts of risk management began to filter out from these inherently quantitative domains into the broader world of non-financial corporations and government entities, something started to change. The transition wasn’t sudden, but a gradual drift began. Initial drivers often weren’t purely about improving internal decision-making. Instead, external pressures started to subtly shift the focus. Growing regulatory requirements, like those following major corporate scandals, and demands from stock exchanges for better governance disclosures, pushed organizations to *demonstrate* they were managing risk. The audience for risk information began to include auditors, regulators, and boards, whose primary concern was often compliance and oversight rather than the nitty-gritty of optimizing specific operational or strategic decisions.

Compounding this shift was a perception, particularly in non-financial sectors, that their risks were somehow different – fuzzier, less quantifiable, lacking the hard data seen in finance or insurance. Whether it was strategic uncertainty, operational hazards, or project complexities, the argument often surfaced that these areas didn’t lend themselves easily to the rigorous mathematical approaches used elsewhere. This perceived difficulty, or perhaps a lack of readily available skills or internal demand for quantification, provided a convenient justification for moving towards more qualitative, descriptive approaches. Listing risks, categorizing them broadly, and discussing them in workshops felt more accessible, even if it lacked the direct link to decision metrics that defined early risk management. The focus began migrating from *improving the decision itself* through analysis to *documenting and reporting on risks* as a separate activity.

The critical point being missed was that the original ‘science’ of risk management derived its immense value precisely *because* it was tightly integrated with the objective of making superior choices under conditions of uncertainty. Calculating the odds wasn’t just an interesting mathematical exercise; it was the basis for making a better bet, setting a viable premium, or constructing a more resilient investment portfolio. The power wasn’t inherent in the tools themselves, but in their direct application to improve the quality and likely outcomes of specific, consequential decisions. What happened when this vital, practical link between analysis and action was weakened or even severed? The effectiveness wasn’t merely reduced; the entire purpose began to warp.

The value wasn’t in the *activity* labeled ‘risk management’, but in how that activity directly informed and improved specific choices – a fundamental principle that was becoming increasingly lost in translation.

The rise of the risk ritual

This widespread adoption in non-financial spheres led to a distinct pattern: the creation of specialized risk management departments, the drafting of elaborate risk management frameworks, and the implementation of processes that operated largely in parallel to the core activities of the business. Instead of uncertainty analysis becoming part of how strategy was set, budgets were built, or projects were planned, it became a separate function, often housed in a different part of the organization, using its own unique set of tools and language. This separation was the first crucial step away from the integrated, decision-focused origins. Into this new parallel universe flowed a host of qualitative tools – the now-ubiquitous risk matrices or heat maps, painting risks in shades of red, yellow, and green; the reliance on subjective rankings like high, medium, and low; and the meticulously maintained standalone risk registers, often residing in spreadsheets or specialized software, completely disconnected from the financial models used for budgeting or the Gantt charts used for project scheduling. These tools offered an appearance of structure and control, easily presentable and seemingly straightforward.

But why do these seemingly logical tools often represent a *failure* to genuinely grapple with uncertainty, potentially creating a dangerous illusion of control? The problems run deep. Take the common risk matrix, typically plotting likelihood against impact using numbered scales or categories. As Douglas W. Hubbard pointed out extensively in his work, these matrices fundamentally misuse mathematics. They treat ordinal rankings – where categories represent an order, like 1st, 2nd, 3rd, or Low, Medium, High – as if they were interval data, where the distance between points is meaningful and consistent (like temperature scales). Assigning a score of ‘5’ for impact doesn’t mean it’s precisely five times worse than a ‘1’, nor is the difference between a ‘4’ and a ‘5’ necessarily the same as between a ‘1’ and a ‘2’. Multiplying these arbitrary scores to get a “risk score” compounds the error, leading to fundamentally flawed prioritizations. Resources might be channeled towards risks appearing ‘red’ on the map, while mathematically more significant threats, perhaps rated ‘medium’ on both scales but with a much wider range of potential negative outcomes, are relatively ignored.

Furthermore, the reliance on qualitative labels like ‘High likelihood’ or ‘Medium impact’ masks the true nature of the uncertainty. What does ‘High’ probability actually mean – 50%? 80%? 99%? What is the financial range of a ‘Medium’ impact? These vague terms are putty in the hands of pervasive cognitive biases, extensively documented by Nobel laureates Daniel Kahneman and Amos Tversky. Our judgments about likelihood and impact are easily swayed by recent events (availability bias), our tendency to seek confirming evidence (confirmation bias), or how the risk is described (framing effect). These qualitative tools don’t mitigate these biases; they often amplify them, leading to assessments based more on gut feel and psychological distortions than on a rational analysis of potential outcomes.

Consider this common scenario: a major capital project, perhaps building a new factory or launching a significant IT system, is proposed. The initial business case relies on optimistic projections for costs, timelines, and benefits. The decision to approve the project moves forward based largely on these optimistic point estimates. *Separately*, perhaps weeks or months later, a risk assessment workshop is held. Participants brainstorm potential risks, rate them using a standard matrix, and produce a colorful heat map. This document might be presented to a steering committee or leadership team, who glance at the distribution of red, yellow, and green squares. But critically, this risk assessment rarely prompts a fundamental re-evaluation of the project’s core financial assumptions or the initial go/no-go decision. The risk exercise happens *after* the key decision, serving as a documentation step rather than an integral input *before* commitment. The optimistic budget and schedule assumptions remain unchallenged by a formal analysis of their potential variance.

Or think about another familiar scene: a company dedicates significant resources – management time, employee hours, potentially external consultant fees – to conducting annual enterprise risk management workshops. Teams diligently populate risk registers, debate likelihood and impact scores, and assign risk owners. These registers are meticulously updated and reported upwards. Yet, when it comes time for crucial strategic decisions – entering a new market, acquiring another company, significantly changing the business model – the process often relies heavily on senior management’s experience, intuition, or strategic vision, with little reference back to the formalized risk register. Key assumptions embedded within the annual budget, like sales growth forecasts or input cost stability, might be simple single-point estimates without any rigorous analysis exploring the range of potential outcomes or the impact of volatility. The risk management process runs on its own track, consuming resources, while the engine of strategic and financial decision-making runs separately, largely uninfluenced by it.

The direct consequence of this disconnect is the rise of ‘risk theater’ – activities that create the appearance of managing risk but do little to actually improve the quality of decisions made under uncertainty. Valuable resources are channeled into bureaucratic exercises: filling templates, attending workshops, generating reports that satisfy compliance checklists or governance requirements. Meanwhile, the real, tangible threats to achieving objectives – the potential variability in the budget (Budget@Risk), the likelihood of missing key deadlines (Schedule@Risk), the range of possible cash flow outcomes (CF@Risk) – remain poorly understood because the analysis isn’t embedded where it matters, within the planning and decision-making processes themselves. Reporting itself morphs into the primary goal. Success becomes measured by the timely submission of the risk register update or the presentation of the heat map, replacing the original, more difficult objective of integrating uncertainty analysis directly into planning cycles, budget formulation, investment appraisals, and strategic choices *before* commitments are made.
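What would embedding the analysis look like in practice? A minimal Budget@Risk sketch replaces single-point line items with ranges and simulates the total; the line items, ranges, and triangular distributions below are invented assumptions for illustration, not a prescribed method:

```python
# Minimal Budget@Risk sketch: Monte Carlo over cost-line ranges instead
# of a single-point budget. Line items and ranges are invented.
import random

random.seed(42)  # reproducible illustration

# (low, most likely, high) cost estimates per line item, in $k
line_items = {
    "construction": (900, 1_000, 1_400),
    "equipment":    (450,   500,   700),
    "IT systems":   (180,   200,   350),
}

# The 'optimistic' plan: just the most-likely estimates added up.
point_budget = sum(ml for _, ml, _ in line_items.values())

N = 100_000
totals = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in line_items.values())
    for _ in range(N)
)

p_overrun = sum(t > point_budget for t in totals) / N
p80 = totals[int(0.8 * N)]  # budget sufficient in 80% of simulated outcomes

print(f"Point budget: ${point_budget:,}k")
print(f"Chance of overrun: {p_overrun:.0%}")
print(f"Budget for 80% confidence: ${p80:,.0f}k")
```

Because the ranges here are skewed upward (as cost estimates usually are), the simulation shows the point budget being exceeded most of the time. That single number, produced *before* approval, challenges the optimistic business case in a way no heat map can.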

This separation isn’t just inefficient bookkeeping; it actively cultivates an environment where poorer decisions are more likely. Organizations fly partially blind, making commitments based on assumptions that haven’t been adequately stress-tested against the inherent uncertainties of the real world. How can organizations break free from this cycle of performative risk management and return to analysis that genuinely informs choices? The comforting ritual of filling matrices and generating reports satisfies procedural and compliance needs, ticking boxes and providing a superficial sense of assurance. But it fundamentally fails to deliver the crucial insights decision-makers require to truly understand and navigate the complex web of uncertainties they face, ultimately leaving significant value exposed and inviting entirely foreseeable failures. Recognizing this gap between ritual and reality is the non-negotiable first step.

The path from ritual back to reality isn’t adding layers; it’s fundamentally reintegrating uncertainty analysis *before* decisions are made. This makes risk management a tool for achieving objectives, not just documenting fears. Take the first step: Download the ‘Guide to Effective Risk Management 3.0’ for practical steps on integrating risk into decision-making and culture. Explore further resources and connect with peers at RISK AWARENESS WEEK: https://2024.riskawarenessweek.com. Stop performing risk rituals and start making risk-based decisions. What’s one key assumption in your next major decision that needs genuine uncertainty analysis before you commit?
