Recently I led a webinar on the lessons from corporate failure for the UK’s Institute of Directors. The financial crisis of 2008 is much in the news as its ten-year anniversary approaches. Yet the failure of Long Term Capital Management almost exactly 20 years ago offers more valuable lessons for corporations of all types.
The first of these is that existential strategic threats are created by both exogenous and endogenous factors, usually in combination. Technology is a fundamental driver of change, fostering the uncertainty that underpins existential threat. In the case of LTCM, technology played a role both outside and within the organisation.
The financial markets of the late 90s had seen exponential growth in the use of derivatives – tradable products, such as interest rate swaps, whose value is derived from underlying assets, such as corporate bonds, rather than from holding those assets directly. This growth created an astronomical number of new linkages in the global financial system, which in turn drove an exponential growth in its complexity. Such complex systems inevitably contain multiple, hard-to-discern feedback loops and often respond in a non-linear way to perturbations, i.e. the whole market may respond in an extreme fashion to relatively modest stimuli.
Across global markets, developments in IT and communications enabled more sophisticated, faster and extremely high-volume derivatives trading to develop. LTCM was an exploiter of these trends – formed by experienced traders and highly respected academics who believed that using sophisticated computer models, backed by the theories of Nobel prize-winning economists, was a sound basis for a winning competitive strategy.
In the face of uncertainty, all strategies are based on assumptions. While no judgement about the future can be proved in the present, some critical assumptions may contain the seeds of strategic failure. In LTCM’s case one of these was the assumption that across the global financial system returns on different asset classes would always be relatively unrelated (uncorrelated), meaning that a wide spread of trades would reduce the risk of failure. This assumption proved to be fatally flawed when it became clear that among LTCM’s counterparties co-investment in all classes created links in times of stress, i.e. discrete asset classes became highly correlated due to increased copying behaviour of investors when uncertainty spiked. This is another example of how the interactions of agents in a complex adaptive system can produce emergent behaviour that is extremely difficult to anticipate.
Another of LTCM’s assumptions, hard-wired into its computer models, was that historical patterns were a reliable predictor of volatility. Moreover, they assumed that for calculation of risk, volatility was normally distributed about the historic mean. This appeared to be reasonable during the early years of LTCM’s existence, when the global markets were in a period of relative stability, but was shown to be catastrophically in error when crises in emerging markets and Russia’s debt default provoked instability in the complex global system. Yet even without these exogenous triggers, LTCM’s assumptions were fatally flawed. They assumed that it was a practical impossibility for the distribution of volatility to deviate from the normal distribution on which they based their risk management calculations. However, LTCM’s models were based on limited data. In fact, volatility beyond the range they assumed had already been observed in 1987 and 1992.
LTCM relied heavily on their computed probability of risk, which, according to their model assumptions, put their maximum possible loss on any single day at $45m, or about 1% of their capital. They assessed a negligible chance of a >40% loss of capital in one month. They were confident that the probability of losing all their capital in one year was 1 in 10^24 – a number so tiny it amounted to saying they did not expect to lose all their capital over a timescale greater than the remaining life of the entire Universe!
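A figure of that magnitude is what a normal-distribution model produces for an event roughly ten standard deviations from the mean. The sketch below is an illustration of that arithmetic, not LTCM's actual model: the ten-sigma threshold is an assumption chosen to show how a Gaussian tail generates such vanishingly small numbers.

```python
# Illustrative sketch (not LTCM's model): the tail probability a normal
# distribution assigns to extreme events. The 10-sigma threshold is an
# assumed stand-in for "losing all capital in one year".
from math import erfc, sqrt

def normal_tail_prob(sigmas: float) -> float:
    """P(Z > sigmas) for a standard normal variable Z."""
    return 0.5 * erfc(sigmas / sqrt(2))

p = normal_tail_prob(10)  # on the order of 10^-23 to 10^-24
```

The danger, as the events below showed, is that real market returns have far fatter tails than the Gaussian, so the true probability of such moves is many orders of magnitude larger than this calculation suggests.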
But on a single day, 21st August 1998, they lost $553m – and in that month they lost $1.9 billion, or 45% of LTCM’s capital. The remainder was lost over the following five weeks.
LTCM made a fundamental error in their risk assessments: assigning quantified probabilistic assessments to uncertainty. As John Maynard Keynes observed as long ago as 1937, “Risk is based on the assumption of a well-defined and constant objective probability distribution which is known and quite possible. Uncertainty has no scientific basis on which to form any calculable probability”.
The LTCM business model was also based on the use of extreme leverage (debt) to produce “supra-normal returns”. Even when the company was reporting its long run of apparently market beating profitability, without the aggressive use of leverage, the underlying returns on capital were only about 1%. Had these truly been “risk adjusted” returns they would have been much, much lower.
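The arithmetic of leverage is simple but unforgiving, and a short sketch makes the point. The 25x leverage ratio and borrow costs below are illustrative assumptions, not LTCM's actual figures: leverage multiplies a thin 1% asset return into a headline-grabbing equity return, but it multiplies losses identically.

```python
# A minimal sketch of how leverage turns a ~1% return on assets into
# "supra-normal" returns on equity. The 25x ratio and borrow costs are
# illustrative assumptions, not LTCM's actual figures.
def return_on_equity(asset_return: float, leverage: float,
                     borrow_cost: float = 0.0) -> float:
    """Equity return when `leverage` dollars of assets back each dollar of equity."""
    return leverage * asset_return - (leverage - 1) * borrow_cost

print(return_on_equity(0.01, 25))   # a 1% asset return becomes 25% on equity
print(return_on_equity(-0.02, 25))  # ...but a 2% asset loss wipes out half the equity
```

The symmetry is the trap: the same multiplier that produces the "supra-normal" upside means a modest adverse move in asset prices can consume the entire capital base.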
This strategy of extreme leverage also dramatically increased the potential existential threat were a liquidity crisis to occur, i.e. the inability to quickly sell assets to raise cash. This threat became reality when the global market sentiment turned against perceived risky assets of all types.
There was also huge asymmetry present in the timescales associated with the strategic risk and uncertainty that LTCM faced. The company built up a large position in so called “Equity Volatility” trades; in essence a bet that volatility on specific equities would regress to the mean over time. However, the time required to realise the potential upside in these trades was up to five years, whereas the timescale for a liquidity crisis to develop was measured in days or weeks.
When a complex system experiences state changes – periods of enhanced volatility – positive feedback loops are always in evidence. In the case of LTCM, one of these worked as follows:
- As prices for risky assets fell, losses on positions in these assets, including derivatives, accelerated, especially as LTCM was very highly leveraged.
- LTCM needed to close out their positions, i.e. sell to raise cash (e.g. for margin calls) on their risky trades.
- Selling became more difficult as the market grew illiquid; potential buyers of LTCM’s positions were themselves selling – they didn’t want more risk, they wanted less.
- Prices of risky assets fell further and faster in an accelerating trend.
- LTCM needed to sell even more, and so on.
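The self-amplifying character of this loop can be sketched in a toy simulation. The initial shock size and amplification factor below are illustrative assumptions, not market data: the point is that with an amplification factor above one, each round of forced selling makes the next price decline larger, producing the accelerating fall described above.

```python
# A toy simulation of the fire-sale feedback loop described above.
# Parameters are illustrative assumptions, not market data: with
# amplification > 1, each round of forced selling deepens the next decline.
def fire_sale_path(price: float, initial_drop: float,
                   amplification: float, rounds: int) -> list:
    """Price path when forced selling amplifies each successive decline."""
    path = [price]
    drop = initial_drop
    for _ in range(rounds):
        price *= (1 - drop)    # this round's price decline
        path.append(price)
        drop *= amplification  # forced selling makes the next drop larger
    return path

# e.g. fire_sale_path(100, 0.05, 1.5, 4): successive declines of
# 5%, 7.5%, 11.25%, 16.9% – falling further and faster each round
```

A loop like this has no internal brake: it runs until the seller is out of assets or an outside actor (in LTCM's case, the Federal Reserve-brokered consortium) intervenes.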
The partners at LTCM believed that the underlying logic of their trading strategy was correct and that the markets were behaving irrationally. LTCM’s partners monitored their own risk – thus they were subject to all the individual and group biases that blinded them to the true extent of the errors in their assumptions, the risks they were assuming due to their business model and strategy, and the nature of the uncertainties arising from the complex system in which they operated.
As Keynes once again famously observed, “The markets can remain irrational for longer than you can remain solvent”. When the end came for LTCM it came with “Stunning swiftness – in mid-August 1998 the partners still believed they were masters of a hugely successful enterprise, capitalized at $3.6bn. It took 5 weeks to lose it all” (from When Genius Failed: The Rise and Fall of Long Term Capital Management, Roger Lowenstein, Fourth Estate, London, 2001).
Lessons for all
It might be tempting to regard the history of LTCM as an extraordinary example – a victim of a unique set of circumstances in a highly specialised sub-sector of a sub-sector of the financial services industry. But that would be a mistake. LTCM’s story contains lessons for decision makers in all businesses, in all sectors, at all times.
The proximate cause of failure at LTCM was running out of cash – nearly always true in corporate failures as cash is literally the lifeblood of business – but the root causes were due to the combination of exogenous and endogenous factors.
Technology development drives economic and market changes that frequently increase global connectivity and hence complexity, bringing with them increased non-linearity and thus greater uncertainty and “whole system” risk. In such systems, volatility is at best normally distributed during periods of relative stability and is never normally distributed during periods of rapid change.
The assumptions made in adopting any given strategy and/or business model can drive strategic risk up significantly. Moreover, such core assumptions are far too often adopted as fact and neither challenged as to their initial veracity nor tested over time to determine how they may be becoming invalid.
Probabilistic calculations applied to uncertainty, especially when subjectively based, are not just wrong but dangerous. The standard “probability times impact” calculations of Risk Registers, when applied to uncertainties and strategic risks, are not only inappropriate but give false assurance that such potential threats are “controlled”.
When an existential threat materialises, it happens with “Stunning swiftness” – it is nearly always a surprise. The possibility of surviving such a threat is significantly limited if the time to the threshold of a terminal event is less than the time it would take to implement mitigating actions. Thus time, not a meaningless probability, is the key parameter to assess in the context of existential strategic risk.
Individual, group and organisational biases will foster “familiarity with imperfect information” – risk blindness. Countering this tendency requires formal board level strategic risk governance processes, preferably independently facilitated to address the unavoidable biases inherent in “self-monitoring”.
For us, one of the most attractive aspects of Meltdown is that authors Chris Clearfield and Andras Tilcsik explicitly confront both complexity and human nature as critical causes of system failure.
They begin with the observation that, as a wide variety of “systems have become more capable, they have also become more complex and less forgiving, creating an environment where small mistakes can turn into massive failures.”
The authors draw on Charles Perrow’s classic 1984 book “Normal Accidents” to explain why this is the case.
As we emphasize in our work at Britten Coyne Partners, Perrow noted how complex systems have many overlapping cause-effect relationships, which in many cases are characterized by non-linearity and/or time delays. In such systems, apparently small causes can have very disproportionate effects.
Perrow further noted how the presence or absence of “tight-coupling” between various parts of a complex system sets the stage for cascading non-linear effects that quickly outpace human operators’ and managers’ ability to understand what is happening and react appropriately.
As Clearfield and Tilcsik observe, system “complexity and coupling create a danger zone, where small mistakes turn into meltdowns.”
One obvious approach is to mitigate this risk by reducing a system’s complexity and loosening the coupling between its component parts (i.e., by adding more slack). Unfortunately, the authors note that, “in recent decades, the world has actually been moving in the opposite direction”, with more systems now operating in the danger zone.
Indeed, as we have often noted, that is one of the key consequences of the dramatic increase in connectivity wrought by the internet, as well as increased automation of many decisions and processes.
After providing painful illustrations of various meltdowns that have occurred in recent years, across a wide range of increasingly complex and tightly coupled systems (from the failure of Knight Capital to Deepwater Horizon), the authors arrive at what for us is the book’s most important conclusion: “As our systems change, so must our ways of managing them.”
Too often, organizations focus on risks that are easy to observe, measure, and manage (e.g., “slips, trips, and falls”), rather than the complex uncertainties that pose the most dangerous threats to their survival.
Clearfield and Tilcsik then review a number of steps organizations can take to reduce the risk of meltdowns. These include solutions that are familiar to Britten Coyne Partners’ clients, such as Pre-Mortem analyses and other techniques for surfacing dissenting views, and for being alert to the advance warnings of disaster that complex systems often provide in the form of anomalies, near misses, and other surprises that we ignore at our peril.
A powerful and widely overlooked point made by the authors is that, "in the age of complex systems, diversity is a great risk management tool." As they note, diverse teams are less likely to be plagued by the dangers of excessive conformity: "in diverse groups, we don’t trust each other’s judgment quite as much…Everyone is more skeptical…Diversity makes us work harder and ask tougher questions.”
We share Clearfield and Tilcsik's final conclusion that putting the solutions they recommend into practice can be tough. Like them, however, we also know from experience that it is not impossible, and we have seen the exceptional benefits that make the extra effort involved well worthwhile.
Unfortunately, far too many people come away from these exercises feeling frustrated that they have generated a laundry list of issues and related opportunities and threats, but little else.
Having been in those shoes many times over the years, at Britten Coyne we set out to develop a better methodology. Our primary goal was to enable clients to develop an integrated mental model that would enable them to maintain a high degree of situation awareness about emerging threats in a dynamically evolving complex adaptive system – i.e., the confusing world in which they must make decisions.
Our starting point was the historical observation that the causes that give rise to strategic threats do not operate in isolation from one another. Rather, they tend to unfold and accumulate in a rough chronological order, albeit with time delays and feedback loops between them that often produce non-linear effects.
As the economist Rudiger Dornbusch famously observed, “a crisis takes a much longer time coming than you think, and then it happens much faster than you would have thought.” The ancient Roman philosopher Seneca observed this same phenomenon far earlier, noting that, “fortune is of sluggish growth but ruin is rapid.”
The following chart highlights that the changes we observe in different areas at any point in time are actually part of a much more complex and integrated change process.
In our experience, you develop situation awareness about the dynamics of this system by asking these three questions about each issue area:
- What are the key trends and uncertainties – i.e., those that could have the largest impact in terms of the threats they could produce (or, from a strategy perspective, the opportunities)?
- What are the key stocks and flows within each area? This system dynamics approach focuses attention on one of the key drivers of non-linear change in complex adaptive systems – accumulating or decumulating stocks (e.g., levels of debt or inequality, or the capability of a technology) that reach and then exceed the system's carrying capacity. While media attention typically focuses on flows (e.g., an annual earnings report or this year’s government deficit), major discontinuities in the state of a complex adaptive system are often caused by stocks reaching a critical threshold or tipping point.
- What are the key feedback loops at work, both within each area, and between them? Positive feedback loops are especially important, as they can cause flows to rapidly accelerate and quickly trigger substantial nonlinear effects both within a given issue area and often across others as well.
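The stock-and-flow and feedback-loop questions above can be combined in a minimal sketch. All names and parameters here are illustrative assumptions: a stock grows via a reinforcing flow (the flow is proportional to the stock itself, a positive feedback loop) until it crosses an assumed critical threshold, producing the kind of discontinuity the text describes.

```python
# A minimal stock-and-flow sketch with a reinforcing (positive feedback)
# loop: the flow is proportional to the stock itself, so the stock
# compounds until it crosses an assumed critical threshold.
# All parameters are illustrative assumptions.
def steps_to_tipping_point(stock: float, growth_rate: float,
                           threshold: float) -> int:
    """Number of periods until the compounding stock first exceeds the threshold."""
    steps = 0
    while stock <= threshold:
        stock *= (1 + growth_rate)  # reinforcing flow proportional to the stock
        steps += 1
    return steps

# e.g. a stock compounding at 7% per period crosses double its
# starting level in 11 periods
```

The practical implication is the one drawn in the text: watching the flow alone (7% growth, period after period) gives little warning, while tracking the stock against its threshold tells you how much time remains.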
With a better (but certainly not perfect) understanding of the key elements in a complex adaptive system, and the relationships between them, you are in a far better position to identify potential discontinuities (and their cascading impacts over time) in order to develop more accurate estimates of how the system of interest could evolve in the future, and the threats and opportunities it could produce.
To be sure, another feature of complex adaptive systems is that due to their evolutionary nature, forecast accuracy tends to decline exponentially as the time horizon lengthens. The good news, however, is that as many authors have shown (e.g., “How Much Can Firms Know?” by Ormerod and Rosewell), even a little bit of foresight advantage can confer substantial benefits when an individual, organization, or nation state is competing in a complex adaptive system.
Question #1: What risks are included? And more important, what's missing?
The Risk Registers I've seen over the years are generally long risks whose likelihood and potential impact are easy to quantify, and short those that are not. Moreover, the easier a risk is to identify (i.e., discrete risk events) and quantify, the easier it is to price and transfer via insurance or financial derivative markets. This, in turn, makes it easy to identify and quantify the impact of risk mitigation options. Unfortunately, the uncertainties that represent true existential threats to companies' survival typically don't meet these tests.
How many Risk Registers include risks to the growth rate and size of served markets, or to the strength of a company's value proposition within those markets, or to the sustainability of its business model's economics, or the health of its innovation processes? Because these are usually true uncertainties, rather than easily quantified risks, too many Risk Registers fail to include them, or do so in a manner that is far too generic.
Question #2: Do Risk Likelihood and Risk Impact capture what really kills companies?
Think about all the stories you've read or heard about how different companies failed. What is perhaps the most common plot line you hear? In our experience, it is this: "They waited too long to act."
This brings us to the most glaring omission from the Risk Register concept: Time Dynamics. Our education and consulting work with clients focuses on three issues: (1) early anticipation of emerging threats; (2) their accurate assessment; and (3) adapting to them in time. We stress the need to estimate the rate at which a new threat is developing, and the time remaining before it reaches a critical threshold.
In light of this, it isn't enough to simply develop "mitigation actions" or "adaptation options." You also need to estimate how long it will take (and how much it will cost) to put them in place, and the likelihood they will be sufficient to adequately respond to the threat (at minimum, this means keeping the company from failing because of the new threat).
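The time-dynamics test this implies can be written down directly. The sketch below is a hedged illustration, not a Britten Coyne tool: the function name, threat-growth model, and parameters are all assumptions, chosen to show the single comparison that matters, whether the mitigation's lead time is shorter than the time remaining before the threat crosses its critical threshold.

```python
# A sketch of the time-dynamics test Risk Registers typically omit:
# a mitigation only helps if it can be implemented before the threat
# reaches its critical threshold. Names, the linear threat-growth model,
# and parameters are illustrative assumptions.
def mitigation_is_viable(threat_level: float, critical_threshold: float,
                         threat_growth_per_period: float,
                         mitigation_lead_time: int) -> bool:
    """True if the mitigation lands before the threat crosses the threshold."""
    periods_remaining = 0
    while threat_level < critical_threshold:
        threat_level += threat_growth_per_period
        periods_remaining += 1
    return mitigation_lead_time < periods_remaining

# A slow-growing threat leaves time to act; a fast-growing one does not,
# even when the mitigation itself is identical.
print(mitigation_is_viable(10, 100, 10, 5))  # True  – threat crosses in 9 periods
print(mitigation_is_viable(10, 100, 30, 5))  # False – threat crosses in 3 periods
```

Note what is absent from this comparison: the probability of the threat occurring. On the argument above, once a threat is developing, time to threshold versus time to respond is the decisive calculation.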
Unfortunately, few Risk Registers tell you anything about time dynamics. Instead, they focus on the likelihood a threat will develop, but usually don't discuss what "develop" means in terms of a specific threshold and time period.
Question #3: Will those mitigation actions really reduce the potential risk impact?
Many Risk Registers significantly reduce the potential negative impact of different risks by netting them against the presumed benefits of various risk mitigation options. This can make them look far less dangerous than they really are.
In too many cases, however, little or no detail is given about how long those mitigation actions will take to implement, how much they will cost, where they stand today, their chances of success, and the range of possible positive impacts they could have if the risk actually materializes. Rather, these Risk Registers blithely assume that the legendary "Risk Mitigation Cavalry" can be counted on to ride over the hill in time to save the day. Too many non-executive directors have learned the hard way that believing in this story without asking tough questions about it can turn out to be a very costly decision.
“Heeded my words not, did you? ‘Pass on what you have learned.’ Strength, mastery, hmm...but weakness, folly, failure, also. Yes, failure, most of all. The greatest teacher, failure is.”
At Britten Coyne Partners, we could not agree more with Yoda. And with that in mind, we offer you this summer reading list of some of our favorite books about failure (from individual to organizational to societal), and the many lessons it can teach us.
· “The Logic of Failure”, by Dietrich Dorner
· “Normal Accidents”, by Charles Perrow
· “Flirting with Disaster”, by Gerstein and Ellsberg
· “The Field Guide to Understanding Human Error”, by Sidney Dekker
· “Meltdown”, by Clearfield and Tilcsik
· “Inviting Disaster”, by James Chiles
· “Why Decisions Fail”, by Paul Nutt
· “Why Most Things Fail”, by Paul Ormerod
· “The Limits of Strategy”, by Ernest von Simson
· “How the Mighty Fall”, by Jim Collins
· “Surprise Attack”, by Richard Betts
· “Surprise Attack”, by Ephraim Kam
· “Pearl Harbor: Warning and Decision”, by Roberta Wohlstetter
· “Why Intelligence Fails”, by Robert Jervis
· “Military Misfortunes”, by Eliot Cohen and John Gooch
· “This Time Is Different”, by Reinhart and Rogoff
· “Irrational Exuberance”, by Robert Shiller
· “Manias, Panics, and Crashes”, by Charles Kindleberger
· “Crashes, Crises, and Calamities”, by Len Fisher
· “The Upside of Down”, by Thomas Homer-Dixon
· “Understanding Collapse”, by Guy Middleton
· “Why Nations Fail”, by Acemoglu and Robinson
· “The Rise and Fall of the Great Powers”, by Paul Kennedy
· “The Rise and Decline of Nations”, by Mancur Olson
· “The Collapse of Complex Societies”, by Joseph Tainter
· “The Seneca Effect”, by Ugo Bardi