Recently I led a webinar on the lessons from corporate failure for the UK’s Institute of Directors. The financial crisis of 2008 is much in the news as its ten-year anniversary approaches, yet the failure of Long-Term Capital Management (LTCM), almost exactly 20 years ago, offers more valuable lessons for corporations of all types.
The first of these is that existential strategic threats are created by both exogenous and endogenous factors, usually in combination. Technology is a fundamental driver of change, fostering the uncertainty that underpins existential threat. In the case of LTCM, technology played a role both outside and within the organisation.
The financial markets of the late 1990s had seen exponential growth in the use of derivatives – tradable products whose value is derived from underlying assets, such as interest rate swaps rather than the corporate bonds themselves. This growth created an astronomical number of new linkages in the global financial system, which in turn drove exponential growth in its complexity. Such complex systems inevitably contain multiple, hard-to-discern feedback loops and often respond in a non-linear way to perturbations, i.e. the whole market may respond in an extreme fashion to relatively modest stimuli.
Across global markets, developments in IT and communications enabled derivatives trading to become more sophisticated, faster and far higher in volume. LTCM was an exploiter of these trends – formed by experienced traders and highly respected academics who believed that sophisticated computer models, backed by the theories of Nobel prize-winning economists, were a sound basis for a winning competitive strategy.
In the face of uncertainty, all strategies are based on assumptions. While no judgement about the future can be proved in the present, some critical assumptions may contain the seeds of strategic failure. In LTCM’s case, one of these was the assumption that returns on different asset classes across the global financial system would always be relatively unrelated (uncorrelated), so that a wide spread of trades would reduce the risk of failure. This assumption proved fatally flawed: co-investment by LTCM’s counterparties across all classes created links in times of stress, i.e. discrete asset classes became highly correlated as investors copied each other’s behaviour when uncertainty spiked. This is another example of how the interactions of agents in a complex adaptive system can produce emergent behaviour that is extremely difficult to anticipate.
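The impact of that correlation assumption can be illustrated with the standard formula for the volatility of an equal-weight portfolio of n assets sharing the same individual volatility and pairwise correlation. This is a generic textbook sketch, not LTCM’s actual model, and the figures below are illustrative assumptions:

```python
import math

def portfolio_vol(n, asset_vol, rho):
    """Volatility of an equal-weight portfolio of n assets that share the same
    individual volatility and the same pairwise correlation rho."""
    return asset_vol * math.sqrt((1 + (n - 1) * rho) / n)

# 100 trades, each with 20% volatility (illustrative numbers)
for rho in (0.0, 0.3, 0.9):
    print(f"rho = {rho:.1f}: portfolio volatility = {portfolio_vol(100, 0.20, rho):.3f}")
```

With 100 uncorrelated trades the portfolio’s volatility is a tenth of any single trade’s; at a correlation of 0.9 the “diversified” book is almost as volatile as one concentrated position. Diversification only works while correlations stay low – precisely the condition that failed in the stressed markets of 1998.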
Another of LTCM’s assumptions, hard-wired into its computer models, was that historical patterns were a reliable predictor of volatility. Moreover, for the calculation of risk they assumed that volatility was normally distributed about the historic mean. This appeared reasonable during the early years of LTCM’s existence, when global markets were in a period of relative stability, but proved catastrophically wrong when crises in emerging markets and Russia’s debt default provoked instability in the complex global system. Yet even without these exogenous triggers, LTCM’s assumptions were fatally flawed. They treated any deviation of volatility from the normal distribution underpinning their risk-management calculations as a practical impossibility. However, their models were based on limited data: volatility beyond the range they assumed had already been observed in 1987 and 1992.
LTCM relied heavily on the computed probability of risk implied by their model assumptions, assessing their maximum possible loss on any single day as $45m, or about 1% of their capital. They assessed the chance of a >40% loss of capital in one month as negligible, and were confident that the probability of losing all their capital in one year was 1 in 10²⁴ – a number so tiny it amounted to saying they did not expect to lose all their capital over a timescale greater than the remaining life of the entire Universe.
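Numbers of that order fall straight out of the normal-distribution assumption. A minimal sketch (illustrative only, not LTCM’s actual model; the 252 trading days per year is an assumed convention) shows how fast normal tail probabilities collapse as the size of the move grows:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

TRADING_DAYS = 252  # assumed trading days per year

for z in (3, 5, 10):
    p = normal_tail(z)
    years = 1 / (p * TRADING_DAYS)  # expected wait, in years, for one such daily move
    print(f"{z}-sigma daily move: p = {p:.1e}, expected once every {years:.1e} years")
```

A 10-sigma daily move carries a normal-model probability of roughly 8 × 10⁻²⁴ – “expected” far less often than once in the life of the Universe. Yet fat-tailed real markets deliver moves of that nominal magnitude within decades, which is exactly why the model’s comfort was false.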
But on a single day, 21 August 1998, they lost $553m – and in that month as a whole they lost $1.9 billion, or 45% of LTCM’s capital. The remainder was lost over the following five weeks.
LTCM made a fundamental error in their risk assessments: assigning quantified probabilities to uncertainty. As John Maynard Keynes observed as long ago as 1937, risk presupposes a well-defined, constant and knowable objective probability distribution, whereas for uncertainty there is “no scientific basis on which to form any calculable probability”.
The LTCM business model was also based on the use of extreme leverage (debt) to produce “supra-normal returns”. Even during the company’s long run of apparently market-beating profitability, the underlying returns on capital without the aggressive use of leverage were only about 1%. Had these been truly risk-adjusted returns, they would have been much, much lower.
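The arithmetic of leverage is simple and brutally symmetrical. The sketch below uses illustrative figures (LTCM’s gearing varied, but was of the order of 25:1; funding costs are set to zero for clarity):

```python
def equity_return(asset_return, leverage, borrow_cost=0.0):
    """Return on equity when assets equal leverage x equity: gains and losses on
    the whole asset base, less funding costs, accrue to the thin equity slice."""
    return leverage * asset_return - (leverage - 1) * borrow_cost

# A ~1% underlying return geared 25:1 becomes a 25% return on capital...
print(equity_return(0.01, 25))
# ...but the same gearing in reverse means a 4% fall in asset values wipes out the equity
print(equity_return(-0.04, 25))
```

The same multiplier that turned a 1% underlying return into headline performance meant that a move of only a few percent against the fund was enough to exhaust its capital.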
This strategy of extreme leverage also dramatically increased the potential existential threat should a liquidity crisis occur, i.e. an inability to sell assets quickly to raise cash. The threat became reality when global market sentiment turned against perceived risky assets of all types.
There was also huge asymmetry in the timescales associated with the strategic risk and uncertainty that LTCM faced. The company built up a large position in so-called “Equity Volatility” trades; in essence a bet that the volatility of specific equities would regress to the mean over time. However, the time required to realise the potential upside of these trades was up to five years, whereas the timescale for a liquidity crisis to develop was measured in days or weeks.
When a complex system experiences state changes – periods of enhanced volatility – positive feedback loops are always in evidence. In the case of LTCM, one of these worked as follows:
- As prices for risky assets fell, losses on positions in these assets, including derivatives, accelerated – especially as LTCM was very highly leveraged.
- LTCM needed to close out positions, i.e. sell to raise cash (e.g. for margin calls) on their risky trades.
- Selling became more difficult as the market turned illiquid; potential buyers of LTCM’s positions were themselves selling – they wanted less risk, not more.
- Prices of risky assets fell further and faster in an accelerating trend.
- LTCM needed to sell even more, and the cycle repeated.
The partners at LTCM believed that the underlying logic of their trading strategy was correct and that the markets were behaving irrationally. LTCM’s partners monitored their own risk, and so were subject to all the individual and group biases that blinded them to the errors in their assumptions, the risks they were assuming through their business model and strategy, and the nature of the uncertainties arising from the complex system in which they operated.
As Keynes once again famously observed, “The markets can remain irrational for longer than you can remain solvent”. When the end came for LTCM it came with “stunning swiftness – in mid-August 1998 the partners still believed they were masters of a hugely successful enterprise, capitalized at $3.6bn. It took 5 weeks to lose it all” (from When Genius Failed: The Rise and Fall of Long-Term Capital Management, Roger Lowenstein, Fourth Estate, London, 2001).
Lessons for all
It might be tempting to dismiss the history of LTCM as an extraordinary example – the victim of a unique set of circumstances in a highly specialised sub-sector of a sub-sector of the financial services industry. That would be a mistake. LTCM holds lessons for decision makers in all businesses, in all sectors, at all times.
The proximate cause of failure at LTCM was running out of cash – nearly always true in corporate failures, as cash is literally the lifeblood of business – but the root causes lay in the combination of exogenous and endogenous factors.
Technology development drives economic and market changes that frequently increase global connectivity and hence complexity, bringing with it increased non-linearity and thus greater uncertainty and “whole system” risk. Volatility is only normally distributed during periods of relative stability of the whole system, and never during periods of rapid change.
The assumptions made in adopting any given strategy and/or business model can drive strategic risk up significantly. Moreover, such core assumptions are far too often adopted as fact – neither challenged as to their initial veracity nor tested over time to determine whether they are becoming invalid.
Probabilistic calculations applied to uncertainty, especially when subjectively based, are not just wrong but dangerous. The standard “probability times impact” calculation of risk registers, when applied to uncertainties and strategic risks, is not only inappropriate but gives false assurance that such potential threats are “controlled”.
When an existential threat materialises, it does so with “stunning swiftness” – it is nearly always a surprise. The chance of surviving such a threat is severely limited if the time to the threshold of a terminal event is less than the time needed to implement mitigating actions. Thus time, not a meaningless probability, is the key parameter to assess in the context of existential strategic risk.
Individual, group and organisational biases foster “familiarity with imperfect information” – risk blindness. Countering this tendency requires formal board-level strategic risk governance processes, preferably independently facilitated, to address the unavoidable biases inherent in “self-monitoring”.