RAND’s New Analysis of Strategic Warning Challenges Facing the Intelligence Community Applies to the Private Sector Too

As our clients know, Britten Coyne Partners’ methodologies draw on a wide variety of sources, including the military and intelligence communities. In this blog post we will summarize important insights from RAND Corporation’s new analysis, “Perspectives and Opportunities in Intelligence for US Leaders.”

Chapter 2 is titled “Reconstituting Strategic Warning for the Digital Age.” RAND begins by noting that, “since 2014, events have raised questions about both the Intelligence Community’s (IC’s) strategic warning effectiveness and the policy community’s understanding of warning and its ability to command action in response.”

They then make a point that we have often made ourselves: “The strategic warning mission is bedeviled by two inherent challenges…In slowly developing situations it is often hard to stimulate action. The second challenge is how to alert policymakers to something that has rarely, if ever, been seen before…The inherent challenge in providing insight to policymakers about future developments is in making sure that warning is heeded, but does not cause undue alarm.”

RAND also cites well-known CIA analyst Jack Davis’ famous quote on another aspect of the warning challenge: “Waiting for evidence the enemy is at the gate usually fails the timeliness test; prediction of potential crises without hard evidence can fail the credibility test.”

In sum, “warning is fundamentally a problem of both sensemaking and effectively communicating insights to busy policymakers – an inherently difficult challenge.”

Today, the warning challenge has become exponentially greater because of the speed with which events often develop in our complex and densely interconnected world.

As RAND notes, today the IC is “frequently confronting rapidly evolving situations that have never been seen before.” Under these conditions, effective “warning is dependent on much more than simply having experts who know the issues associated with the topic. Experts in a field tend to see trends based on what they have seen before, and they have difficulty imagining discontinuities that surprise…Experts are naturally wedded to their previous assessments” (a point also made by Philip Tetlock in his excellent book “Expert Political Judgment: How Good Is It?”).

Given this, RAND concludes that “expertise has to be complemented with a diversity of views,” from internal and external sources. To which we can only add, “Amen.”

It is also unlikely that artificial intelligence and other technologies will ever be able to replace human beings when it comes to sensemaking in highly complex situations, particularly when the challenge is to make accurate estimates of the ways those situations could evolve in the future. At best, such technologies may increasingly be able to augment human cognition.

In summing up the nature of the strategic warning challenge facing the intelligence community, the Director of National Intelligence’s strategic plan states that, “our anticipated strategic environment models closely on chaos theory: initial conditions are key, trends are non-linear, and challenges emerge suddenly due to unpredictable systems behavior…We believe our customers will seek out inputs on what may surprise them, if we are capable of placing such inputs in a larger context and demonstrating rigor in our analytic approach to complexity.”

RAND concludes that “a new warning tradecraft that combines indicators with techniques to test assumptions, identify event drivers, consider plausible alternative explanations and outcomes, and aggregate expert forecasts provides a good foundation, but it must be applied throughout the IC to achieve fundamental change in the current approach to warning.”

We believe that the points raised by RAND are just as applicable to the strategic risk management, governance, and warning challenges faced today by private sector corporations as they are to those facing the Intelligence Community. At Britten Coyne Partners, our mission is to help organizations successfully meet them.

Lessons from the Failure of Long Term Capital Management, 20 Years Later

Recently I led a webinar on the lessons from corporate failure for the UK’s Institute of Directors. The financial crisis of 2008 is much in the news as its ten-year anniversary approaches. Yet the failure of Long Term Capital Management almost exactly 20 years ago offers more valuable lessons for corporations of all types.

Exogenous factors

The first of these is that both exogenous and endogenous factors, usually in combination, create existential strategic threats. Technology is a fundamental driver of change that fosters the uncertainty underpinning existential threats. In the case of LTCM, technology played a role both outside and within the organisation.

The financial markets of the late 90s had seen exponential growth in the use of derivatives – tradable products that derive their value from underlying assets rather than being based directly upon them, such as interest rate swaps rather than corporate bonds. This growth in turn created an astronomical number of new linkages in the global financial system, which in turn drove an exponential growth in its complexity. Such complex systems inevitably contain multiple, hard to discern feedback loops and often respond in a non-linear way to perturbations, i.e. the whole market may respond in an extreme fashion to relatively modest stimuli.

Across global markets, developments in IT and communications enabled more sophisticated, faster, and far higher-volume derivatives trading. LTCM was an exploiter of these trends – formed by experienced traders and highly respected academics who believed that using sophisticated computer models, backed by the theories of Nobel prize-winning economists, was a sound basis for a winning competitive strategy.

Endogenous factors

In the face of uncertainty, all strategies are based on assumptions. While no judgement about the future can be proved in the present, some critical assumptions may contain the seeds of strategic failure. In LTCM’s case, one of these was the assumption that, across the global financial system, returns on different asset classes would always be relatively unrelated (uncorrelated), meaning that a wide spread of trades would reduce the risk of failure. This assumption proved to be fatally flawed when it became clear that, among LTCM’s counterparties, co-investment across asset classes created links in times of stress, i.e. discrete asset classes became highly correlated as investors increasingly copied one another when uncertainty spiked. This is another example of how the interactions of agents in a complex adaptive system can produce emergent behaviour that is extremely difficult to anticipate.
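
To make the force of this assumption concrete, here is a minimal sketch in Python (with invented weights and volatilities, not LTCM’s actual positions) of how the measured risk of a simple two-asset portfolio changes as the correlation between the assets rises towards 1, at which point the benefit of diversification disappears.

```python
import numpy as np

def portfolio_vol(weights, vols, corr):
    """Volatility of a portfolio given asset weights, asset volatilities, and a correlation matrix."""
    cov = np.outer(vols, vols) * corr
    return float(np.sqrt(weights @ cov @ weights))

weights = np.array([0.5, 0.5])   # equal split across two trades (illustrative)
vols = np.array([0.20, 0.20])    # 20% volatility each (illustrative)

for rho in (0.0, 0.5, 0.9, 1.0):
    corr = np.array([[1.0, rho], [rho, 1.0]])
    print(f"correlation {rho:.1f}: portfolio volatility {portfolio_vol(weights, vols, corr):.1%}")

# correlation 0.0 -> ~14.1% (diversification helps)
# correlation 1.0 ->  20.0% (the diversification benefit is gone)
```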

Another of LTCM’s assumptions, hard-wired into its computer models, was that historical patterns were a reliable predictor of volatility. Moreover, they assumed that for calculation of risk, volatility was normally distributed about the historic mean. This appeared to be reasonable during the early years of LTCM’s existence, when the global markets were in a period of relative stability, but was shown to be catastrophically in error when crises in emerging markets and Russia’s debt default provoked instability in the complex global system. Yet even without these exogenous triggers, LTCM’s assumptions were fatally flawed. They assumed that it was a practical impossibility for the distribution of volatility to deviate from the normal distribution on which they based their risk management calculations. However, LTCM’s models were based on limited data. In fact, volatility beyond the range they assumed had already been observed in 1987 and 1992.

LTCM relied heavily on risk probabilities computed under their model assumptions, assessing their maximum possible loss on any single day as $45m, or about 1% of their capital. They assessed the chance of losing more than 40% of their capital in one month as negligible. And they were confident that the probability of losing all their capital within one year was 1 in 10^24 – a number so tiny it amounted to saying they did not expect to lose all their capital over a timescale greater than the remaining life of the entire Universe!
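
A purely illustrative calculation, not a reconstruction of LTCM’s model, shows where numbers of that order come from: under a normal-distribution assumption a very large move looks essentially impossible, while a modestly fat-tailed alternative (here a Student’s t distribution with an arbitrarily chosen 3 degrees of freedom) makes the same move merely rare.

```python
from scipy.stats import norm, t

# Illustrative only: probability of a move worse than 10 standard deviations
# under normal tails versus a fat-tailed Student's t (df=3 chosen arbitrarily).
sigmas = 10
p_normal = norm.sf(sigmas)         # ~7.6e-24, of the same order as the 1-in-10^24 figure above
p_fat_tails = t.sf(sigmas, df=3)   # ~1.1e-3, i.e. rare but entirely conceivable

print(f"P(> {sigmas} sigma), normal tails:     {p_normal:.1e}")
print(f"P(> {sigmas} sigma), Student-t (df=3): {p_fat_tails:.1e}")
```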

But on a single day, August 21st 1998, they lost $553m – and in that month lost $1.9 billion, or 45% of LTCM’s capital. The remainder was subsequently lost in five more weeks.

LTCM made a fundamental error in their risk assessments – assigning quantified probabilistic assessments to uncertainty. As John Maynard Keynes observed as long ago as 1937: “Risk is based on the assumption of a well-defined and constant objective probability distribution which is known and quite possible. Uncertainty has no scientific basis on which to form any calculable probability.”

The LTCM business model was also based on the use of extreme leverage (debt) to produce “supra-normal returns”. Even during the company’s long run of apparently market-beating profitability, the underlying returns on capital, before the aggressive use of leverage, were only about 1%. Had these truly been “risk-adjusted” returns, they would have been much, much lower.

This strategy of extreme leverage also dramatically increased the potential existential threat should a liquidity crisis occur, i.e. the inability to quickly sell assets to raise cash. That threat became reality when global market sentiment turned against all types of assets perceived as risky.
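
The arithmetic of leverage is worth making explicit. The sketch below uses round, invented figures rather than LTCM’s actual balance sheet, but it shows how leverage of roughly 25:1 turns a ~1% return on assets into a ~25% return on equity – and how an equally modest adverse move destroys the equity entirely.

```python
def return_on_equity(asset_return, leverage, funding_cost=0.0):
    """Return on equity when each unit of equity supports `leverage` units of assets,
    the excess financed with debt at `funding_cost` (set to 0 here for simplicity)."""
    return leverage * asset_return - (leverage - 1) * funding_cost

leverage = 25  # illustrative; commonly cited figures put LTCM's leverage in roughly this range
for asset_return in (0.01, -0.01, -0.04):
    print(f"asset return {asset_return:+.0%} at {leverage}:1 leverage "
          f"-> equity return {return_on_equity(asset_return, leverage):+.0%}")

# +1% on assets -> +25% on equity; -4% on assets -> -100% on equity: the capital is gone.
```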

There was also huge asymmetry in the timescales associated with the strategic risk and uncertainty that LTCM faced. The company built up a large position in so-called “Equity Volatility” trades – in essence a bet that volatility on specific equities would regress to the mean over time. However, the time required to realise the potential upside in these trades was up to five years, whereas the timescale for a liquidity crisis to develop was measured in days or weeks.

When a complex system experiences a state change – a period of enhanced volatility – positive feedback loops are always in evidence. In the case of LTCM one of these worked as follows (a toy simulation of the loop follows the list):

  • As prices for risky assets fell, losses on positions in these assets, including derivatives, accelerated, especially as LTCM was very highly leveraged


  • LTCM needed to close out positions, i.e. sell their risky trades to raise cash (e.g. for margin calls)


  • Selling was more difficult as the market became illiquid; potential buyers of LTCM’s positions were themselves selling – they didn’t want more risk, they wanted less


  • Prices in risky assets fell further and faster in an accelerating trend


  • LTCM needed to sell even more, and so on
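
A toy simulation makes the spiral concrete. All of the parameters below are invented for illustration – the code is not calibrated to LTCM or to the 1998 market – but it shows how quickly equity evaporates once forced selling and price impact start reinforcing one another.

```python
# Toy fire-sale feedback loop with invented parameters; not calibrated to LTCM or 1998.
equity = 10.0            # fund capital
target_leverage = 20     # position size the fund tries to hold per unit of equity
position = equity * target_leverage
price_impact = 0.0005    # fractional price decline per unit of assets dumped on an illiquid market

# An initial external shock knocks 2% off the value of the position.
shock_loss = 0.02 * position
equity -= shock_loss
position -= shock_loss

for step in range(1, 6):
    # Deleverage: sell enough to restore the target leverage on the now-smaller equity base...
    forced_sale = max(position - equity * target_leverage, 0.0)
    position -= forced_sale
    # ...but dumping assets into an illiquid market moves prices against the remaining position,
    # creating fresh losses and the need for yet more selling on the next pass through the loop.
    price_drop = price_impact * forced_sale
    loss = price_drop * position
    equity -= loss
    position -= loss
    print(f"step {step}: forced sale {forced_sale:6.1f}, equity remaining {equity:5.2f}")
```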


The partners at LTCM believed that the underlying logic of their trading strategy was correct and that the markets were behaving irrationally. LTCM’s partners monitored their own risk – thus they were subject to all the individual and group biases that blinded them to the errors in their assumptions, to the risks they were assuming through their business model and strategy, and to the nature of the uncertainties arising from the complex system in which they operated.

As Keynes once again famously observed, “The markets can remain irrational for longer than you can remain solvent.” When the end came for LTCM it came with “stunning swiftness – in mid-August 1998 the partners still believed they were masters of a hugely successful enterprise, capitalized at $3.6bn. It took 5 weeks to lose it all” (from When Genius Failed: The Rise and Fall of Long Term Capital Management, Roger Lowenstein, Fourth Estate, London, 2001).

Lessons for all

It might be tempting to view the history of LTCM as an extraordinary example, a victim of a unique set of circumstances in a highly specialised sub-sector of a sub-sector of the financial services industry. But that would be a mistake. LTCM’s story contains lessons for decision makers in all businesses, in all sectors, at all times.

  • The proximate cause of failure at LTCM was running out of cash – nearly always true in corporate failures, as cash is literally the lifeblood of business – but the root causes lay in the combination of exogenous and endogenous factors.


  • Technology development always drives economic and market changes that frequently increase global connectivity and hence complexity, bringing with them increased non-linearity and thus greater uncertainty and “whole system” risk. In such systems, volatility is at best normally distributed during periods of relative stability of the whole system, and never normally distributed during periods of rapid change.


  • The assumptions made in adopting any given strategy and/or business model can drive strategic risk up significantly. Moreover, such core assumptions are far too often adopted as fact and neither challenged as to their initial veracity nor tested over time to determine how they may be becoming invalid.


  • Probabilistic calculations applied to uncertainty, especially when subjectively based, are not just wrong but dangerous. The standard “probability times impact” approach of Risk Registers, when applied to uncertainties and strategic risks, is not only inappropriate but gives false assurance that such potential threats are “controlled”.


  • When an existential threat materialises, it happens with “Stunning swiftness” – it is nearly always a surprise. The possibility of surviving such a threat is significantly limited if the time to the threshold of a terminal event is less than the time it would take to implement mitigating actions. Thus time, not a meaningless probability, is the key parameter to assess in the context of existential strategic risk.


  • Individual, group and organisational biases will foster “familiarity with imperfect information” – risk blindness. Countering this tendency requires formal board level strategic risk governance processes, preferably independently facilitated to address the unavoidable biases inherent in “self-monitoring”.


Review of “Meltdown: Why Our Systems Fail and What We Can Do About It”

Published in March 2018, Meltdown is a book well-worth reading for executives and board directors seeking to better understand why strategic surprises and threats arise faster than ever in today’s economy — and what they can do to better anticipate and adapt to them.

For us, one of the most attractive aspects of Meltdown is that authors Chris Clearfield and András Tilcsik explicitly confront both complexity and human nature as critical causes of system failure.

They begin with the observation that, as a wide variety of “systems have become more capable, they have also become more complex and less forgiving, creating an environment where small mistakes can turn into massive failures.”

The authors draw on Charles Perrow’s classic 1984 book “Normal Accidents” to explain why this is the case.

As we emphasize in our work at Britten Coyne Partners, Perrow noted how complex systems have many overlapping cause-effect relationships, which in many cases are characterized by non-linearity and/or time delays. In such systems, apparently small causes can have very disproportionate effects.

Perrow further noted how the presence or absence of “tight-coupling” between various parts of a complex system sets the stage for cascading non-linear effects that quickly outpace human operators’ and managers’ ability to understand what is happening and react appropriately.

As Clearfield and Tilcsik observe, system “complexity and coupling create a danger zone, where small mistakes turn into meltdowns.”

One obvious approach is to mitigate this risk by reducing a system’s complexity and loosening the coupling between its component parts (i.e., by adding more slack). Unfortunately, the authors note that, “in recent decades, the world has actually been moving in the opposite direction”, with more systems now operating in the danger zone.

Indeed, as we have often noted, that is one of the key consequences of the dramatic increase in connectivity wrought by the internet, as well as increased automation of many decisions and processes.

After providing painful illustrations of various meltdowns that have occurred in recent years, across a wide range of increasingly complex and tightly coupled systems (from the failures of Knight Trading to Deepwater Horizon), the authors arrive at what for us is the book’s most important conclusion: “As our systems change, so must our ways of managing them.”

Too often, organizations focus on risks that are easy to observe, measure, and manage (e.g., “slips, trips, and falls”), rather than the complex uncertainties that pose the most dangerous threats to their survival.

Clearfield and Tilcsik then review a number of steps organizations can take to reduce the risk of meltdowns. These include solutions familiar to Britten Coyne Partners’ clients, such as Pre-Mortem analyses and other techniques for surfacing dissenting views, and for staying alert to the advance warnings of disaster that complex systems often provide in the form of anomalies, near misses, and other surprises we ignore at our peril.

A powerful and widely overlooked point made by the authors is that, "in the age of complex systems, diversity is a great risk management tool." As they note, diverse teams are less likely to be plagued by the dangers of excessive conformity: "in diverse groups, we don’t trust each other’s judgment quite as much…Everyone is more skeptical…Diversity makes us work harder and ask tougher questions.”

We share Clearfield and Tilcsik's final conclusion that putting the solutions they recommend into practice can be tough. Like them, however, we also know from experience that it is not impossible, and we have seen the exceptional benefits that make the extra effort involved well worthwhile.


Getting More Out of PEST/PESTLE/STEEPLED Analyses

There probably isn’t anyone reading this who has not been through a strategic planning exercise that attempts to use a structured approach to brainstorm opportunities and threats in an organization’s external environment. These are known by a variety of acronyms, including PEST (political, economic, socio-cultural, and technological), PESTLE (PEST plus legal and environmental), and STEEPLED (PESTLE plus ethical and demographic issues).

Unfortunately, far too many people come away from these exercises feeling frustrated that they have generated a laundry list of issues and related opportunities and threats, but little else.

Having been in those shoes many times over the years, we set out at Britten Coyne to develop a better methodology. Our primary goal was to enable clients to develop an integrated mental model that would allow them to maintain a high degree of situation awareness about emerging threats in a dynamically evolving complex adaptive system – i.e., the confusing world in which they must make decisions.

Our starting point was the historical observation that the causes that give rise to strategic threats do not operate in isolation from one another. Rather, they tend to unfold and accumulate in a rough chronological order, albeit with time delays and feedback loops between them that often produce non-linear effects.

As the economist Rudiger Dornbusch famously observed, “a crisis takes a much longer time coming than you think, and then it happens much faster than you would have thought.” The ancient Roman philosopher Seneca observed this same phenomenon far earlier, noting that, “fortune is of sluggish growth but ruin is rapid.”

The following chart highlights that the changes we observe in different areas at any point in time are actually part of a much more complex and integrated change process.

[Chart: the dynamics of the integrated change process]


In our experience, you develop situation awareness about the dynamics of this system by asking these three questions about each issue area:

  1. What are the key trends and uncertainties – i.e., those that could have the largest impact in terms of the threats they could produce (or, from a strategy perspective, the opportunities)?

  2. What are the key stocks and flows within each area? This systems dynamics approach focuses attention on one of the key drivers of non-linear change in complex adaptive systems – accumulating or decumulating stocks (e.g., levels of debt or inequality, or the capability of a technology) that reach and then exceed the system's carrying capacity. While media attention typically focuses on flows (e.g., an annual earnings report or this year’s government deficit), major discontinuities in the state of a complex adaptive system are often caused by stocks reaching a critical threshold or tipping point.

  3. What are the key feedback loops at work, both within each area and between them? Positive feedback loops are especially important, as they can cause flows to rapidly accelerate and quickly trigger substantial nonlinear effects both within a given issue area and often across others as well (a minimal sketch of these dynamics follows this list).
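
As a minimal illustration of the stock-and-flow point (all parameters are invented), the sketch below shows a stock growing through a flow that is proportional to the stock itself – a reinforcing feedback loop. The change looks unremarkable for years and then blows through the assumed carrying capacity, the slow-then-sudden pattern Dornbusch and Seneca described above.

```python
# Minimal stock-and-flow sketch with a reinforcing feedback loop (illustrative parameters only).
stock = 1.0               # e.g., accumulated debt, or the capability of a technology
growth_rate = 0.25        # the flow each period is proportional to the existing stock
carrying_capacity = 50.0  # an assumed threshold beyond which the wider system changes state

for year in range(1, 21):
    flow = growth_rate * stock   # reinforcing loop: more stock -> bigger flow -> more stock
    stock += flow
    note = "  <-- threshold crossed" if stock > carrying_capacity else ""
    print(f"year {year:2d}: flow {flow:6.2f}, stock {stock:6.2f}{note}")
```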

With a better (but certainly not perfect) understanding of the key elements in a complex adaptive system, and the relationships between them, you are in a far better position to identify potential discontinuities (and their cascading impacts over time) in order to develop more accurate estimates of how the system of interest could evolve in the future, and the threats and opportunities it could produce.

To be sure, another feature of complex adaptive systems is that, due to their evolutionary nature, forecast accuracy tends to decline exponentially as the time horizon lengthens. The good news, however, is that as many authors have shown (e.g., “How Much Can Firms Know?” by Ormerod and Rosewell), even a little bit of foresight advantage can confer substantial benefits when an individual, organization, or nation state is competing in a complex adaptive system.


Questions for Audit Committees About Your Risk Register

Once upon a time, I was a CFO who dutifully prepared and updated our company's Risk Register and reviewed it with our Audit Committee. Over time, however, I came to appreciate this tool's shortcomings as well as its strengths. Indeed, this could also be said for our overall Enterprise Risk Management process. I've now had six years to further reflect on and write about these issues here at Britten Coyne Partners as we've interacted with our clients and conducted additional research. I've concluded that there are some important questions that Audit Committees need to ask about their Risk Registers, which can lead to discussions that produce deeper and more important insights about risk governance.

Question #1: What risks are included? And more important, what's missing?

The Risk Registers I've seen over the years are generally long on risks whose likelihood and potential impact are easy to quantify, and short on those that are not. Moreover, the easier a risk is to identify (i.e., discrete risk events) and quantify, the easier it is to price and transfer via insurance or financial derivative markets, and the easier it is to identify and quantify the impact of risk mitigation options. Unfortunately, the uncertainties that represent true existential threats to a company's survival typically don't meet these tests.

How many Risk Registers include risks to the growth rate and size of served markets, or to the strength of a company's value proposition within those markets, or to the sustainability of its business model's economics, or the health of its innovation processes? Because these are usually true uncertainties, rather than easily quantified risks, too many Risk Registers fail to include them, or do so in a manner that is far too generic.

Question #2: Do Risk Likelihood and Risk Impact capture what really kills companies?

Think about all the stories you've read or heard about how different companies failed. What is perhaps the most common plot line you hear? In our experience, it is this: "They waited too long to act."

This brings us to the most glaring omission from the Risk Register concept: Time Dynamics. Our education and consulting work with clients focuses on three issues: (1) early anticipation of emerging threats; (2) their accurate assessment; and (3) adapting to them in time. We stress the need to estimate the rate at which a new threat is developing, and the time remaining before it reaches a critical threshold.

In light of this, it isn't enough to simply develop "mitigation actions" or "adaptation options." You also need to estimate how long it will take (and how much it will cost) to put them in place, and the likelihood they will be sufficient to adequately respond to the threat (at minimum, this means keeping the company from failing because of the new threat).

Unfortunately, few Risk Registers tell you anything about time dynamics. Instead, they focus on the likelihood a threat will develop, but usually don't discuss what "develop" means in terms of a specific threshold and time period.
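
By way of illustration only, here is a hedged sketch of what adding time dynamics to a Risk Register entry might look like. The field names and figures are hypothetical – they are not drawn from any real register – and the decisive comparison is between the estimated time before a threat crosses its critical threshold and the estimated time needed to put a sufficient response in place.

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    """Hypothetical Risk Register entry extended with time dynamics (illustrative fields only)."""
    name: str
    months_to_critical_threshold: float   # estimated time before the threat reaches a terminal level
    months_to_implement_response: float   # estimated time to put a sufficient response in place

    @property
    def safety_margin_months(self) -> float:
        return self.months_to_critical_threshold - self.months_to_implement_response

threats = [
    ThreatEntry("Substitute technology erodes core product", 18, 30),
    ThreatEntry("Growth stalls in key served market", 36, 12),
]

for threat in threats:
    if threat.safety_margin_months < 0:
        status = "ACT NOW: the response takes longer than the time remaining"
    else:
        status = f"{threat.safety_margin_months:.0f} months of margin"
    print(f"{threat.name}: {status}")
```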

Question #3: Will those mitigation actions really reduce the potential risk impact?

Many Risk Registers significantly reduce the potential negative impact of different risks by netting them against the presumed benefits of various risk mitigation options. This can make them look far less dangerous than they really are.

In too many cases, however, little or no detail is given about how long those mitigation actions will take to implement, how much they will cost, where they stand today, their chances of success, and the range of possible positive impacts they could have if the risk actually materializes. Rather, these Risk Registers blithely assume that the legendary "Risk Mitigation Cavalry" can be counted on to ride over the hill in time to save the day. Too many non-executive directors have learned the hard way that believing in this story without asking tough questions about it can turn out to be a very costly decision.