Getting More Out of PEST/PESTLE/STEEPLED Analyses

There probably isn’t anyone reading this who has not been through a strategic planning exercise that attempts to use a structured approach to brainstorm opportunities and threats in an organization’s external environment. These exercises are known by a variety of acronyms, including PEST (political, economic, socio-cultural, and technological), PESTLE (PEST plus legal and environmental), and STEEPLED (PESTLE plus ethical and demographic issues).

Unfortunately, far too many people come away from these exercises feeling frustrated that they have generated a laundry list of issues and related opportunities and threats, but little else.

Having been in those shoes many times over the years, at Britten Coyne we set out to develop a better methodology. Our primary goal was to help clients develop an integrated mental model that would enable them to maintain a high degree of situation awareness about emerging threats in a dynamically evolving complex adaptive system – i.e., the confusing world in which they must make decisions.

Our starting point was the historical observation that the causes that give rise to strategic threats do not operate in isolation from one another. Rather, they tend to unfold and accumulate in a rough chronological order, albeit with time delays and feedback loops between them that often produce non-linear effects.

As the economist Rudiger Dornbusch famously observed, “a crisis takes a much longer time coming than you think, and then it happens much faster than you would have thought.” The ancient Roman philosopher Seneca observed this same phenomenon far earlier, noting that, “fortune is of sluggish growth but ruin is rapid.”

The following chart highlights that the changes we observe in different areas at any point in time are actually part of a much more complex and integrated change process.

[Chart: how the changes observed in individual issue areas form part of an integrated, dynamically evolving change process]


In our experience, you develop situation awareness about the dynamics of this system by asking these three questions about each issue area:

  1. What are the key trends and uncertainties – i.e., those that could have the largest impact in terms of the threats they could produce (or, from a strategy perspective, the opportunities)?

  2. What are the key stocks and flows within each area? This systems dynamics approach focuses attention on one of the key drivers of non-linear change in complex adaptive systems – accumulating or decumulating stocks (e.g., levels of debt or inequality, or the capability of a technology) that reach and then exceed the system's carrying capacity. While media attention typically focuses on flows (e.g., an annual earnings report or this year’s government deficit), major discontinuities in the state of a complex adaptive system are often caused by stocks reaching a critical threshold or tipping point.

  3. What are the key feedback loops at work, both within each area and between them? Positive feedback loops are especially important, as they can cause flows to rapidly accelerate and quickly trigger substantial non-linear effects, both within a given issue area and often across others as well. (Both of these dynamics are illustrated in the sketch that follows this list.)
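
To make the stock-and-flow and feedback-loop ideas concrete, here is a minimal simulation sketch. It is purely illustrative: the model form, growth rate, and tipping point are assumptions chosen for clarity, not part of any client analysis. A stock fed by a reinforcing (positive) feedback loop grows slowly for decades and then crosses its critical threshold quite suddenly – the Dornbusch/Seneca pattern noted above.

```python
# A minimal stock-and-flow sketch (illustrative only; the model form,
# growth rate, and tipping point are assumptions chosen for clarity).
# A reinforcing (positive) feedback loop makes the inflow proportional
# to the stock itself, so the stock accumulates slowly at first and
# then crosses the critical threshold quite suddenly.

def simulate_stock(initial_stock=1.0, growth_rate=0.08,
                   tipping_point=50.0, max_years=80):
    """Accumulate a stock under a reinforcing feedback loop and report
    the year (if any) in which it crosses an assumed critical threshold."""
    stock = initial_stock
    for year in range(1, max_years + 1):
        inflow = growth_rate * stock   # the flow is reinforced by the stock itself
        stock += inflow                # the stock accumulates the flow
        if stock >= tipping_point:
            return year, stock
    return None, stock

year, level = simulate_stock()
print(f"Threshold crossed in year {year} (stock = {level:.1f})")
# At 8% growth the stock doubles roughly every 9 years, so about half of
# the total accumulation occurs in the final decade of a ~50-year run:
# "fortune is of sluggish growth, but ruin is rapid."
```

Note that it is the stock, not the annual flow, that reveals how close the system is to its tipping point – which is why question 2 directs attention there.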

With an improved (but certainly not perfect) understanding of the key elements in a complex adaptive system, and of the relationships between them, you are in a far better position to identify potential discontinuities (and their cascading impacts over time), and thus to develop more accurate estimates of how the system of interest could evolve in the future, and of the threats and opportunities it could produce.

To be sure, another feature of complex adaptive systems is that, due to their evolutionary nature, forecast accuracy tends to decline exponentially as the time horizon lengthens. The good news, however, is that as many authors have shown (e.g., “How Much Can Firms Know?” by Ormerod and Rosewell), even a little bit of foresight advantage can confer substantial benefits when an individual, organization, or nation state is competing in a complex adaptive system.


Questions for Audit Committees About Your Risk Register

Once upon a time, I was a CFO who dutifully prepared and updated our company's Risk Register and reviewed it with our Audit Committee. Over time, however, I came to appreciate this tool's shortcomings as well as its strengths. Indeed, this could also be said for our overall Enterprise Risk Management process. I've now had six years to further reflect on and write about these issues here at Britten Coyne Partners as we've interacted with our clients and conducted additional research. I've concluded that there are some important questions that Audit Committees need to ask about their Risk Registers, which can lead to discussions that produce deeper and more important insights about risk governance.

Question #1: What risks are included? And more important, what's missing?

The Risk Registers I've seen over the years are generally long on risks whose likelihood and potential impact are easy to quantify, and short on those that are not. Moreover, the easier a risk is to identify (i.e., discrete risk events) and quantify, the easier it is to price and transfer via insurance or financial derivative markets. This, in turn, makes it easy to identify and quantify the impact of risk mitigation options. Unfortunately, the uncertainties that represent true existential threats to companies' survival typically don't meet these tests.

How many Risk Registers include risks to the growth rate and size of served markets, or to the strength of a company's value proposition within those markets, or to the sustainability of its business model's economics, or to the health of its innovation processes? Because these are usually true uncertainties, rather than easily quantified risks, too many Risk Registers fail to include them, or do so in a manner that is far too generic.

Question #2: Do Risk Likelihood and Risk Impact capture what really kills companies?

Think about all the stories you've read or heard about how different companies failed. What is perhaps the most common plot line you hear? In our experience, it is this: "They waited too long to act."

This brings us to the most glaring omission from the Risk Register concept: Time Dynamics. Our education and consulting work with clients focuses on three issues: (1) early anticipation of emerging threats; (2) their accurate assessment; and (3) adapting to them in time. We stress the need to estimate the rate at which a new threat is developing, and the time remaining before it reaches a critical threshold.

In light of this, it isn't enough to simply develop "mitigation actions" or "adaptation options." You also need to estimate how long it will take (and how much it will cost) to put them in place, and the likelihood they will be sufficient to adequately respond to the threat (at minimum, this means keeping the company from failing because of the new threat).

Unfortunately, few Risk Registers tell you anything about time dynamics. Instead, they focus on the likelihood a threat will develop, but usually don't discuss what "develop" means in terms of a specific threshold and time period.
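
To make these time dynamics concrete, here is a hypothetical back-of-the-envelope sketch. The exponential growth model, indicator levels, threshold, and mitigation lead time below are all invented numbers; a real register would need estimates grounded in evidence. The point is the comparison: the threat's "runway" versus the time needed to put a response in place.

```python
import math

# Hypothetical sketch: all numbers below are invented for illustration.
# If a threat indicator grows exponentially, estimate how long it has
# until it reaches a critical threshold, then compare that "runway"
# with the lead time needed to implement a mitigation.

def years_to_threshold(current_level, annual_growth_rate, critical_threshold):
    """Years until current_level * (1 + r)**t reaches the threshold."""
    return (math.log(critical_threshold / current_level)
            / math.log(1.0 + annual_growth_rate))

runway = years_to_threshold(current_level=10.0,        # assumed indicator today
                            annual_growth_rate=0.25,   # assumed 25% per year
                            critical_threshold=40.0)   # assumed tipping point
mitigation_lead_time = 4.5                             # assumed years to implement

print(f"Estimated runway:      {runway:.1f} years")
print(f"Mitigation lead time:  {mitigation_lead_time:.1f} years")
print(f"Safety margin:         {runway - mitigation_lead_time:+.1f} years")
```

Even this crude arithmetic surfaces the question most registers never ask: is the safety margin positive, and how confident are we in the growth-rate estimate behind it?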

Question #3: Will those mitigation actions really reduce the potential risk impact?

Many Risk Registers significantly reduce the potential negative impact of different risks by netting them against the presumed benefits of various risk mitigation options. This can make them look far less dangerous than they really are.

In too many cases, however, little or no detail is given about how long those mitigation actions will take to implement, how much they will cost, where they stand today, their chances of success, and the range of possible positive impacts they could have if the risk actually materializes. Rather, these Risk Registers blithely assume that the legendary "Risk Mitigation Cavalry" can be counted on to ride over the hill in time to save the day. Too many non-executive directors have learned the hard way that believing in this story without asking tough questions about it can turn out to be a very costly decision.

Yoda is Right About Failure

In the movie “The Last Jedi”, Yoda utters this wonderful quote to Luke Skywalker:

“Heeded my words not, did you? ‘Pass on what you have learned.’ Strength, mastery, hmm... but weakness, folly, failure, also. Yes, failure, most of all. The greatest teacher, failure is.”

At Britten Coyne Partners, we could not agree more with Yoda. And with that in mind, we offer you this summer reading list of some of our favorite books about failure (from individual to organizational to societal), and the many lessons it can teach us.

·     “The Logic of Failure”, by Dietrich Dorner
·     “Normal Accidents”, by Charles Perrow
·     “Flirting with Disaster”, by Gerstein and Ellsberg
·     “The Field Guide to Understanding Human Error”, by Sidney Dekker
·     “Meltdown”, by Clearfield and Tilcsik
·     “Inviting Disaster”, by James Chiles
·     “Why Decisions Fail”, by Paul Nutt
·     “Why Most Things Fail”, by Paul Ormerod
·     “The Limits of Strategy”, by Ernest von Simson
·     “How the Mighty Fall”, by Jim Collins
·     “Surprise Attack”, by Richard Betts
·     “Surprise Attack”, by Ephraim Kam
·     “Pearl Harbor: Warning and Decision”, by Roberta Wohlstetter
·     “Why Intelligence Fails”, by Robert Jervis
·     “Military Misfortunes”, by Eliot Cohen and John Gooch
·     “This Time Is Different”, by Reinhart and Rogoff
·     “Irrational Exuberance”, by Robert Shiller
·     “Manias, Panics, and Crashes”, by Charles Kindleberger
·     “Crashes, Crises, and Calamities”, by Len Fisher
·     “The Upside of Down”, by Thomas Homer-Dixon
·     “Understanding Collapse”, by Guy Middleton
·     “Why Nations Fail”, by Acemoglu and Robinson
·     “The Rise and Fall of the Great Powers”, by Paul Kennedy
·     “The Rise and Decline of Nations”, by Mancur Olson
·     “The Collapse of Complex Societies”, by Joseph Tainter
·     “The Seneca Effect”, by Ugo Bardi

However Awkward, Boards Need to Confront Overconfident CEOs

In “The Board’s Role in Strategy in a Changing Environment”, Reeves et al from BCG’s Henderson Institute note that in a complex and fast-changing world, “corporate strategy is becoming both more important and increasingly challenging for today’s leaders.” They add that both investors and CEOs are saying that boards need to spend more time on strategy.

We fully agree with these points. However, our research into the relationship between board chairs and CEOs surfaced an even more important point, one that two new research papers have recently reinforced.

In “CEO Overconfidence and the Probability of Corporate Failure”, Leng et al find that, unsurprisingly, increasing CEO overconfidence raises the probability of firm bankruptcy. More interesting was their finding that large boards did more than small boards to reduce the bankruptcy risk associated with overconfident CEOs, while small boards proved more effective when a CEO was not overconfident.

However, another paper makes it clear that restraining an overconfident CEO is something many boards find easier said than done. In “Director Perceptions of Their Board’s Effectiveness, Size and Composition, Dynamics, and Internal Governance”, Cheng et al note that almost all directors reported that the size of their board was “just right”, despite the wide variation in actual board sizes.

Moreover, while directors generally rated their board’s effectiveness highly, the weakest ratings were typically given to their performance in evaluating their CEO. The authors note that “boards seem to see their primary function as providing counsel to, rather than monitoring the CEO.” This finding was backed by some painful director quotes, including:

“We have not been effective in dealing with a highly aggressive CEO”

“Our board has been too slow to move on poorly performing CEOs”

“We put too much trust in the CEO and management team”

To be sure, this is not a new phenomenon. For example, in Berkshire Hathaway’s 1988 Annual Report, Warren Buffett famously observed that “At board meetings, criticism of the CEO’s performance is often viewed as the social equivalent of belching.”

Unfortunately, heightened uncertainty tends to make human beings – including management teams and board directors – more likely to conform to the views of the group, even when it is led by an overconfident CEO. Indeed, in the face of uncertainty, overconfidence often increases in order to keep feelings of confusion and vulnerability at bay. You can see how this can easily trigger social dynamics that lead to organizational crisis and failure.

Challenging an overconfident CEO is never easy. But it is often one of the most critical activities non-executive chairs and directors perform.




Asking the Right Forecasting Questions

During the four years I spent on the Good Judgment Project team, I learned and applied a set of techniques that, as shown in Philip Tetlock's book "Superforecasting", significantly improved forecast accuracy, even in the case of complex socio-technical systems.

Far less noted, however, was the second critical insight from the GJP: the importance of asking the right forecasting questions. Put differently, the real value of a forecast is a function of both the question asked and the accuracy of the answer provided.

This raises the issue of just how an individual or organization should go about deciding on the forecasting questions to ask.

There is no obvious answer.

A rough analogy is to three types of reasoning: inductive, deductive, and abductive. In the case of induction, there are well-known processes to follow when weighing evidence to reach a conclusion (see our previous blog post on this).

Even more well-developed are the rules for logically deducing a conclusion from major and minor premises.

By far the least well codified type of reasoning is abduction — the process of generating plausible causes for an observed or hypothesized effect (sometimes called "inference to the best explanation").

To complete the analogy, we use abduction to generate forecasting questions, which often ask us to estimate the probability that a hypothesized future effect will occur within a specified time frame.

To develop our estimate, we use abduction to generate plausible causal hypotheses of how the effect could occur. We then use deduction to identify high value evidence (i.e., indicators) we would be very likely to observe (or not observe) if the causal hypothesis were true (or false).

After seeking to collect or observe the evidence for and against various hypotheses, we use induction to weigh it and reach a conclusion — which in this example takes the form of our probability estimate.
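
To make this loop concrete, here is a stylized sketch of the induction step, using Bayes' rule to weigh the evidence into a probability estimate. The hypotheses, priors, and likelihoods below are all invented for illustration; they are not drawn from the GJP or from any actual forecast.

```python
# Stylized sketch of the induction step (all hypotheses, priors, and
# likelihoods below are invented for illustration). Abduction supplied
# the competing causal hypotheses; deduction identified an indicator we
# should observe if H1 were true; Bayes' rule now weighs the evidence.

priors = {"H1: causal driver present": 0.30,
          "H2: causal driver absent":  0.70}

# P(indicator observed | hypothesis), deduced from each hypothesis
likelihoods = {"H1: causal driver present": 0.80,
               "H2: causal driver absent":  0.15}

# Suppose we then go looking for the indicator and do observe it.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for hypothesis, probability in posteriors.items():
    print(f"{hypothesis}: {probability:.2f}")
# H1 rises from 0.30 to about 0.70 -- the updated probability estimate
# that answers the forecasting question.
```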

So far, so good. But this raises the question of what guides our abductive reasoning. Taking Judea Pearl's hierarchy of different types of reasoning as a guide, we can identify three approaches to generating forecasting questions.

Pearl's lowest level of reasoning is associational (also sometimes called correlational). Basically, if a set of factors (e.g., observable evidence) existed in the past at the same time as a given effect, we assume that the effect will also occur in the future if the same set of factors is present. Note that there is no assumption here of causation, only statistical association.

Simple historical reasoning provides an example of this: given an important effect that occurs multiple times, we can seek factors common to those cases and use them to formulate forecasting questions related to the potential for the same effect to occur in the future. To be sure, this is an imperfect approach, because the complex systems that produce observed historical outcomes are themselves constantly adapting and evolving. It is for this reason that it is often said that while history seldom repeats, it often rhymes.

At Pearl's next highest level of cognition, we explicitly create mental models or more complex theories that logically link causes to effects. These theories can result from both qualitative and quantitative analysis. As noted above, we can use deduction to predict future effects, assuming a given theory is true and specific causes occur. Hence, different causal theories (usually put forth by different experts) can be used to formulate forecasting questions.

Pearl's highest level of cognition is counterfactual reasoning (e.g., "if I hadn't done that, this wouldn't have happened" or "if I had done that instead, it would have produced this result"). One way to use counterfactual reasoning to generate forecasting questions is via the pre-mortem technique, in which you assume a plan has failed or a forecast has been badly wrong, and ask why this happened, including the evidence you missed and what you could have done differently. The results of pre-mortem analyses are often a rich source of new forecasting questions.

In sum, avoiding strategic failure is as much about taking the time to formulate the right forecasting questions as it is about using methods that enhance the accuracy with which they are answered.


