Yoda is Right About Failure

In the movie “The Last Jedi”, Yoda utters this wonderful quote to Luke Skywalker:

“Heeded my words not, did you? ‘Pass on what you have learned.’ Strength, mastery, hmm... but weakness, folly, failure, also. Yes, failure, most of all. The greatest teacher, failure is.”

At Britten Coyne Partners, we could not agree more with Yoda. And with that in mind, we offer you this summer list of some of our favorite books about failure (from individual to organizational to societal), and the many lessons it can teach us.

·     “The Logic of Failure”, by Dietrich Dorner
·     “Normal Accidents”, by Charles Perrow
·     “Flirting with Disaster”, by Gerstein and Ellsberg
·     “The Field Guide to Understanding Human Error”, by Sidney Dekker
·     “Meltdown”, by Clearfield and Tilcsik
·     “Inviting Disaster”, by James Chiles
·     “Why Decisions Fail”, by Paul Nutt
·     “Why Most Things Fail”, by Paul Ormerod
·     “The Limits of Strategy”, by Ernest von Simson
·     “How the Mighty Fall”, by Jim Collins
·     “Surprise Attack”, by Richard Betts
·     “Surprise Attack”, by Ephraim Kam
·     “Pearl Harbor: Warning and Decision”, by Roberta Wohlstetter
·     “Why Intelligence Fails”, by Robert Jervis
·     “Military Misfortunes”, by Eliot Cohen and John Gooch
·     “This Time Is Different”, by Reinhart and Rogoff
·     “Irrational Exuberance”, by Robert Shiller
·     “Manias, Panics, and Crashes”, by Charles Kindleberger
·     “Crashes, Crises, and Calamities”, by Len Fisher
·     “The Upside of Down”, by Thomas Homer-Dixon
·     “Understanding Collapse”, by Guy Middleton
·     “Why Nations Fail”, by Acemoglu and Robinson
·     “The Rise and Fall of the Great Powers”, by Paul Kennedy
·     “The Rise and Decline of Nations”, by Mancur Olson
·     “The Collapse of Complex Societies”, by Joseph Tainter
·     “The Seneca Effect”, by Ugo Bardi

However Awkward, Boards Need to Confront Overconfident CEOs

In “The Board’s Role in Strategy in a Changing Environment”, Reeves et al. from BCG’s Henderson Institute note that in a complex and fast-changing world, “corporate strategy is becoming both more important and increasingly challenging for today’s leaders.” They note that both investors and CEOs are saying that boards need to spend more time on strategy.

We fully agree with these points. However, our research into the relationship between board chairs and CEOs has raised an even more important point, one underscored by two recent research papers.

In “CEO Overconfidence and the Probability of Corporate Failure”, Leng et al. find that, unsurprisingly, increasing CEO overconfidence raises the probability of firm bankruptcy. More interesting was their finding that large boards were more effective than small boards at reducing the bankruptcy risk associated with overconfident CEOs, while small boards proved more effective when the CEO was not overconfident.

However, another paper makes it clear that restraining an overconfident CEO is something many boards find easier said than done. In “Director Perceptions of Their Board’s Effectiveness, Size and Composition, Dynamics, and Internal Governance”, Cheng et al. note that almost all directors reported that the size of their board was “just right”, despite the wide variation in actual board size.

Moreover, while directors generally rated their board’s effectiveness highly, the weakest ratings were typically given to their performance in evaluating their CEO. The authors note that “boards seem to see their primary function as providing counsel to, rather than monitoring the CEO.” This finding was backed by some painful director quotes, including:

“We have not been effective in dealing with a highly aggressive CEO”

“Our board has been too slow to move on poorly performing CEOs”

“We put too much trust in the CEO and management team”

To be sure, this is not a new phenomenon. For example, in Berkshire Hathaway’s 1988 Annual Report, Warren Buffett famously observed that “At board meetings, criticism of the CEO’s performance is often viewed as the social equivalent of belching.”

Unfortunately, heightened uncertainty tends to make human beings – including management teams and board directors – more likely to conform to the views of the group, even when it is led by an overconfident CEO. Indeed, in the face of uncertainty, overconfidence often increases in order to keep feelings of confusion and vulnerability at bay. You can see how this can easily trigger social dynamics that lead to organizational crisis and failure.

Challenging an overconfident CEO is never easy. But it is often one of the most critical activities non-executive chairs and directors perform.




Asking the Right Forecasting Questions

During the four years I spent on the Good Judgment Project team, I learned and applied a set of techniques that, as shown in Philip Tetlock's book "Superforecasting", significantly improved forecast accuracy, even in the case of complex socio-technical systems.

Far less noted, however, was the second critical insight from the GJP: the importance of asking the right forecasting questions. Put differently, the real value of a forecast is a function of both the question asked and the accuracy of the answer provided.

This raises the issue of just how an individual or organization should go about deciding on the forecasting questions to ask.

There is no obvious answer.

A rough analogy can be drawn to the three classic types of reasoning: inductive, deductive, and abductive. In the case of induction, there are well-known processes to follow when weighing evidence to reach a conclusion (see our previous blog post on this).

Even more well-developed are the rules for logically deducing a conclusion from major and minor premises.

By far the least well codified type of reasoning is abduction — the process of generating plausible causes for an observed or hypothesized effect (sometimes called "inference to the best explanation").

To complete the analogy, we use abduction to generate forecasting questions, which often ask us to estimate the probability that a hypothesized future effect will occur within a specified time frame.

To develop our estimate, we use abduction to generate plausible causal hypotheses of how the effect could occur. We then use deduction to identify high value evidence (i.e., indicators) we would be very likely to observe (or not observe) if the causal hypothesis were true (or false).

After seeking to collect or observe the evidence for and against various hypotheses, we use induction to weigh it and reach a conclusion — which in this example takes the form of our probability estimate.

So far, so good. But this raises the question of what guides our abductive reasoning. Drawing on Judea Pearl's hierarchy of types of reasoning, we can identify three approaches to generating forecasting questions.

Pearl's lowest level of reasoning is associational (also sometimes called correlational). Basically, if a set of factors (e.g., observable evidence) existed in the past at the same time as a given effect, we assume that the effect will also occur in the future if the same set of factors exist or occur. Note that there is no assumption here of causation; only statistical association.

Simple historical reasoning provides an example of this: given an important effect that has occurred multiple times, we can look for factors common to those cases and use them to formulate forecasting questions about the potential for the same effect to occur in the future. To be sure, this is an imperfect approach, because the complex systems that produce observed historical outcomes are themselves constantly adapting and evolving. It is for this reason that it is often said that while history seldom repeats, it often rhymes.
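As a minimal sketch of this factor-matching approach, the snippet below finds factors common to a set of historical cases in which an effect of interest occurred and turns each one into a candidate forecasting question. The cases, factors, and question wording are purely illustrative assumptions.

```python
# A minimal sketch of the associational approach: find factors common to
# historical cases in which an effect of interest occurred, then turn each
# into a candidate forecasting question. All cases, factors, and question
# wording below are purely illustrative.

historical_cases = {
    "case_1": {"rapid credit growth", "rising leverage", "new entrants"},
    "case_2": {"rapid credit growth", "rising leverage", "regulatory easing"},
    "case_3": {"rising leverage", "rapid credit growth", "asset price boom"},
}

# Factors present in every historical case where the effect occurred
common_factors = set.intersection(*historical_cases.values())

# Draft one candidate forecasting question per common factor
for factor in sorted(common_factors):
    print(f"Will we observe '{factor}' in our market within the next 12 months?")
```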

Pearl's next highest level of cognition involves explicitly creating mental models or more complex theories that logically link causes to effects. These theories can result from both qualitative and quantitative analysis. As noted above, we can use deduction to predict future effects, assuming a given theory is true and specific causes occur. Hence, different causal theories (usually put forth by different experts) can be used to formulate forecasting questions.

Pearl's highest level of cognition is counterfactual reasoning (e.g., "if I hadn't done that, this wouldn't have happened" or "if I had done that instead, it would have produced this result"). One way to use counterfactual reasoning to generate forecasting questions is via the pre-mortem technique, in which you assume a plan has failed or a forecast has been badly wrong, and ask why this happened, including the evidence you missed and what you could have done differently. The results of pre-mortem analyses are often a rich source of new forecasting questions.

In sum, avoiding strategic failure is as much about taking the time to formulate the right forecasting questions as it is about using methods to enhance the accuracy with which they are answered.




New Research: "Is Belief Superiority Justified by Superior Knowledge?"

The title of this post is taken from a recently published research paper by Michael Hall and Kaitlin Raimi. The authors observe that it is not hard to find people who exhibit "the belief that their own views are more correct than other viewpoints", or "belief superiority." They also note a strong positive correlation between belief superiority and people's degree of confidence that they are correct. An important distinction is that belief superiority results from a comparative judgment, while confidence reflects the strength of one's convictions.

The focus of Hall and Raimi's research was the extent to which individuals' views about the superiority of their beliefs were justified. As they note, "for a belief to be superior — or more correct — than other beliefs, it should have a superior basis in relevant factual information. Following this logic, belief-superior individuals should possess more accurate knowledge than their more modest peers, or at least better recognize relevant facts when presented with them."

Confronted with this research question, and calling to mind people they have encountered who exhibit belief superiority, most readers probably have a strong intuition about what the authors' research found:

"Belief superior people exhibited the greatest gaps between their perceived and actual knowledge."

However, Hall and Raimi go on to note that, "even if belief superiority is not supported by superior knowledge, belief superiority could be justified by another process: Superior knowledge acquisition. That is, they may seek out information on a topic in an even-handed manner that exposes them to a diversity of viewpoints. As a result, their belief superiority may reflect a reasoned conclusion after comparing multiple viewpoints."

Unsurprisingly, that is not what the authors found. Instead, "belief superior people were most likely to exhibit a preference for information that supported their pre-existing views."

In sum, "belief superior people are not only the least likely to recognize their own knowledge shortcomings, but also the least likely to remedy them."

In our research into the root causes of corporate failure, we have frequently noted that organizational failures to anticipate threats, accurately assess them, and adapt to them in time are driven by fundamental individual and group cognitive and emotional factors that are extremely difficult to change, because over our long evolutionary past they conferred advantages in survival, resource acquisition, and mating.

Hall and Raimi's research findings are yet another example of the deeply rooted individual factors, frequently reinforced by group processes, that lie at the root of corporate failure.

As we repeatedly emphasize, the chances of altering these factors through training or incentives are somewhere between slim and none. Instead, organizations' best hope for survival rests on designing processes, systems, and structures that deliberately seek to offset individual and group factors' predictably negative effects.


Three Techniques for Weighing Evidence to Reach a Conclusion

In a radically uncertain world, the ability to systematically weigh evidence to reach a justifiable conclusion is undoubtedly a critical skill. Unfortunately, it is one that too many schools fail to teach. Hence this short note, which will cover some basic aspects of evidence, and quickly review three approaches to weighing it.

Evidence has been defined as “any factual datum which in some manner assists in drawing conclusions, either favorable or unfavorable, regarding a hypothesis.”

Broadly, there are at least four types of evidence:

  • Corroborating: Two or more sources report the same information, or one source reports the information and another attests to the first’s credibility;

  • Convergent: Two or more sources provide information about different events, all of which support the same hypothesis;

  • Contradictory: Two or more pieces of information are mutually exclusive and cannot both (or all) be true;

  • Conflicting: Pieces of information support different hypotheses but are not mutually exclusive.

Regardless of its type, all evidence has three fundamental properties:

  • Relevance: “Relevant evidence is evidence having any tendency to make [a hypothesis] more or less probable than it would be without the evidence” (from the US Federal Rules of Evidence);

  • Believability: A function of the credibility and competence of the source of the evidence;

  • Probative Force or Weight: The incremental impact of a piece of evidence on the probabilities associated with one or more of the hypotheses under consideration.

There are three systematic approaches to weighing evidence in order to reach a conclusion.

In the 17th century, Sir Francis Bacon developed a method for weighing evidence. Bacon believed the weight of evidence for or against a hypothesis depends on both how much relevant and credible evidence you have, and on how complete your evidence is with respect to matters which you believe are relevant to evaluating the hypothesis.

Bacon recognized that we can be “out on an evidential limb” if we draw conclusions about the probability that a hypothesis is true based on our existing evidence, without also taking into account the number of relevant questions that are still not answered by the evidence in our possession. We typically fill in these gaps with assumptions, about which we have varying degrees of uncertainty.
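As an illustration of this completeness idea (not Bacon's own procedure), the sketch below simply compares the relevant questions we believe matter with the ones our evidence actually answers. The questions and evidence are hypothetical.

```python
# An illustrative sketch (not Bacon's own procedure) of evidential completeness:
# how much of what we believe is relevant does our evidence actually cover?
# The questions and evidence below are hypothetical.

relevant_questions = {
    "Is the market still growing?",
    "Can the incumbent respond quickly?",
    "Is the new technology mature?",
    "Will regulators intervene?",
}

answered_by_evidence = {
    "Is the market still growing?",
    "Is the new technology mature?",
}

coverage = len(answered_by_evidence) / len(relevant_questions)
gaps = relevant_questions - answered_by_evidence

print(f"Evidential coverage: {coverage:.0%}")
print("Questions currently answered only by assumptions:")
for question in sorted(gaps):
    print(f"  - {question}")
```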

In the 18th century, Reverend Thomas Bayes invented a quantitative method for using new information to update a prior degree of belief in the truth of a hypothesis.

“Bayes’ Theorem” says that given new evidence (E), the updated (posterior) belief that a hypothesis is true, p(H|E), equals the conditional probability of observing the evidence given the hypothesis, p(E|H), times the prior probability that the hypothesis is true, p(H), divided by the probability of observing the new evidence, p(E).
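Written compactly, the relationship just described is:

```latex
p(H \mid E) \;=\; \frac{p(E \mid H)\, p(H)}{p(E)}
```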

In qualitative terms, we start with a prior belief in the probability a hypothesis is true or false. When we receive a new piece of evidence, we use it to update our prior probability to a new, posterior probability.

The “likelihood ratio” is a critical concept in this process of Bayesian updating. It is the probability of observing a piece of evidence if a hypothesis is true, divided by the probability of observing the evidence if the hypothesis is false. The greater the likelihood ratio for a piece of new evidence (i.e., the greater its information value), the larger the difference should be between our prior and posterior probabilities that a given hypothesis is true.
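To make this concrete, here is a minimal numeric sketch of Bayesian updating; the probabilities are illustrative assumptions, not data.

```python
# A minimal numeric sketch of Bayesian updating and the likelihood ratio.
# All probabilities below are illustrative assumptions, not data.

prior = 0.20                # p(H): prior probability the hypothesis is true
p_e_given_h = 0.70          # p(E|H): chance of observing the evidence if H is true
p_e_given_not_h = 0.10      # p(E|not-H): chance of observing it if H is false

# The likelihood ratio: how diagnostic is this piece of evidence?
likelihood_ratio = p_e_given_h / p_e_given_not_h   # = 7.0

# p(E), expanded over H and not-H via the law of total probability
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes' theorem: posterior = p(E|H) * p(H) / p(E)
posterior = p_e_given_h * prior / p_e

print(f"Likelihood ratio: {likelihood_ratio:.1f}")
print(f"Prior p(H) = {prior:.2f}  ->  Posterior p(H|E) = {posterior:.2f}")
```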

In the 20th century, Arthur Dempster and Glenn Shafer developed a new theory of evidence.

Assume a set of competing hypotheses. For each of these hypotheses, a new piece of evidence is assigned to one of three categories: (1) It supports the hypothesis; (2) It disconfirms the hypothesis (i.e., it supports “Not-H”); or (3) It neither supports nor disconfirms the hypothesis.

The accumulated and categorized evidence can then be used to calculate a lower bound on the belief that each hypothesis is true (based on the number of pieces of evidence that support it, and the quality of that evidence), as well as an upper bound (equal to one minus the probability that the hypothesis is false, again based on the evidence that disconfirms the hypothesis, and its quality). This upper bound is also known as the plausibility of each hypothesis.

The difference between the upper (plausibility) and lower (belief) probabilities for each hypothesis is the degree of uncertainty associated with it. Hypotheses are then ranked based on their degrees of uncertainty.
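As a simplified illustration of these belief and plausibility bounds for a single hypothesis, consider the sketch below; the evidence "masses" are illustrative assumptions about how much weight the categorized evidence lends to H, to Not-H, and to neither.

```python
# A simplified sketch of belief and plausibility bounds for a single
# hypothesis H, in the spirit of Dempster-Shafer theory. The evidence
# "masses" below are illustrative assumptions.

mass_supports_h = 0.45       # evidence that supports H
mass_supports_not_h = 0.25   # evidence that disconfirms H (supports Not-H)
mass_uncommitted = 0.30      # evidence that supports neither

# The three masses should sum to 1
assert abs(mass_supports_h + mass_supports_not_h + mass_uncommitted - 1.0) < 1e-9

belief_h = mass_supports_h                  # lower bound on belief in H
plausibility_h = 1.0 - mass_supports_not_h  # upper bound: 1 minus belief in Not-H
uncertainty_h = plausibility_h - belief_h   # width of the uncertainty interval

print(f"Belief(H)       = {belief_h:.2f}")
print(f"Plausibility(H) = {plausibility_h:.2f}")
print(f"Uncertainty     = {uncertainty_h:.2f}")
```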

While there are quantitative methods for applying all of these theories, they can also be applied qualitatively, to quickly and systematically produce an initial conclusion about which of a given set of hypotheses is most likely to be true.
