Questions for Audit Committees About Your Risk Register

Once upon a time, I was a CFO who dutifully prepared and updated our company's Risk Register and reviewed it with our Audit Committee. Over time, however, I came to appreciate this tool's shortcomings as well as its strengths. Indeed, this could also be said for our overall Enterprise Risk Management process. I've now had six years to further reflect on and write about these issues here at Britten Coyne Partners as we've interacted with our clients and conducted additional research. I've concluded that there are some important questions that Audit Committees need to ask about their Risk Registers, which can lead to discussions that produce deeper and more important insights about risk governance.

Question #1: What risks are included? And more important, what's missing?

The Risk Registers I've seen over the years are generally long on risks whose likelihood and potential impact are easy to quantify, and short on those that are not. Moreover, the easier a risk is to identify (i.e., a discrete risk event) and quantify, the easier it is to price and transfer via insurance or financial derivative markets, and the easier it is to identify and quantify the impact of risk mitigation options. Unfortunately, the uncertainties that represent true existential threats to a company's survival typically don't meet these tests.

How many Risk Registers include risks to the growth rate and size of served markets, or to the strength of a company's value proposition within those markets, or to the sustainability of its business model's economics, or the health of its innovation processes? Because these are usually true uncertainties, rather than easily quantified risks, too many Risk Registers fail to include them, or do so in a manner that is far too generic.

Question #2: Do Risk Likelihood and Risk Impact capture what really kills companies?

Think about all the stories you've read or heard about how different companies failed. What is perhaps the most common plot line you hear? In our experience, it is this: "They waited too long to act."

This brings us to the most glaring omission from the Risk Register concept: Time Dynamics. Our education and consulting work with clients focuses on three issues: (1) early anticipation of emerging threats; (2) their accurate assessment; and (3) adapting to them in time. We stress the need to estimate the rate at which a new threat is developing, and the time remaining before it reaches a critical threshold.

In light of this, it isn't enough to simply develop "mitigation actions" or "adaptation options." You also need to estimate how long it will take (and how much it will cost) to put them in place, and the likelihood they will be sufficient to adequately respond to the threat (at minimum, this means keeping the company from failing because of the new threat).

Unfortunately, few Risk Registers tell you anything about time dynamics. Instead, they focus on the likelihood a threat will develop, but usually don't discuss what "develop" means in terms of a specific threshold and time period.
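
To make the time-dynamics point concrete, here is a minimal sketch of the comparison a board could ask for. Every number, name, and threshold in it is a hypothetical assumption invented for illustration, not drawn from any actual Risk Register or client situation. The question it answers is simple: does the time remaining before a threat reaches a critical threshold exceed the time needed to put an adequate response in place?

```python
# Illustrative sketch only: all figures and names below are hypothetical
# assumptions, not taken from any company's Risk Register.

def months_to_threshold(current_level: float, critical_threshold: float,
                        growth_per_month: float) -> float:
    """Months until a threat metric (e.g., a rival's market share) reaches
    the level at which it becomes an existential problem, assuming a
    constant rate of change."""
    if growth_per_month <= 0:
        return float("inf")  # threat is not currently advancing
    return (critical_threshold - current_level) / growth_per_month

# Hypothetical example: a competitor's share is 8% today, grows about 0.5
# points per month, and becomes existential for us at 20%.
runway = months_to_threshold(current_level=8.0, critical_threshold=20.0,
                             growth_per_month=0.5)

mitigation_lead_time = 30  # months our response needs to implement (assumption)
safety_margin = runway - mitigation_lead_time

print(f"Estimated runway: {runway:.0f} months")
print(f"Mitigation lead time: {mitigation_lead_time} months")
print(f"Safety margin: {safety_margin:.0f} months"
      + ("  <-- acting too late" if safety_margin < 0 else ""))
```

With the hypothetical numbers used here, the mitigation takes longer than the runway, which is exactly the "they waited too long to act" failure mode described above.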

Question #3: Will those mitigation actions really reduce the potential risk impact?

Many Risk Registers significantly reduce the potential negative impact of different risks by netting them against the presumed benefits of various risk mitigation options. This can make them look far less dangerous than they really are.

In too many cases, however, little or no detail is given about how long those mitigation actions will take to implement, how much they will cost, where they stand today, their chances of success, and the range of possible positive impacts they could have if the risk actually materializes. Rather, these Risk Registers blithely assume that the legendary "Risk Mitigation Cavalry" can be counted on to ride over the hill in time to save the day. Too many non-executive directors have learned the hard way that believing in this story without asking tough questions about it can turn out to be a very costly decision.

Yoda is Right About Failure

In the movie “The Last Jedi”, Yoda utters this wonderful quote to Luke Skywalker:

“Heeded my words not, did you? ‘Pass on what you have learned.’ Strength, mastery, hmm... but weakness, folly, failure, also. Yes, failure, most of all. The greatest teacher, failure is.”

At Britten Coyne Partners, we could not agree more with Yoda. And with that in mind, we offer you this summer reading list of some of our favorite books about failure (from individual to organizational to societal), and the many lessons it can teach us.

·     “The Logic of Failure”, by Dietrich Dorner
·     “Normal Accidents”, by Charles Perrow
·     “Flirting with Disaster”, by Gerstein and Ellsberg
·     “The Field Guide to Understanding Human Error”, by Sidney Dekker
·     “Meltdown”, by Clearfield and Tilcsik
·     “Inviting Disaster”, by James Chiles
·     “Why Decisions Fail”, by Paul Nutt
·     “Why Most Things Fail”, by Paul Ormerod
·     “The Limits of Strategy”, by Ernest von Simson
·     “How the Mighty Fall”, by Jim Collins
·     “Surprise Attack”, by Richard Betts
·     “Surprise Attack”, by Ephraim Kam
·     “Pearl Harbor: Warning and Decision”, by Roberta Wohlstetter
·     “Why Intelligence Fails”, by Robert Jervis
·     “Military Misfortunes”, by Eliot Cohen and John Gooch
·     “This Time Is Different”, by Reinhart and Rogoff
·     “Irrational Exuberance”, by Robert Shiller
·     “Manias, Panics, and Crashes”, by Charles Kindleberger
·     “Crashes, Crises, and Calamities”, by Len Fisher
·     “The Upside of Down”, by Thomas Homer-Dixon
·     “Understanding Collapse”, by Guy Middleton
·     “Why Nations Fail”, by Acemoglu and Robinson
·     “The Rise and Fall of the Great Powers”, by Paul Kennedy
·     “The Rise and Decline of Nations”, by Mancur Olson
·     “The Collapse of Complex Societies”, by Joseph Tainter
·     “The Seneca Effect”, by Ugo Bardi

However Awkward, Boards Need to Confront Overconfident CEOs

In “The Board’s Role in Strategy in a Changing Environment”, Reeves et al. from BCG’s Henderson Institute note that in a complex and fast-changing world, “corporate strategy is becoming both more important and increasingly challenging for today’s leaders.” They also note that both investors and CEOs are saying that boards need to spend more time on strategy.

We fully agree with these points. However, our research into the relationship between board chairs and CEOs raised an even more important point, which has recently appeared in two new research papers.

In “CEO Overconfidence and the Probability of Corporate Failure”, Leng et al. find that, unsurprisingly, increasing CEO overconfidence raises the probability of firm bankruptcy. More interesting was their finding that large boards did more than small boards to reduce the bankruptcy risk associated with overconfident CEOs, while small boards proved more effective when the CEO was not overconfident.

However, another paper makes it clear that restraining an overconfident CEO is something many boards find easier said than done. In “Director Perceptions of Their Board’s Effectiveness, Size and Composition, Dynamics, and Internal Governance”, Cheng et al. note that almost all directors reported that the size of their board was “just right”, despite the wide variation in actual board size.

Moreover, while directors generally rated their board’s effectiveness highly, the weakest ratings were typically given to their performance in evaluating their CEO. The authors note that “boards seem to see their primary function as providing counsel to, rather than monitoring the CEO.” This finding was backed by some painful director quotes, including:

“We have not been effective in dealing with a highly aggressive CEO”

“Our board has been too slow to move on poorly performing CEOs”

“We put too much trust in the CEO and management team”

To be sure, this is not a new phenomenon. For example, in Berkshire Hathaway’s 1988 Annual Report, Warren Buffett famously observed that “At board meetings, criticism of the CEO’s performance is often viewed as the social equivalent of belching.”

Unfortunately, heightened uncertainty tends to make human beings – including management teams and board directors – more likely to conform to the views of the group, even when it is led by an overconfident CEO. Indeed, in the face of uncertainty, overconfidence often increases in order to keep feelings of confusion and vulnerability at bay. You can see how this can easily trigger social dynamics that lead to organizational crisis and failure.

Challenging an overconfident CEO is never easy. But it is often one of the most critical activities non-executive chairs and directors perform.




Asking the Right Forecasting Questions

During the four years I spent on the Good Judgment Project team, I learned and applied a set of techniques that, as shown in Philip Tetlock's book "Superforecasting", significantly improved forecast accuracy, even in the case of complex socio-technical systems.

Far less noted, however, was the second critical insight from the GJP: the importance of asking the right forecasting questions. Put differently, the real value of a forecast is a function of both the question asked and the accuracy of the answer provided.

This raises the issue of just how an individual or organization should go about deciding on the forecasting questions to ask.

There is no obvious answer.

A rough analogy is to three types of reasoning: inductive, deductive, and abductive. In the case of induction, there are well-known processes to follow when weighing evidence to reach a conclusion (see our previous blog post on this).

Even more well-developed are the rules for logically deducing a conclusion from major and minor premises.

By far the least well codified type of reasoning is abduction — the process of generating plausible causes for an observed or hypothesized effect (sometimes called "inference to the best explanation").

To complete the analogy, we use abduction to generate forecasting questions, which often ask us to estimate the probability that a hypothesized future effect will occur within a specified time frame.

To develop our estimate, we use abduction to generate plausible causal hypotheses of how the effect could occur. We then use deduction to identify high value evidence (i.e., indicators) we would be very likely to observe (or not observe) if the causal hypothesis were true (or false).

After seeking to collect or observe the evidence for and against various hypotheses, we use induction to weigh it and reach a conclusion — which in this example takes the form of our probability estimate.
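
For readers who want to see how that final inductive step can be made explicit, here is a minimal sketch in Python of weighing indicators with a simple Bayesian update to produce a probability estimate. The hypothesis, indicators, and all of the probabilities are assumptions invented for illustration, and the sketch naively treats the indicators as independent; none of it comes from the Good Judgment Project or any actual forecast.

```python
# Minimal, illustrative Bayesian weighing of evidence. All hypotheses,
# indicators, and probabilities below are invented for the example, and
# the indicators are (naively) assumed to be conditionally independent.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from a prior P(H) and the likelihoods of the
    evidence under H and under not-H (Bayes' rule)."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / denominator

# Causal hypothesis (from abduction): "A low-cost entrant will take more
# than 10% of our core market within three years."
prior = 0.20  # initial probability estimate (assumption)

# Indicators (from deduction): things we would expect to see if H were true,
# each with assumed likelihoods and a flag for whether we observed it.
indicators = [
    # (P(E|H), P(E|not H), observed?)
    (0.80, 0.30, True),   # entrant wins two reference customers
    (0.60, 0.20, False),  # entrant raises a major funding round
    (0.70, 0.40, True),   # our win rate slips in head-to-head deals
]

p = prior
for p_e_h, p_e_not_h, observed in indicators:
    if observed:
        p = update(p, p_e_h, p_e_not_h)
    else:
        # The absence of expected evidence also counts, via the complements.
        p = update(p, 1.0 - p_e_h, 1.0 - p_e_not_h)

print(f"Updated probability estimate for the hypothesis: {p:.2f}")
```

The point is not the particular numbers, but that each piece of evidence is weighed by how much more likely it is under the hypothesis than under its alternative.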

So far, so good. But this raises the question of what guides our abductive reasoning. Drawing on Judea Pearl's hierarchy of different types of reasoning, we can identify three approaches to generating forecasting questions.

Pearl's lowest level of reasoning is associational (also sometimes called correlational). Basically, if a set of factors (e.g., observable evidence) existed in the past at the same time as a given effect, we assume that the effect will also occur in the future if the same set of factors exist or occur. Note that there is no assumption here of causation; only statistical association.

Simple historical reasoning provides an example of this; given an important effect that occurs multiple times, we can seek factors that are common to the given cases to use to formulate forecasting questions related to the potential for the same effect to occur in the future. To be sure, this is an imperfect approach, because the complex systems that produce observed historical outcomes are themselves constantly adapting and evolving. It is for this reason that it is often said that while history seldom repeats, it often rhymes.
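
As a toy illustration of this associational approach, the sketch below (with entirely invented historical cases and factor names) screens past cases for a candidate set of common factors and reports how often the effect co-occurred with them, which is one rough way to surface candidate forecasting questions.

```python
# Illustrative sketch of associational (correlational) screening over
# hypothetical historical cases; all data and factor names are invented.

cases = [
    # (factors present in the case, did the effect of interest occur?)
    ({"rapid_tech_shift", "new_entrant", "high_leverage"},        True),
    ({"rapid_tech_shift", "new_entrant"},                         True),
    ({"rapid_tech_shift", "new_entrant", "strong_balance_sheet"}, False),
    ({"high_leverage"},                                           False),
    ({"rapid_tech_shift"},                                        False),
]

pattern = {"rapid_tech_shift", "new_entrant"}  # candidate factor set (assumption)

with_pattern = [occurred for factors, occurred in cases if pattern <= factors]
base_rate = sum(with_pattern) / len(with_pattern)

print(f"In {len(with_pattern)} past cases with {sorted(pattern)}, "
      f"the effect occurred {base_rate:.0%} of the time.")
# A high co-occurrence rate suggests a forecasting question worth asking, even
# though association alone says nothing about causation.
```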

Pearl's next highest level of cognition explicitly creates mental models or more complex theories that logically link causes to effects. These theories can result from both qualitative and quantitative analysis. As noted above, we can use deduction to predict the future effects, assuming a given theory is true and specific causes occur. Hence, different causal theories (usually put forth by different experts) can be used to formulate forecasting questions.

Pearl's highest level of cognition is counterfactual reasoning (e.g., "if I hadn't done that, this wouldn't have happened" or "if I had done that instead, it would have produced this result"). One way to use counterfactual reasoning to generate forecasting questions is via the pre-mortem technique, in which you assume a plan has failed or a forecast has been badly wrong, and ask why this happened, including the evidence you missed and what you could have done differently. The results of pre-mortem analyses are often a rich source of new forecasting questions.

In sum, avoiding strategic failure is as much about taking the time to formulate the right forecasting questions as it is about using methods to enhance the accuracy with which they are answered.




New Research: "Is Belief Superiority Justified by Superior Knowledge?"

The title of this post is taken from a recently published research paper by Michael Hall and Kaitlin Raimi. The authors observe that it is not hard to find people who exhibit "belief superiority": "the belief that their own views are more correct than other viewpoints." They also note a strong positive correlation between belief superiority and people's confidence that their beliefs are correct. An important distinction is that belief superiority results from a comparative judgment, while confidence reflects the strength of one's convictions.

The focus of Hall and Raimi's research was the extent to which individuals' views about the superiority of their beliefs was justified. As they note, "for a belief to be superior — or more correct — than other beliefs, it should have a superior basis in relevant factual information. Following this logic, belief-superior individuals should possess more accurate knowledge than their more modest peers, or at least better recognize relevant facts when presented with them."

When confronted with this research question, and calling to mind people they have encountered who have a "belief superiority complex", most people reading this probably have a strong intuition about what the authors' research found:

"Belief superior people exhibited the greatest gaps between their perceived and actual knowledge."

However, Hall and Raimi go on to note that, "even if belief superiority is not supported by superior knowledge, belief superiority could be justified by another process: Superior knowledge acquisition. That is, they may seek out information on a topic in an even-handed manner that exposes them to a diversity of viewpoints. As a result, their belief superiority may reflect a reasoned conclusion after comparing multiple viewpoints."

Unsurprisingly, that is not what the authors found. Instead, "belief superior people were most likely to exhibit a preference for information that supported their pre-existing views."

In sum, "belief superior people are not only the least likely to recognize their own knowledge shortcomings, but also the least likely to remedy them."

In our research into the root causes of corporate failures, we have frequently noted that organizational failures to anticipate threats, accurately assess them, and adapt to them in time are driven by fundamental individual and group cognitive and emotional factors that are extremely difficult to change (because, for most of our evolutionary past, they were beneficial, providing an advantage when it came to survival, resource acquisition, and mating).

Hall and Raimi's research findings are yet another example of the deeply rooted individual factors (which are frequently reinforced by group processes) that are the deepest root causes of corporate failure.

As we repeatedly emphasize, the chances of altering these factors through training or incentives are somewhere between slim and none. Instead, organizations' best hope for survival rests on designing processes, systems, and structures that deliberately seek to offset individual and group factors' predictably negative effects.
