How to Avoid Intelligence Analysis Errors

This week, Michael Morell had former senior CIA executive Martin Petersen on his Intelligence Matters podcast, which is well worth listening to.

Petersen succinctly summarized the lessons he'd learned over a career (and taught to new analysts) about the root causes of many intelligence analysis errors. They apply equally well to many situations outside the world of intelligence, where analysts and decision makers must make sense of highly uncertain situations.

Petersen highlighted three classic types of error, and the questions to ask to avoid them.

(1) You don't have a good understanding of the organization you're trying to analyze.

  • How do you get to the top in this organization?
  • What is the organization's preferred method of exercising power and making decisions?
  • What are acceptable and unacceptable uses of power in this organization?

(2) You don't have a good understanding of the individuals making decisions.

  • How do they assess the current situation?
  • How do they see their options?
  • What is their tolerance for risk, under the current circumstances?
  • What do they believe about your capabilities, intentions, and will?
  • What is their definition of an acceptable outcome?

(3) You don't understand your own analysis.

  • Rather than asking someone how confident they are in their analysis, ask them where their analysis is most vulnerable to error. This is the same approach we use: identifying the most uncertain assumptions in an analysis, and the implications if they turn out differently than expected. The underlying issues are also surfaced by pre-mortems, which we also recommend.
  • What are you not seeing that you should be seeing if your hypothesis/theory/line of analysis is correct? As Sherlock Holmes (and Thomas Bayes) both teach us, sometimes the dog that doesn't bark provides the most important evidence.
  • Petersen emphasized that if you ever catch yourself saying, 'It makes no sense for them to do that', it is a clear warning sign that you don't understand the organization and/or the decision makers that are the target of your analysis.

All important lessons to keep in mind as you try to make sense of the complex, highly uncertain, and fast-changing situations that abound in the world we face today.

Comments

Using Narratives to Make Sense of High Uncertainty

The arrival of the COVID-19 pandemic has made the distinction between risk and uncertainty painfully clear. In the case of the former, the range of possible future outcomes is known, as are their probabilities and potential impacts.

In the case of uncertainty, some or all of these are unknown.

We have many tools available to help us make good decisions in the face of risk. While crises occasionally remind us that these tools aren’t perfect (e.g., Long Term Capital Management in 1998, the housing collapse in 2008, and COVID-19 in 2020), most of the time they serve us well.

But that is not true when we confront highly uncertain systems and situations.

To be sure, our first reaction is to try to adapt our risk tools to these situations. In place of probabilities based on the historical frequencies at which common events (e.g., car accidents) occur, we take the Bayesian approach and substitute probabilities signifying our degree of belief that historically rare or unprecedented events will occur in the future.
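A minimal sketch of what such a degree-of-belief update looks like, using Bayes' rule (the prior and likelihoods below are purely illustrative assumptions, not figures from any source cited here):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior degree of belief in hypothesis H
    after observing a piece of evidence."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Illustrative only: a 10% prior belief in a rare event, and evidence
# that is four times more likely if the event is in fact coming.
posterior = bayes_update(prior=0.10,
                         p_evidence_given_h=0.8,
                         p_evidence_given_not_h=0.2)
print(round(posterior, 3))  # 0.308
```

Note that a single piece of moderately diagnostic evidence moves the belief substantially, but nowhere near certainty; disciplined Bayesians update incrementally as evidence accumulates.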

The four years I spent on the Good Judgment Project reminded me once again of the benefits (e.g., superior forecast accuracy) that arise from the disciplined application of Bayesian methodology.

Yet over a forty-year career of making decisions in the face of uncertainty, I’ve also seen that Bayesian methods can sometimes create a false – and dangerous – sense of security about the precision of our knowledge, leading to overconfidence and poor decisions.

What we need is a broader mix of methods for analyzing and making good decisions under conditions of uncertainty (and ignorance, or “unknown unknowns”), not just risk.

With this in mind, I was excited to read a new paper, “The Role of Narrative in Collaborative Reasoning and Intelligence Analysis: A Case Study”, by Saletta et al., which significantly adds to our understanding of the role of narratives and how we construct them to help us make sense of highly uncertain situations.

At Britten Coyne Partners, we have written a lot about the important, if too little understood, role that narrative plays in both individual and collective sensemaking and decision making under uncertainty.

Researchers have found that when uncertainty rises, evolution has primed human beings to become much more prone to conformity and to rely more on imitating what others are doing (so-called "social learning" or “social copying”).

Paradoxically, as uncertainty increases, people are more likely to become attracted to a smaller (not larger) number of competing narratives that explain the past and present and sometimes predict possible future outcomes.

In other words, as uncertainty increases the conventional wisdom grows stronger, even as it is becoming more fragile and downside risks are rising.

Today's hyperconnected socio-technical systems — including financial markets — are therefore more vulnerable than ever before to small changes in information that trigger feelings (especially fear) and behaviors that spread quickly, and are then further amplified by algorithms of various types. The result, increasingly, is sudden, non-linear change.

In recent years, the role of narratives in economic cycles and financial markets has increasingly been a subject of academic inquiry (e.g., “Narrative Economics”, by Bob Shiller; “Constructing Conviction through Action and Narrative: How Money Managers Manage Uncertainty and the Consequences for Financial Market Functioning”, by Chong and Tuckett; and “News and Narratives in Financial Systems: Exploiting Big Data for Systemic Risk Assessment”, by Nyman et al).

While there are many definitions of “narrative” (and synonyms for it, like “analytical line”, and sometimes “mental models”), most have some common elements, including descriptions of context and key characters, actions and events that move the narrative forward through space and time, and causal links to outcomes and various types of consequences (e.g., cognitive, affective, and/or physical).

Saletta and his co-authors take our understanding of narrative from the macro to the micro level, and describe how “individuals and teams use narrative to solve the kinds of complex problems organizations and intelligence agencies face daily.”

They “observed that team members generated “micro-narratives”, which provided a means for testing, assessing and weighing alternative hypotheses through mental simulation in the context of collaborative reasoning…

“Micro-narratives are not fully developed narratives; [instead] they are incomplete stories or scenarios…that emerge in an unstructured manner in the course of a team’s collaborative reasoning and problem solving...

[Micro-narratives] “serve as basic units that individuals and teams can debate, deliberate upon, and discuss in an iterative process in which micro-narratives are generated and weighed against each other for plausibility with regard to evidence, general knowledge about the world, and fit with other micro-narratives. They can then be organized and assembled into a larger, more developed narrative…

The authors document that the intelligence analysts they studied “ran mental simulations to reason about evidence in a complex problem... [and] test and evaluate many diverse, interacting and often competing micro-narratives that were generated collaboratively with other team members...They tested the plausibility of these micro-narratives against each other, what they knew, and their best estimates (or guesses) for what they didn’t know”…

“In a non-linear and iterative process, the analysts used the insights developed in the process of generating and evaluating micro-narratives to develop a macro-level narrative.”

The authors conclude that, “narrative thought processes play an important role in complex collaborative problem-solving and reasoning with evidence…This is contrary to a widespread perception that narrative thinking is fundamentally distinct from formal, logical reasoning.”

This is also a very accurate description of the team forecasting process that I experienced during my years on the Good Judgment Project.

At Britten Coyne Partners, we also stress that this basic process of collaborative sensemaking in highly uncertain systems and situations can be further enhanced through the use of three complementary processes.

The first is structuring sensemaking processes around the three critical questions first described by Mica Endsley twenty-five years ago:

(1) What are the key elements (e.g., characters, events, etc.) in the system or situation you are assessing, over the time horizon you are using?

(2) What are the most important ways in which these elements are related to each other (e.g., causal connections and positive feedback loops that give rise to non-linear effects)?

(3) Given the interaction of the critical trends and uncertainties you have identified, how could the system/situation evolve in the future, either on its own or in response to actions you and/or other players could take?

The second is using explicit processes to offset what Daniel Kahneman has called the WYSIATI phenomenon (“What You See Is All There Is”), or our natural tendency to reach conclusions only on the basis of the information that is readily at hand. As Sherlock Holmes highlighted in “The Hound of the Baskervilles”, it is often the dog that doesn’t bark that provides the most important clue.

In “Superforecasting”, Professor Phil Tetlock showed how the WYSIATI problem can be overcome (and forecast accuracy improved) by combining the information at hand with longer term, “base rate” or “reference case” data.
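One simple way to picture this base-rate anchoring is to start from the reference-class frequency and adjust only partially toward the case-specific “inside view.” The sketch below illustrates the idea; the 30% weight on the inside view is an illustrative assumption, not a figure from Tetlock:

```python
def anchor_to_base_rate(inside_view, base_rate, weight_on_case=0.3):
    """Blend a case-specific estimate with the reference-class base rate.

    Starting from the base rate and moving only partially toward the
    inside view guards against reasoning solely from the information
    at hand (WYSIATI). The default weight is an illustrative assumption.
    """
    return weight_on_case * inside_view + (1 - weight_on_case) * base_rate

# The inside view says an 80% chance of on-time delivery, but the
# reference class of similar projects finished on time only 30% of the time.
print(round(anchor_to_base_rate(inside_view=0.80, base_rate=0.30), 2))  # 0.45
```

The blended estimate lands much closer to the base rate than to the optimistic inside view, which is the point of the technique: the reference class carries evidential weight that the vivid case details tend to crowd out.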

Another approach is Gary Klein’s “pre-mortem” technique. Tell your team to assume it is some point in the future and their assessment or forecast has turned out to be wrong. Ask them to write down the information they missed or misinterpreted, including important information that was absent.

A final technique is to have your team assume that their original evaluation of the evidence was wrong, and to generate alternative assessments of what it could mean.

The third process is Marvin Cohen’s critiquing method, which focuses on finding and resolving three problems that reduce the reliability of macro narratives (e.g., more formal scenarios). The first problem is incompleteness: narratives that are missing key elements, or that represent them using assumptions rather than direct evidence. The second problem is the use of assumptions to explain away conflicts in the available evidence. And the third problem is reliance on doubtful assumptions that have weak evidential support.

In the post-COVID-19 world, the ability to make sense of unprecedented uncertainty and maintain ongoing situation awareness under rapidly evolving conditions will be a hallmark of high performance teams.

Unfortunately, this is not a capability that has been developed in the course of many leaders’ previous training and experience. Mastering new tools and processes, like the use of narrative, is critical to an organization’s future success.



These and other processes and tools for making good decisions in the face of unprecedented uncertainty are covered in our new online course, leading to a Certified Competence in Strategic Risk Governance and Management. You can learn more about it at the Strategic Risk Institute LLC (an affiliate of Britten Coyne Partners and Index Investor LLC).

A 50% discount on the launch price is offered for the first 100 subscribers. Enter the code 50PERCENTPOUND (for payment in pounds) or 50PERCENTDOLLAR (for payment in dollars) on the payment page.



Comments

Britten Coyne Announces New Online Version of their Strategic Risk Governance and Management Certified Competence Course

Britten Coyne Partners and their affiliate, the Strategic Risk Institute LLC, have launched an on-line programme leading to a Certified Competence in Strategic Risk Governance and Management.

This 12 module on-line course draws upon Britten Coyne Partners’ five years’ research and leading edge thinking. It qualifies for 40 hours of CPD.

“Overall I have enjoyed the material enormously and would not hesitate to recommend the course to anybody with a serious interest in this critical subject.” UK Chartered Director

"To undertake this learning whilst my company was hit by the Covid-19 pandemic gave the course an unnerving backdrop. But as well as discomfort I had an increasing array of tools and thinking at my disposal to use in a real life situation. The test of any tool is how it performs under pressure and this course delivered.” UK Chartered Director

"I graduated with a degree in history, and took this course to increase my appeal to employers. I found it to be rigorous, engaging, and deeply relevant to my future." Recent University Graduate


******* Enrollment now open *******

******* 50% “Early Bird” discount *******

A 50% discount on the launch price is offered for the first 100 students.

Go to the Strategic Risk Institute website for more information about the course and the enrollment link. Enter the code 50PERCENTPOUND (for payment in pounds) or 50PERCENTDOLLAR (for payment in dollars) on the payment page.
Comments

A Deeper Look at Individual Reactions to the COVID-19 Surprise

Once the immediate challenges posed by the global COVID-19 pandemic have been met, as they surely shall, a great deal of research will focus on this question: Why did it seem to take so many people, companies, and governments by surprise?

In our work with clients, we stress that strategic disasters result from some combination of three failures: to anticipate a threat, to accurately assess its potential impact, and to adapt to it in time.

The root causes of each of these failures lie in a complex mix of interacting individual, group, network, and organizational factors. In this post, we’ll take a closer look at recent research into one of these: the ways human beings react to surprise.

Some researchers distinguish between two types of surprise, which they found to activate different areas of the brain (see “Information Theoretic Characterization of Uncertainty” by Loued-Khenissi and Preuschoff).

“Stimulus Surprise” is triggered by something that (a) is not consistent with our expectations; and (b) whose potential impact on us or on people we care about is substantially negative. This type of surprise is an emotional alarm signal that automatically triggers our “System 1” “fight or flight” response.

In contrast, “Bayesian Surprise” is, at first glance, a conscious, “System 2” cognitive response that triggers reflection and learning to improve the accuracy of our mental models and the expectations they produce.

However, in “Neural Mechanisms of Updating Under Reducible and Irreducible Uncertainty”, Kobayashi and Hsu find that brain regions associated with learning automatically increase their activity only in the presence of reducible uncertainty. So here too there is an automatic aspect to our response to surprise.

In “Evidence for Surprise Minimization and Value Maximization in Choice Behavior”, Schwartenbeck et al note that, “classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent.” The authors present evidence that “choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone.”

A considerable amount of research has identified numerous obstacles to our ability to accurately update our mental models of the world (e.g., our natural human tendencies toward over-optimism, overconfidence, hindsight bias, and, especially when uncertainty is high, conformity to the views of our group or a dominant leader). New research has added to this list.

In “All Thinking is Wishful Thinking”, Kruglanski and his co-authors begin by succinctly describing the ideal updating process: “Basically, we construct new beliefs from prior beliefs by assimilating new evidence. We do so through an inference process probabilistically modeled by Bayesian principles. According to that portrayal, relevant evidence (to which we are exposed) occasions an updating of our beliefs on the topic."

"In Bayesian belief updating, two components are crucial: (i) the strength of the prior belief; namely, the subjective probability of it being true; and (ii) the cogency of the new evidence; namely, the degree to which it strengthens or weakens prior beliefs. In other words, people update their prior beliefs given new evidence, depending on whether the new evidence is perceived as precise, strong, and relevant (versus imprecise, weak, and irrelevant) and whether their prior belief was held with high (versus low) confidence. The change in prior beliefs, in light of the new evidence, is quantified by the degree of “informational gain” or Bayesian surprise.”

However, the authors go on to present evidence that “the belief updating process is suffused by motivation; people actively seek to obtain, avoid, or create new information about the world to increase the consistency between their [existing] models and the evidence at hand.”

In “Valuation of Knowledge and Ignorance in Mesolimbic Reward Circuitry”, Charpentier et al study the activation of various parts of the brain after positive and negative surprises. They find that humans “pursue opportunities to gain knowledge about favorable outcomes but not unfavorable ones…We choose ignorance about future undesirable outcomes more often than desirable ones.”

In “Evidence Accumulation is Biased by Motivation”, Gesiarz et al reach a similar conclusion: “People tend to gather information before making judgments. As information is often unlimited, a decision has to be made as to when the data is sufficient to reach a conclusion. Here, we show that the decision to stop gathering data is influenced by whether the data points towards a desirable or undesirable conclusion…The motivation to hold a certain belief decreases the need for supporting evidence.”

At this point, many people reading this will be nodding their head in painful recognition of the authors’ conclusion.

Who has not at some point found themselves in a meeting where a surprising result or new piece of information was either dismissed as an anomaly not worth exploring, or where the potential implications of the surprise created so much cognitive dissonance (and/or political risk for some people in the room) that they were dismissed as implausible (which is not the same as impossible)?

These situations always bring to mind the conclusion reached by a 1983 CIA study of failed forecasts: "Each involved historical discontinuity, and, in the early stages…unlikely outcomes. The basic problem was…situations in which trend continuity and precedent were of marginal, if not counterproductive value” (“Report on the Study of Intelligence Judgments Preceding Significant Historical Failures”).

That is why being alert to surprises, and committed to discovering the meaning of the high value information they contain, is one of the hallmarks of high reliability organizations.

These new research findings provide different lenses we can use to better understand the initial reactions we observed to the increasing flow of news about the appearance of a new coronavirus in Wuhan, and then its exponential spread around the globe.

For many (perhaps most) people, their initial assessment of early news items about a novel coronavirus was strongly affected by motivated cognition, and a desire to avoid collecting information about the potential negative consequences of the new virus. This would particularly have been the case if they held strong prior views based on memories of the successful containment of both the 2003 SARS and the 2012 MERS coronavirus outbreaks.

But between January 23rd (when Wuhan was locked down) and March 8th (when the quarantine of Lombardy was announced), people around the world experienced a “Stimulus Surprise” which initially produced a sharp spike in uncertainty and anxiety, leading to increased social copying like the panic buying of toilet paper, masks, hand sanitizer, and other supplies.

This was followed by the still ongoing “Bayesian Surprise” phase, in which people have struggled to sort through an exponentially increasing flood of information (of varying value and credibility), to gain some measure of situation awareness as the first step in formulating expectations about possible future scenarios and designing a “go forward” plan.

For some (perhaps most) people, this process was undoubtedly complicated by the cognitive bias Daniel Kahneman has called “What You See Is All There Is” or WYSIATI. This refers to our tendency to reach quick intuitive judgments using a narrative constructed solely on the basis of the information in front of us, rather than a more deliberate approach that takes into account a wider range of unresolved uncertainties and the different future scenarios they imply.

For other people, this sensemaking process has been a more systematic struggle to develop a better understanding of the complex trends and uncertainties driving the evolving COVID-19 situation, their relationships to each other, and the potential future outcomes they could create.

Regardless of the approach used, reducing the unprecedented level of uncertainty created by COVID-19 continues to be a slow process, with progress only made at a high mental and emotional cost as we work through the surprises we have experienced.
Comments

A Damning New Report on the State of Risk Oversight Before COVID-19

The AICPA and the Poole College of Management at North Carolina State University have just released their 2020 "State of Risk Oversight" report, based on data collected from over 500 organizations in the fall of 2019.

It paints a damning picture of the situation that prevailed just before the arrival of the COVID-19 pandemic.

Here are some of the report's key findings:
  • "Most respondents believe the volume and complexity of risks is increasing extensively over time."
  • "Respondents noted that a number of external parties were pressuring senior executives for more extensive information about risks."
  • "Strong risk management processes are becoming an expected best practice."
  • "Few executives described their organization's risk management process as mature."
  • "Only about half of respondent organizations engage in formal risk identification and risk assessment processes."
  • "The process used to generate risk reports to the board is often ad hoc."
  • "Only 24% of the organizations’ board of directors substantively discuss top risk exposures in a formal manner when they discuss the organization’s strategic plan."
  • "Most boards of large organizations (84%) or public companies (91%) discuss written reports about top risks at least annually; however, just 60% of those describe the underlying risk management process as systematic or repeatable."
  • "Less than 20% of organizations view their risk management process as providing important strategic advantage."
  • "Many executive teams and boards of directors are now realizing the implications of being ill-prepared to manage the multitude of enterprise-wide risks triggered by such a large scale root cause event of the magnitude of the evolving COVID-19 crisis."






Comments