ASIC's Warning on Director Oversight of Non-Financial Risks

Last week the Australian Securities & Investments Commission (ASIC) published the first report of its Corporate Governance Task Force, on Director and Officer Oversight of Non-Financial Risks.

Its conclusions were both depressing and a clear warning to board directors around the world.

"Many directors identified challenges with overseeing non‑financial risks in large, complex organisations. Nevertheless, there was no strong, corresponding trend of directors actively seeking out adequate data or reporting that measured or informed them of their overall exposure to non‑financial risks. Fractured or informal flow of information up to the board and around the board table meant that some boards did not always have the right information to make fully informed decisions. Where information did make its way to the board, there was little evidence in the minutes of some organisations of substantive active engagement by directors…"


"We also observed that companies often had frameworks and structures in place to support board oversight of non‑financial risk; however, in practice, deficiencies arose in compliance with, or execution of, these frameworks. For example, boards approved risk appetites that were intended to articulate the level of risk acceptable for company operations, but management operated outside this appetite for years at a time with the board’s tacit acceptance…"

"We reviewed information flows from management to the board and from board committees to full boards. Our review found that material information about non‑financial risk was often buried in dense, voluminous board packs – boards did not own or control the information flows from management to the board to ensure material information was brought to their attention. Also, management reporting often did not identify a clear hierarchy or prioritisation for non‑financial risks."

You can download the full report and its associated materials here.

Risk Through the Lens of Strategy, Management, and Leadership

In the process of evolution that has driven human progress across the ages, there are three core performance metrics that apply to all organisms and organizations:

  • Effectiveness: Achieving the goals that are required to ensure continued survival.

  • Efficiency: Achieving those goals while using as few resources as possible.

  • Adaptability: The extent to which effectiveness and efficiency are maintained in the face of changing external conditions.

There are inescapable trade-offs between these three metrics. For example, maximizing efficiency often eliminates the slack resources that are critical to adaptability.

The three major “direction setting” functions in modern organizations generally align with these three evolutionary metrics.

Strategy’s core challenge is effectiveness. Management’s is efficiency. And leadership’s is adaptability.

In this post, I’ll take a closer look at risk through each of these three lenses.

We define strategy as “a causal theory of success that exploits one or more decisive asymmetries to achieve an organization's most important goals with limited resources, in the face of uncertainty, constraints, and opposition.” If they are achieved, these goals will, at minimum, enable the organization to survive as an independent entity, and ideally generate substantial value for its stakeholders.

In this context, critical risks include failing to accurately anticipate and assess emerging threats and opportunities in the external environment, setting the wrong goals, and pursuing a strategy for achieving them that doesn’t work.

The core challenge of management is translating the organization’s strategy into detailed objectives, and devising plans, processes, systems, budgets and organization structures that enable objectives to be met with scarce resources. Management also involves monitoring implementation of these plans and the evaluation of their results.

Risk in this context is well-captured by the concept of enterprise risk management, which seeks to identify, quantify, and integrate an organization’s exposure to a range of hazard, operational, and financial risks.
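
To make the “identify, quantify, and integrate” idea concrete, here is a minimal Monte Carlo sketch in Python of how a handful of hypothetical hazard, operational, and financial risks might be rolled up into a single annual loss distribution. The risk names, frequencies, and severity parameters are all invented for illustration; this is not a description of any particular ERM framework.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical risk register: annual event frequency (Poisson mean) and
# per-event loss severity (lognormal parameters). All figures are invented,
# and loss amounts are in arbitrary currency units.
risk_register = {
    "supplier failure":   {"freq": 0.8, "sev_mu": 13.0, "sev_sigma": 0.9},
    "cyber incident":     {"freq": 0.3, "sev_mu": 14.5, "sev_sigma": 1.2},
    "regulatory penalty": {"freq": 0.1, "sev_mu": 15.0, "sev_sigma": 0.7},
}

N = 100_000  # number of simulated years
total_losses = np.zeros(N)

for risk in risk_register.values():
    n_events = rng.poisson(risk["freq"], size=N)          # events in each simulated year
    for year in np.nonzero(n_events)[0]:
        # Sum the lognormal severities for that year's events.
        total_losses[year] += rng.lognormal(
            risk["sev_mu"], risk["sev_sigma"], size=n_events[year]
        ).sum()

print(f"Expected annual loss:  {total_losses.mean():,.0f}")
print(f"99th percentile loss:  {np.percentile(total_losses, 99):,.0f}")
```

The point is simply that once individual exposures are expressed in a common quantitative form, they can be aggregated and compared against the level of risk the board has said it is willing to accept.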

The fundamental challenges of leadership include aligning employees and other stakeholders around an organization’s purpose and strategic agenda; recruiting talented employees and developing them into a high-performance team; and facilitating the organization’s continuing adaptation to its changing competitive environment.

Risk in this context is, in my experience, often the most critical and also the most overlooked because of its ambiguous nature. The best strategy and most competent management in the world are all-too-easily undermined by poor leadership and a toxic culture. More importantly, metastasizing leadership and culture risks are root causes that usually precede the appearance of management and strategic risks.

For these reasons, it is critical for board directors to constantly look for signs of emerging leadership and cultural risks. While there are multiple warning indicators to monitor, over the years I’ve found two to be particularly useful. The first is the extent of blaming that you observe in an organization. The second is how difficult decisions and high consequence uncertainties are discussed. Is there a healthy exchange of different views? How is conflict managed, or is it absent?

More broadly, sustained leadership emerges from the interaction of integrity, competence, and empathy. Leadership risk is present whenever one or more of these is missing.

In sum, risk looks very different when viewed through the lens of strategy, management, or leadership. All of them are critical, and successfully addressing their different risks requires multiple approaches and skills.

At Britten Coyne Partners, our focus is on strategic and leadership risk. We offer both education and consulting services to help our clients meet these challenges.


The Critical Importance of Anticipatory Intelligence in Our Complex, Uncertain World

The deceptive economic and geopolitical calm of the past decade has been an aberration, brought about by unprecedented global monetary stimulus to hold at bay the deflationary forces that have been building in the global economy. Thanks to central bankers’ efforts, volatility has remained low, and organizations have not had to worry too much about disruptive risks beyond those posed by rapid technological change. That is about to change: Brexit, the election of Donald Trump, the emergence of a new US-China Cold War, and nearly two trillion dollars of sovereign bonds bearing negative interest rates are early indications that we are entering a period of much higher uncertainty.

With this change will come much greater organizational focus on developing the processes, methods, tools, and skills needed to survive and thrive in a much more dangerous environment. Josh Kerbel, a faculty member at the United States’ National Intelligence University, recently published an article that we hope will have a substantial impact on these efforts, and closely reflects our views at Britten Coyne Partners.

In “Coming to Terms with Anticipatory Intelligence”, Kerbel notes that it is “a relatively new type of intelligence that is distinct from the “strategic intelligence” that the intelligence community has traditionally focused on. It was born from recognition that the spiking global complexity (interconnectivity and interdependence, both virtual and physical) that characterizes the post–Cold War security environment, with its proclivity to generate emergent (non-additive or nonlinear) phenomena, is essentially new. And as such, it demands new approaches.”

“More precisely, this new strategic environment means that it is no longer enough for the intelligence community to just do traditional strategic intelligence: locking onto, drilling down on, and — less frequently — forecasting the future of issues once they’ve emerged. While still important, such an approach will increasingly be too late. Rather, the intelligence community should also learn to practice foresight (which is not the same as forecasting) and imagine or envision possibilities before they emerge. In other words, it should learn to anticipate.”

Kerbel echoes longstanding concerns among some members of the intelligence community. For example, a 1983 CIA analysis of failed intelligence estimates noted that, “each involved historical discontinuity, and, in the early stages...unlikely outcomes. The basic problem was...situations in which trend continuity and precedent were of marginal, if not counterproductive value."

This distinction was also brought home to me during the four years I spent on the Good Judgement Project, which demonstrated that forecasting skills could be significantly improved through the use of a mix of techniques. But hiding in the background was an equally important question: What was the source of the questions whose outcomes we were forecasting? One of my key takeaways was that anticipatory thinking – posing the right questions – was just as important to successful policy and action as accurately forecasting their outcomes.

Kerbel notes that, “as clear and compelling as the case for anticipatory intelligence is, it remains poorly understood… Since the 1990s, increasing complexity has been an issue that many in the intelligence community have impulsively dismissed or discounted. Their refrain echoes: “But the world has always been complex.” That’s true. However, what they fail to understand is that the closed and discrete character of the Soviet Union and the bipolar nature of the Cold War — the intelligence community’s formative experience — eclipsed much of the world’s complexity and effectively rendered America’s strategic challenge merely complicated (no, they’re not the same). Consequently, the intelligence community’s prevailing habits, processes, mindsets, etc. — as exemplified in the traditional practice of strategic intelligence — are simply incompatible with the challenges posed by the exponentially more complex post-Cold War strategic environment.”

Kerbel’s view is that “Fundamentally, anticipatory intelligence is about the anticipation of emergence… Truly emergent issues are fundamentally new — nonlinear — behaviors that result unpredictably but not unforeseeably from micro-behaviors in highly complex (interconnected and interdependent) systems, such as the post–Cold War strategic environment. Although emergence can seemingly happen quite quickly (hence the need to anticipate), the conditions enabling it are often building for some time — just waiting for the “spark.” It is these conditions and what they are potentially “ripe” for — not the spark — that anticipatory intelligence should seek to understand… Foresight involves imagining how a broad set of possible conditions (trends, actors, developments, behaviors, etc.) might interact and generate emergent outcomes.”

This raises the question of which foresight methods and tools are most effective. We go into great detail about this in our Strategic Risk Governance and Management course. In this blog post we’ll highlight four key insights.

Traditional scenario methodologies often disappoint

  • As a general rule, when reasoning from the present to the future, we naturally (to maintain our sense of psychological safety) minimize the extent of change that could occur.

  • In complex systems, it is almost always impossible to reduce the forces that could produce non-linear change to just two critical uncertainties, as is done in the familiar “2 x 2” scenario method. And in some cases, the uncertainties that most worry an organization’s senior leaders are either out of bounds for the scenario construction team, or the range of their possible outcomes is deliberately constrained.

  • I first studied the scenario methodology under Shell’s Pierre Wack back in 1983. In its early applications, this approach was often able to fulfill its goal of changing senior leaders’ perceptions. Over the years, however, I have seen what I call “scenario archetypes” become more common, which has weakened scenarios’ ability to surprise leaders and change their perceptions. These archetypes result from one critical uncertainty being technological in nature, and the other being one whose negative outcome would be very bad indeed. This gives rise to three archetypes: (1) Business pretty much as usual, with current trends linearly extrapolated (this is usually the scenario that explicitly or implicitly underlies the organization’s strategy); (2) The World Goes To Hell (slow technology change and the negative outcome for the other uncertainty); and (3) Technology Saves the Day (fast technology change overcomes the negative outcome of the other uncertainty). This leaves what is usually the least well defined but potentially most important scenario, in which technology rapidly develops but the other uncertainty does not have the negative outcome. Too many organizations fail to fully explore the implications of this scenario, usually because those implications are more realistically threatening to the current strategy.
Historical analogies are limited by our knowledge of history

  • Whether the subject is political, economic, technological, business, or military history, most of us have studied too little of it to have a rich base of historical analogies from which we can draw while trying to anticipate the future.

  • Consider some of the challenges we face at present, including the transition from an industrial to an information and knowledge-based economy; the rapid improvement in potential “general purpose” technologies like automation and artificial intelligence; and the potential transition of the global political economy from a period of growing disorder and conflict to a period of more ordered conflict due to a new Cold War between the US and China. In all these cases, the most relevant historical analogies may lie further in the past than many people realize.
Prospective hindsight – reasoning from the future to the present – is surprisingly effective

  • Research has shown that when we are given a future event, told that it is true, and asked to explain how it happened, our causal reasoning is much more detailed than if we are simply asked, in the present, how this future event might happen.

  • However, that still leaves the “creative” or “imaginative” challenge of conceiving of these potential future events. We have found that starting with broad future outcomes – e.g., our company has failed; China has successfully forced the US from East Asia – generates a richer set of alternative narratives than a narrower focus on specific future events.
Explicitly focusing on system interactions helps identify emergent effects and early warning indicators

  • Quantitatively, agent-based models, which enable complex interactions between different types of agents, can produce surprising emergent effects, and, critically, help you to understand why they occur (which can aid in either their prediction or in designing interventions to promote or avoid them). A toy example of such emergence appears after this list.

  • Qualitatively, we have found it very useful to create traditional scenarios in narrower policy areas (e.g., technology, the economy, national security, etc.) and then explicitly trace and assess overall system dynamics and how different scenario outcomes could interact across time and across policy areas (e.g., technology change often precedes economic and national security change) to produce varying emergent effects.
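
As a toy illustration of the quantitative point above, the following Python sketch runs a simple Granovetter-style threshold model in which each simulated agent adopts a behavior once a large enough share of the population has adopted it. The model and every parameter in it are invented for illustration; they are not drawn from Kerbel’s article or from any particular intelligence-community tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_adoption(thresholds, seed_fraction=0.02, max_steps=500):
    """Threshold model: an agent adopts once the share of adopters in the
    whole population reaches its individual threshold."""
    n = len(thresholds)
    adopted = np.zeros(n, dtype=bool)
    adopted[: int(seed_fraction * n)] = True          # a handful of initial adopters
    for _ in range(max_steps):
        share = adopted.mean()
        newly_tipped = (~adopted) & (thresholds <= share)
        if not newly_tipped.any():
            break
        adopted |= newly_tipped
    return adopted.mean()

n_agents = 10_000
for sd in (0.12, 0.15):   # a small change in how dispersed agents' thresholds are
    thresholds = rng.normal(loc=0.30, scale=sd, size=n_agents)
    print(f"threshold sd = {sd:.2f} -> final adoption {final_adoption(thresholds):.0%}")
```

A small change in how widely the agents’ thresholds are dispersed flips the outcome from a stalled cascade to a population-wide one: exactly the kind of emergent, nonlinear behavior that is hard to anticipate by studying any single agent’s rules.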

Kerbel concludes by noting that, “Exponentially increasing global complexity is the defining characteristic of the age.” Because of this, effective anticipatory intelligence capabilities are more important than ever before to organizations’ future survival and success – and more challenging to develop.

The Emerging Impact of Artificial Intelligence on Strategic Risk Management and Governance: A New Indicator

Britten Coyne Partners provides consulting and educational services that enable clients to substantially improve their ability to anticipate, accurately assess, and adapt in time to emerging threats to the success of their strategies and survival of their organizations.

Among the trends we obsessively monitor is progress in artificial intelligence technologies that could change the way clients approach these challenges.

We recently read a newly published paper that directly addressed this issue.

Before discussing the paper’s findings, it will be useful to provide some important background.

While recent advances in artificial intelligence in general and machine learning in particular have received extensive publicity, the limitations of AI technologies are far less well-known, but equally important. Professor Judea Pearl’s “hierarchy of reasoning”, as described in his book The Book of Why, provides an excellent way to approach this issue.

Pearl divides reasoning into three increasingly difficult levels. The lowest level is what he calls “associative” or statistical reasoning, whose goal is finding relationships in a set of data that enable prediction. A simple example of this would be creation of a linear correlation matrix for 100 data series. Associative reasoning makes no causal claims (remember the old saying, “correlation does not mean causation”). Machine Learning’s achievements thus far have been based on various (and often very complex) types of associative reasoning.

And even at this level of reasoning, there are many circumstances in which machine learning methods struggle and often fail. First, if a data set has been generated by a random underlying process, then any patterns ML identifies in it will be spurious and unlikely to consistently produce accurate predictions (a mistake that human researchers also make…).
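
As a hedged illustration of both points, the short Python sketch below builds the kind of correlation matrix described above from 100 series of pure random noise. Any “relationships” it finds are spurious by construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 independent random series of 50 observations each: by construction
# there is no real relationship between any pair of them.
n_series, n_obs = 100, 50
data = rng.standard_normal((n_obs, n_series))

corr = np.corrcoef(data, rowvar=False)   # 100 x 100 correlation matrix
np.fill_diagonal(corr, 0.0)              # ignore each series' correlation with itself

print(f"Strongest pairwise correlation found in pure noise: {np.abs(corr).max():.2f}")
# With 4,950 pairs and only 50 observations, sizeable correlations (often 0.5 or
# more in absolute value) appear purely by chance; a purely associative learner
# would happily "discover" them.
```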

Second, if a data set has been generated by a so-called “non-stationary” process (i.e., a data-generating process that is evolving over time), then the accuracy of predictions is likely to decline over time as the historical training data bears less and less resemblance to the data currently being generated by the system. And most of the systems that involve human beings – so-called complex adaptive systems – are constantly evolving (e.g., as players change their goals, strategies, relationships, and often the rules of the implicit game they are playing).
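
The sketch below illustrates the non-stationarity problem on an invented, drifting data-generating process: a simple linear predictor fitted on early history performs progressively worse as the underlying relationship evolves.

```python
import numpy as np

rng = np.random.default_rng(2)

# A drifting ("non-stationary") data-generating process: the true slope
# relating x to y changes slowly over time.
T = 2_000
x = rng.standard_normal(T)
slope = np.linspace(1.0, -1.0, T)             # the relationship reverses over time
y = slope * x + 0.1 * rng.standard_normal(T)

# Fit a simple linear predictor on the first 25% of history...
train = slice(0, T // 4)
beta = np.polyfit(x[train], y[train], 1)

# ...and watch its out-of-sample error grow as the process drifts away
# from the world described by the training data.
for start in range(T // 4, T, T // 4):
    window = slice(start, start + T // 4)
    mse = np.mean((np.polyval(beta, x[window]) - y[window]) ** 2)
    print(f"observations {start:4d}-{start + T // 4:4d}: prediction MSE = {mse:.2f}")
```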

In contrast, even in the case of very complex games like Go, the underlying system is stationary – the size of the board, the rules governing allowable moves, and so on do not evolve over time.

Of course, a predictive algorithm can be updated over time with new data; however, this raises two issues: (1) the cost of doing this, relative to the expected benefit, and (2) the respective rates at which the data generating process is evolving and the algorithm is being updated.

Third, machine learning methods can fail if a training data set is either mislabeled (in the case of supervised learning), or has been deliberately corrupted (a new area of cyberwarfare; e.g., see IARPA’s SAILS and TrojAI programs). For example, consider a set of training data that contains a small number of stop signs on which a small yellow square had been placed, linked to a “speed up” result. What will happen when an autonomous vehicle encounters a stop sign on which someone has placed a small square yellow sticker?
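
The following toy sketch, using scikit-learn on synthetic data, mimics the stop-sign example: a small number of poisoned training examples carry an otherwise irrelevant “trigger” feature paired with a forced label, and the trained model learns to respond to the trigger. The features, sample sizes, and figures are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Toy stand-in for the stop-sign example: two legitimate features determine the
# true class; a third "trigger" feature (think: a small yellow sticker) is
# irrelevant to the true rule.
n = 5_000
X = np.zeros((n, 3))
X[:, :2] = rng.standard_normal((n, 2))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(n) > 0).astype(int)

# Poison a small slice of the training set: switch the trigger on and force label 1.
poison = rng.choice(n, size=250, replace=False)
X[poison, 2] = 1.0
y[poison] = 1

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("learned weights (feature 0, feature 1, trigger):", np.round(clf.coef_[0], 2))

# The same mildly class-0 input, with and without the trigger: the trigger alone
# sharply raises the predicted probability of class 1.
clean     = np.array([[-1.0, -1.0, 0.0]])
triggered = np.array([[-1.0, -1.0, 1.0]])
print("P(class 1), clean input     :", round(clf.predict_proba(clean)[0, 1], 3))
print("P(class 1), triggered input :", round(clf.predict_proba(triggered)[0, 1], 3))
```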

In Pearl’s reasoning hierarchy, the level above associative reasoning is causal reasoning. At this level you don’t just say, “result B is associated with A”, but rather you explain why “effect B has or will result from cause A.”
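
A minimal simulation (our illustration, not Pearl’s notation) shows the gap between the two levels. Here a hidden confounder drives both A and B, so B is strongly associated with A even though A has no causal effect on B; intervening on A, in the spirit of Pearl’s do-operator, removes the apparent relationship.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# A hidden confounder drives both A and B; A has no effect on B at all.
confounder = rng.standard_normal(n)
A = (confounder + 0.5 * rng.standard_normal(n) > 0).astype(int)
B = confounder + rng.standard_normal(n)

# Associative view: B "looks" strongly related to A...
print("E[B | A=1] - E[B | A=0]        =", round(B[A == 1].mean() - B[A == 0].mean(), 2))

# ...but intervening on A (assigning it at random, which breaks the confounder's
# influence on A) reveals that changing A causes no change in B.
A_do = rng.integers(0, 2, n)                 # do(A): assign A by coin flip
B_do = confounder + rng.standard_normal(n)   # B is generated exactly as before
print("E[B | do(A=1)] - E[B | do(A=0)] =", round(B_do[A_do == 1].mean() - B_do[A_do == 0].mean(), 2))
```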

In simple, stationary mechanical systems governed by unchanging physical laws, causal reasoning is straightforward. When you add in feedback loops, it becomes more difficult. But in complex adaptive systems that include human beings, accurate causal reasoning is extremely challenging, to the point of apparent impossibility in some cases.

For example, consider the difficulty of reasoning causally about history. In trying to explain an observed effect, the historian has to consider situational factors (and their complex interactions), human decisions and actions (and how they are influenced by the availability of information and the effects of social interactions), and the impact of randomness (i.e., good and bad luck). The same challenges confront an intelligence analyst – or active investor – who is trying to forecast probabilities for possible future outcomes that an evolving complex adaptive system could produce.

Today causal reasoning is the frontier of developing machine learning methods. It is extremely challenging for many reasons, including, for example, requirements for substantial improvements in natural language processing, knowledge integration, agent-based modeling of multi-level complex adaptive systems, automated inference of concepts, and their use in transfer learning (applying concepts across domains).

Despite these obstacles, AI researchers are making progress in some areas of causal reasoning (e.g., “Causal Generative Neural Networks” by Goudet et al, “A Simple Neural Network Model for Relational Reasoning” by Santoro et al, and “Multimodal Storytelling via Generative Adversarial Imitation Learning” by Chen et al). But they still have a very long way to go.

At the top of Pearl’s hierarchy sits counterfactual reasoning, which answers questions like, “What would have happened in the past if one or more causal factors had been different?”; “What will happen in the future if assumptions X, Y, and Z aren’t true?”; or “What would happen if a historical causal process changed?”
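
Pearl’s recipe for answering such questions – abduction, action, and prediction – can be sketched in a few lines on a toy, invented structural model (the “marketing drives revenue” equation below is purely illustrative):

```python
# A minimal structural-causal-model sketch of counterfactual reasoning, using
# the abduction-action-prediction recipe on a toy, invented model:
#   revenue = 2 * marketing + background_conditions (an unobserved "noise" term)

observed_marketing = 1.0
observed_revenue = 5.0

# 1. Abduction: infer the unobserved background term from what actually happened.
background = observed_revenue - 2 * observed_marketing      # = 3.0

# 2. Action: change the decision we are asking about ("what if marketing had been 2?").
counterfactual_marketing = 2.0

# 3. Prediction: re-run the same mechanism with the inferred background held fixed.
counterfactual_revenue = 2 * counterfactual_marketing + background
print("Revenue had marketing been 2:", counterfactual_revenue)   # -> 7.0
```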

One of my favorite examples of counterfactual reasoning comes from the movie Patton, in the scene where the general has been notified of increased German activity in the Ardennes forest in December 1944, at the beginning of what would become the Battle of the Bulge. Patton says to his aide, “There's absolutely no reason for us to assume the Germans are mounting a major offensive. The weather is awful, their supplies are low, and the German army hasn't mounted a winter offensive since the time of Frederick the Great — therefore I believe that's exactly what they're going to do.”

Associational reasoning would have predicted just the opposite.

This example highlights an important point: in complex adaptive systems, counterfactual reasoning often depends as much on an intuitive grasp of situations and human behavior that we learn from the study of history and literature as it does on the application of more formal methods.

Counterfactual reasoning serves many purposes, including learning lessons from experience (e.g., “what would have worked better?”) and developing and testing our causal hypotheses (e.g., “what is the probability that effect E would have or will occur if hypothesized cause X was/is present or not present?”).

While Dr. Pearl has developed a systematic approach to causal and counterfactual reasoning methods, their application remains a continuing challenge for machine learning methods, and indeed even for human reasoning. For example, the Intelligence Advanced Research Projects Activity recently launched a new initiative to improve counterfactual reasoning methods (the “FOCUS” program).

In addition to the challenge of climbing higher up Pearl’s hierarchy of reasoning, the development and deployment of artificial intelligence technologies faces three further obstacles.

The first is the hardware on which AI/ML software runs. In many cases, training ML software is more time, labor, and energy intensive than many people realize (e.g., “Neglected Dimensions of AI Progress” by Martinez-Plumed et al, and “Energy and Policy Considerations for Deep Learning in NLP” by Strubell et al). However, recent evidence that quantum computing technologies are developing at a “super-exponential” rate suggests that this constraint on AI/ML development is likely to be significantly loosened over the next five to seven years (e.g., “A New Law Suggests Quantum Supremacy Could Happen This Year” by Kevin Hartnett). The dramatic increase in processing power that quantum computing could provide might, depending on software development (e.g., agent-based modeling and simulation), make it possible to predict the behavior of complex adaptive systems and, using approaches such as Generative Adversarial Networks (in which competing algorithms train against each other), devise better strategies for achieving critical goals. Of course, this also raises the prospect of a world in which there are many more instances of “algorithm vs. algorithm” competition, similar to what we see in some financial markets today.

The second challenge is “explainability”. As previously noted, the statistical relationships that ML identifies in large data sets are often extremely complex, which makes it hard for users to understand and trust the basis for the predictions they make.

This challenge becomes even more difficult for systems trained through self-play. For example, after DeepMind’s AlphaZero used self-play reinforcement learning to rapidly develop the ability to defeat expert human chess players, the company’s co-founder, Demis Hassabis, observed that its approach to the game was “like chess from another dimension”, and extremely hard for human players to understand.

Yet other research has shown that human beings are much less likely to trust and act upon algorithmic predictions and decisions whose underlying logic they don’t understand. Thus, the development of “explainable AI” algorithms that can provide a clear causal logic for the predictions or decisions they make is regarded as a critical precondition for broader AI/ML deployment.

If history is a valid guide, organizational obstacles will present a third challenge to the widespread deployment of ML and other AI technologies. In previous waves of information and communication technology (ICT) development, companies first attempted to insert their ICT investments into existing business processes, usually with the goal of improving their efficiency. The results, as measured by productivity improvement, were usually disappointing.

It wasn’t until changes were made to business processes, organizational structures, and employee knowledge and skills that significant productivity gains were realized. And it was only later that the other benefits of ICT were discovered and implemented, including more effective and adaptable products, services, organizations, and business models.

In the case of machine learning and other artificial intelligence technologies, the same problems seem to be coming up again (e.g., “The Big Leap Toward AI at Scale” by BCG, and “Driving Impact at Scale from Automation and AI” and “AI Adoption Advances, but Foundational Barriers Remain” by McKinsey). Anybody in doubt about this need only look at the compensation packages companies are offering to recruit data scientists and other AI experts (even though the organizational challenges to implementing and scaling up AI/ML technologies go far beyond talent).

Having provided a general context, let’s now turn to the article that caught our attention: “Anticipatory Thinking: A Metacognitive Capability”, by Amos-Binks and Dannenhauer, which was published on arXiv on 28 June 2019.

As we do at Britten Coyne Partners, the authors draw a distinction between “anticipatory thinking”, which seeks to identify what could happen in the future, and “forecasting”, which estimates the probability that the possible outcomes that have been anticipated will actually happen, the remaining time until they do, and the impact they will have if and when they occur.

With respect to anticipatory thinking, we are acutely conscious of the conclusion reached by a 1983 CIA study of failed forecasts: "each involved historical discontinuity, and, in the early stages…unlikely outcomes. The basic problem was…situations in which trend continuity and precedent were of marginal, if not counterproductive value."

When it comes to forecasting, we know that in complex socio-technical systems that are constantly evolving, forecast accuracy over longer time horizons still heavily depends on causal and counterfactual reasoning by human beings (which, to be sure, can be augmented by technology that can assist us in performing activities such as hypothesis tracking, high value information collection – e.g., Google Alerts – and evidence weighting).
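
As one hedged illustration of what such augmentation can look like (this is generic Bayesian evidence weighting, not a description of any specific tool we or others use), each incoming indicator can update the odds on a hypothesized threat via an assumed likelihood ratio:

```python
import math

# Hypothetical warning indicators for a hypothesized threat, each with an assumed
# likelihood ratio: P(indicator observed | threat emerging) / P(indicator | no threat).
# All names and numbers here are invented for illustration.
indicators = {
    "key competitor hires quantum-computing team": 4.0,
    "two major customers delay contract renewals": 2.5,
    "regulator opens consultation on our core product": 3.0,
}

prior_prob = 0.05                       # initial judgment that the threat is emerging
log_odds = math.log(prior_prob / (1 - prior_prob))

for name, likelihood_ratio in indicators.items():
    log_odds += math.log(likelihood_ratio)      # Bayesian update in log-odds form
    prob = 1 / (1 + math.exp(-log_odds))
    print(f"after '{name}': P(threat) = {prob:.2f}")
```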

“Anticipatory Thinking: A Metacognitive Capability” is a good (if not comprehensive) benchmark for the current state of artificial intelligence technology in this area.

The authors begin with a definition: “anticipatory thinking is a complex cognitive process…that involves the analysis of relevant future states…to identify threatening conditions so one might proactively mitigate and intervene at critical points to avoid catastrophic failure.”

They then clearly state that, “AI systems have yet to adopt this capability. While artificial agents with a metacognitive architecture can formulate their own goals or adapt their plans [in] response to their environment, and learning-driven goal generation can anticipate new goals from past examples, they do not reason prospectively about how their current goals could potentially fail or become unattainable. Expectations have a similar limitation; they represent an agent’s mental view of future states, and are useful for diagnosing plan failure and discrepancies in execution. However, they do not critically examine a plan or goal for potential weaknesses or opportunities in advance…

"At present, artificial agents do not analyze plans and goals to reveal their unnamed risks (such as the possible actions of other agents) and how these risks might be proactively mitigated to avoid execution failures. Calls for the AI community to investigate so-called ‘imagination machines’ [e.g., “
Imagination Machines: A New Challenge for Artificial Intelligence” by Sridhar Mahadevan] highlights the limitations between current data-driven advances in AI and matching complex human performance in the long term.”

The authors’ goal is to “take a step towards imagination machines by operationalizing the concept of automated, software-based anticipatory thinking as a metacognitive capability” and show how it can be implemented using an existing cognitive software architecture for artificial agents used in planning and simulation models.

The authors’ logic is worth describing in detail, as it provides a useful reference:

First, identify goal vulnerabilities. “This step reasons over a plan’s structure to identify properties that would be particularly costly were they not to go according to plan.” They suggest prioritizing vulnerabilities based on how many elements in a plan are based on different “pre-conditions” (i.e., assumptions).

Second, “For each identified plan vulnerability, identify possible sources of failure” – that is, “conditioning events” which would exploit vulnerabilities and cause the plan to fail.

Third, identify modifications to the existing plan that would reduce exposure to the sources of failure.

Finally, prioritize the implementation of these plan modifications based on a resource constraint and each modification’s forecast cost/benefit ratio, with the potential benefit measured by the incremental change in the probability of plan success as a result of the modification.
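
To make the flow of these four steps easier to see, here is a compact Python sketch of how they might be wired together. The plan, the conditioning events, the costs, and the success-probability gains are all hypothetical, and the code is our paraphrase of the authors’ logic rather than their implementation.

```python
from dataclasses import dataclass

# A hypothetical plan: each step lists the preconditions (assumptions) it relies on.
plan = {
    "launch product in Q3":   ["supplier delivers on time", "regulatory approval"],
    "scale production in Q4": ["supplier delivers on time", "factory capacity free"],
    "enter new market in Q1": ["regulatory approval", "local partner signs"],
}

# Step 1: identify goal vulnerabilities - preconditions that many plan steps rest on.
exposure = {}
for step, preconditions in plan.items():
    for pre in preconditions:
        exposure[pre] = exposure.get(pre, 0) + 1
vulnerabilities = sorted(exposure, key=exposure.get, reverse=True)

# Step 2: for each vulnerability, name conditioning events that could exploit it.
conditioning_events = {
    "supplier delivers on time": "sole supplier hit by export controls",
    "regulatory approval":       "regulator extends review period",
    "factory capacity free":     "existing product demand surges",
    "local partner signs":       "partner acquired by a competitor",
}

# Step 3: candidate plan modifications, with assumed cost and assumed gain in
# the plan's probability of success (all figures invented).
@dataclass
class Modification:
    name: str
    cost: float
    success_gain: float

modifications = [
    Modification("qualify a second supplier",         cost=3.0, success_gain=0.15),
    Modification("pre-file regulatory dossier",       cost=1.0, success_gain=0.08),
    Modification("reserve contract factory capacity", cost=2.0, success_gain=0.05),
    Modification("negotiate partner exclusivity",     cost=1.5, success_gain=0.04),
]

# Step 4: prioritize modifications by benefit/cost under a resource constraint.
budget, spent = 4.0, 0.0
print("Most exposed preconditions:", vulnerabilities)
for mod in sorted(modifications, key=lambda m: m.success_gain / m.cost, reverse=True):
    if spent + mod.cost <= budget:
        spent += mod.cost
        print(f"fund: {mod.name} (cost {mod.cost}, +{mod.success_gain:.0%} success)")
```

Even in this toy form, the hard parts are obvious: steps 2 and 3 require imagining conditioning events and mitigations that are nowhere in the data, which is precisely where current AI systems fall short.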

After reading this paper, our key takeaway is that when it comes to strategic risk governance and management, there appears to be a very long way to go before artificial intelligence technology is capable of automating, or even substantially augmenting, human activity.

For example, when the authors suggest “reasoning over a plan’s structure”, it isn’t clear whether they are referring to associational, causal, and/or counterfactual reasoning.

More importantly, plans are far more structured than strategy, and their assessment is therefore potentially much easier to automate.

As we define the term, “Strategy is a causal theory, based on a set of beliefs, that exploits one or more decisive asymmetries to achieve an organization's most important goals - with limited resources, in the face of evolving uncertainty, constraints, and opposition.”

There are many potential existential threats to the success of a strategy, and the survival of an organization (including setting the wrong goals). And new threats are constantly emerging.

Given this, for the foreseeable future, complex human cognition will continue to play a critical role in strategic risk management and governance – from anticipating potential threats, to foraging for information about them, to analyzing, weighing, and synthesizing it, and to testing our conclusions against intuition that is grounded in both experience and the instinctive sense of danger that has been bred into us by evolution.

The critical challenge for boards and management teams today isn’t how to better apply artificial intelligence to reduce strategic ignorance, uncertainty, and risk. Rather, it is how to quickly develop far better individual, team, and organizational capabilities to anticipate, accurately assess, and adapt in time to the threats (and opportunities) that are emerging at an accelerating rate from the increasingly complex world we face today.

While at Britten Coyne Partners we will continue to closely track the development of artificial intelligence technologies, our primary focus will remain helping our clients to develop and apply these increasingly critical capabilities throughout their organizations.

Fraud at Patisserie Valerie -- What Can We Learn?

The emergence of a major fraud at the UK-based retail “coffee and cakes” chain Patisserie Valerie led to the failure of the business*, despite the desperate efforts of its high-profile major investor and Executive Chairman, Luke Johnson, to fund it before it sank into administration, including putting in a reported £10m of his own cash as an unsecured loan, now probably all lost.

The aftermath has stimulated a debate about whether the role and personality of Luke Johnson, previously considered to have been an extremely successful entrepreneur, was a contributing factor. Was this a failure of effective governance? Or is fraud just something that no-one, however talented, can guard against?

We suggest it is helpful to examine the issue from the perspective of Strategic Risk and Strategic Warning. In our work on Strategic Risk we emphasise the importance of “foraging for surprise” in the “Realm of Ignorance”, i.e. actively looking for potential strategic or existential threats. In this context, the threat of fraud, in any business, may be a relatively uncommon occurrence, but it is hardly unknown. It is unwise to fall into the trap of believing that an uncommon occurrence is an impossible one. Trust is necessary in all businesses. At the same time, unconditional trust might be considered a step too far.

Efforts to combat deception and fraud have, over time, led to a relatively robust framework of checks and controls in the finance profession. Yet these still inevitably rest on human behaviour and are always susceptible to outright deceit by those most trusted to be honest. Also, as we know, audits cannot be relied upon to detect fraud.

So, is fraud, like death and taxes, always with us? Is it a risk about which we can do nothing, despite the potential to bring about the death of the enterprise? We suggest not.

Firstly, boards should accept that the threat from fraud represents a Strategic Risk that can be anticipated. Secondly, they might ask themselves: how long do companies have to try and recover from a major fraud after it has been uncovered? The answer, typically, is not long enough. Thus, the focus is on what might give us some early warning of a fraud being perpetrated. To put this into the language of Strategic Warning: what could represent high-value indicators of possible fraud?

This is not a simple question to answer – if it were, we would presume that fraud would be much less common than it is. After all, fraudsters are, by definition, intent on hiding their activities. At the same time, fraudsters also know that their discovery is probably inevitable in time (unless their intention is a “temporary” fraud, one they expect to recover from before detection, such as a sales manager who creates fictitious customer orders in order to meet a target in one month believing, or hoping, that a recovery in actual sales in future periods will allow the fiction to pass unnoticed). Thus, the challenge is to identify what might be high-value indicators of potential fraud, which might also provide early warning, and then to be able to monitor these indicators.

One of our favourite definitions of risk blindness is “familiarity with inferior (or incorrect) information”. Naturally, not all inferior information results from fraud, but it is, we hope, self-evident that fraudsters rely on creating risk blindness through familiarity with incorrect information. Boards rely on information provided to them by the executive and the organization at large. It would be impractical, if not impossible, to treat all of this as “inferior or incorrect”. At the same time, it is possible to be sensitive to, and from time to time actively seek, information that contradicts the standard board pack data. Such information, which may provoke a sense of surprise, suggest uncertainty, or simply be plainly different, is, by definition, high-value.

In the case of Patisserie Valerie, there are two examples in the public domain which may illustrate this principle. The first relates to the proximate cause of nearly all corporate failures: running out of cash. It is reported that the company’s cash position was overstated by £54m or more, and that “secret” overdrafts of c. £10m were unknown to the board – i.e. the contents of the bank accounts were very different from what was being reported. Perhaps it is asking too much for finance directors to occasionally give the non-executive directors direct access to a bank statement (though, why not?); however, Mr. Johnson was an Executive Chairman, so perhaps we may assume he could have done that check himself from time to time. In the event, apparently, he did not, for if he had we must presume that a significant discrepancy would have been apparent much earlier, perhaps with enough warning to have saved the company.

The second example of potentially high-value information is that the reportedly strong performance of the chain was at odds with the experience of some other branded hospitality chains and, according to a number of customer anecdotes subsequently reported in the press, very much at odds with the customer experience. Some customers have openly questioned whether Mr. Johnson can possibly have visited any of the rapidly expanded number of Patisserie Valerie outlets. They suggest that had he done so, he would have seen at first hand practically empty sites next to other relatively prosperous venues, suggesting that the Patisserie Valerie value proposition was either not as strong as believed or was not being delivered reliably or to a high enough standard.

We do not know and cannot confirm whether these reported experiences were in fact representative of the majority of the Patisserie Valerie estate, but let us assume for the moment that they could have been. We observe that the erosion of a business’ value proposition is itself a Strategic Warning that is frequently a causal factor in declining cash flow. It does not seem impossible that, in view of Mr. Johnson’s previous track record, the board had assumed that the strategy for rapid expansion of the chain was bound to succeed. Individuals in the business, aware of an expectation of success, may have initially sought to “massage” the numbers to soften bad news, similar to the example of the sales manager cited above. Then, when the bad news kept coming or worsened, the deception became ingrained. Had “from the field” observations been seen to be at odds with reported sales, this could have been a powerful, high-value early warning signal.
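
To illustrate how such a signal might be monitored in practice, here is a deliberately simple Python sketch with invented figures: reported branch sales are compared against independently observed customer counts from site visits, and branches whose implied spend per observed customer is far above their peers are flagged for a closer look.

```python
import statistics

# Invented illustrative data: reported weekly sales per branch (£000) alongside
# independently observed customer counts from site visits.
branches = {
    "Leeds":      {"reported_sales": 42, "observed_customers": 2100},
    "Bristol":    {"reported_sales": 39, "observed_customers": 1950},
    "Manchester": {"reported_sales": 45, "observed_customers": 2200},
    "Camden":     {"reported_sales": 44, "observed_customers":  700},   # reported strong, observed quiet
}

# Reported sales implied per observed customer; a branch far above its peers is
# a candidate high-value warning indicator worth investigating.
ratios = {name: b["reported_sales"] * 1000 / b["observed_customers"]
          for name, b in branches.items()}
typical = statistics.median(ratios.values())

for name, ratio in sorted(ratios.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <-- investigate" if ratio > 1.5 * typical else ""
    print(f"{name:<11} £{ratio:5.1f} reported per observed customer{flag}")
```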

This, we know and accept, is speculation. We indulge in it only to illustrate some important principles. Firstly, the importance of high-value indicators for effective Strategic Warning cannot be overstated. It is a challenge for boards to identify and monitor such indicators, but one that they have an absolute duty to confront. Secondly, this case illustrates the key value to boards and directors of “foraging for surprise” and paying heed to that feeling. Surprise is itself an indicator that something is inconsistent with our mental models of reality. Boards and directors need to develop an acute sense that surprise is a clue to the existence of risk blindness.

*Note: Following administration and some closures the Patisserie Valerie business was sold to new owners by administrators and continues to trade.