Toys R Us' Most Important Lesson for Boards

If you are a parent of a certain age, part of you was probably a bit saddened by the recent demise of Toys R Us, a store where you likely spent a lot of time back in the day.

However, as this once-great retailer prepares to go into liquidation, it also offers us valuable reminders about the causes of organizational failure – and how they can be avoided.

External Causes of Failure

In our client education courses, Britten Coyne makes the point that the external trends which give rise to strategic threats often follow a common causal pattern of four phases (albeit with multiple feedback loops between them).

The first is technological change. In the case of Toys R Us, the most important developments included the birth of the internet, the development of online shopping businesses, the spread of broadband, and the arrival of advanced gaming consoles in 2001, Facebook in 2004, YouTube in 2005, smartphones in 2007, Instagram in 2010, and Snapchat in 2011.

Technological change eventually leads to economic changes.

In the case of the technologies described above, these changes have been jarring. The launch of new business models has sharply increased competition and uncertainty in many industries, including toys. With their margins under increasing downward pressure, companies have had to continuously cut costs, which has left fewer employees with much more to do and less free time for non-work tasks.

Technological change also triggered repeated shifts in children’s interests away from traditional toys and towards online entertainment, gaming, and social media that did not require a physical distribution network (the exception was toys tied to major movie franchises, like Star Wars and Marvel).

Economic change eventually produces social changes.

In the case of Toys R Us, critical social changes included time-short parents increasingly turning to online and superstore (e.g., Walmart and Target) shopping, where in one stop they could purchase groceries and other items, including toys. This further increased the downward pressure on margins at traditional toy retailers, even as social changes reinforced the falling economic demand for traditional toys.

At the end of this causal chain comes political change.

In the case of Toys R Us, perhaps the most important has been the taxation of online sales. For many years, these were effectively tax free, which, in addition to convenience, created another incentive to avoid purchases at physical toy stores.

Internal Causes of Failure

Our research has found three critical organizational sources of strategic failure: (1) the failure to anticipate new threats; (2) the failure to accurately assess the dangers they potentially represent, and how fast they could materialize; and (3) the failure to adequately adapt to them in time.

The available evidence suggests that Toys R Us management anticipated the new threats they faced. It was not as though the environment was failing to provide clear signals, including the disruption of bookselling by Amazon’s arrival, increasing toy sales at Walmart and Target superstores, the closure of many independent toy retailers, and the bankruptcy of FAO Schwarz in 2003.

Whether Toys R Us managers accurately assessed the danger posed by these emerging threats is hard to say, as much of the evidence on this point is located in documents that remain company confidential. However, Toys R Us’ online alliance with Amazon in 2000 suggests that they appreciated the danger posed by at least some of the new threats they faced.

Unfortunately, the history of organizational failure is filled with stories of timely anticipation of new threats and accurate assessments that came to naught because of inappropriate or poorly implemented adaptations, or initiatives that were too long delayed. The public record suggests that this may well have been the case for Toys R Us.

The Amazon alliance was not successful and in 2004 Toys R Us sued Amazon to force its termination. Toys R Us launched its own website in 2006, by which time Amazon’s dominance and growing economies of scope were well-established. Toys R Us also continued to maintain a relatively large number of traditional “big box” stores, often in malls in which many other retailers were failing (which decreased shopper visits).

While media coverage has focused on the firm’s recent bankruptcy and impending liquidation, perhaps the most interesting chapter in the Toys R Us story played out in 2005 and ended with the company being sold to a trio of private equity firms for $6.6 billion, an 8% premium over its stock price.

In our work with clients, we emphasize the critical importance of boards and management teams maintaining their situation awareness of evolving time dynamics – specifically, the relationship between the remaining time before an evolving strategic risk reaches one or more thresholds and becomes existentially dangerous, and the time still required to implement adequate adaptations to it.

One underappreciated aspect of this approach is that it can reveal a situation in which no more “safety margin” is left, and it is clear that an evolving strategic risk will become an existential danger before an adequate response can be implemented.

At this point, the rational choice for a board is to sell or merge the company, to maximize the value of its shareholders’ investment. This approach can be very successful (in hindsight if not always foresight), if it is undertaken while there is still considerable market uncertainty about future developments, and widely varying beliefs about the potential effectiveness of various options for responding to them.

While never an easy choice, cases like Toys R Us can help management teams and boards to better appreciate that it is sometimes the right one to make.


Robust, Resilient, and Adaptive Organizations

We frequently read articles encouraging organizations to be robust, resilient, and adaptive. But what do these three terms mean?

We’ll start by acknowledging that there are multiple definitions out there, as well as articles that seem to use some of these terms interchangeably. However, we’re old school, and believe that careful use of words promotes clear thinking. So with that in mind, we offer the following short explanations of these concepts.

“Robust” strategies and plans are expected to achieve their goals under a wide range of possible future conditions. In practice, robustness arises from three underlying choices (a minimal sketch in code follows the list):

  • Design of contingency plans for the most important future situations that have been anticipated (recognizing that in a complex adaptive system, it is impossible to anticipate all possible contingencies);

  • Establishment and monitoring of indicators to provide early warning when these new situations are developing; and

  • Establishment of criteria that trigger implementation of contingency plans.
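
As a minimal sketch of how these three choices might fit together – the class, the indicator, and the 30% trigger level are all hypothetical illustrations, not a prescribed system:

```python
# A minimal, hypothetical sketch of the three robustness choices above:
# pre-designed contingency plans, monitored early-warning indicators,
# and explicit criteria that trigger implementation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Contingency:
    situation: str                   # anticipated future situation
    indicator: Callable[[], float]   # early-warning indicator to monitor
    trigger_level: float             # criterion that triggers the plan
    plan: Callable[[], None]         # pre-designed contingency plan

def monitor(contingencies: list[Contingency]) -> None:
    """Check each indicator and implement any plan whose criterion is met."""
    for c in contingencies:
        if c.indicator() >= c.trigger_level:
            print(f"Trigger reached for '{c.situation}' - implementing plan")
            c.plan()

# Illustrative example: watch the online share of category sales.
watchlist = [
    Contingency(
        situation="accelerating shift to online channels",
        indicator=lambda: 0.32,      # stand-in for a real data feed
        trigger_level=0.30,
        plan=lambda: print("Execute e-commerce investment plan"),
    ),
]
monitor(watchlist)
```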

“Resiliency” refers to how much an organization’s performance declines in response to a negative external event, and how long it takes to return to the previous performance level.
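
One common way to make this definition concrete – a formalization borrowed from the resilience-engineering literature (sometimes called the “resilience triangle”), not from the text above – is to measure the total performance lost between the shock and full recovery:

$$ L = \int_{t_0}^{t_1} \left( P_0 - P(t) \right) dt $$

where $P_0$ is baseline performance, $t_0$ is the moment the negative event hits, and $t_1$ is the moment performance returns to $P_0$. A shallower decline and a faster recovery both shrink $L$; on this view, greater resiliency simply means a smaller cumulative loss.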

Resiliency becomes important when robustness fails.

Resiliency is a function of many interacting factors, including, for example:

  • Redundancy and modular design to contain non-linearities;

  • Maintenance of resource reserves (hyper-efficiency usually comes at the cost of resiliency);

  • Strong organizational capacities for problem detection and problem solving;

  • Structural choices, like the establishment of incident management teams; and

  • Regular crisis simulation and response training.

“Adaptability” refers to an organization’s ability to maintain a high level of performance over time, even as its environment evolves. It is a much broader and more fundamental concept than either robustness or resiliency, both of which it includes.

Adaptability is one of evolution’s three core metrics, along with effectiveness and efficiency. Tradeoffs between them are inescapable. The basic organizational processes that drive adaptability are also well known (a toy sketch in code follows the list):

  • Feedback: The collection or provision of information about the extent to which an organization is achieving its critical goals, and the extent to which those goals are sufficient to ensure survival and success in the current environment.

  • Variation: Creation of options (in effect, hypotheses) for changing the current situation.

  • Selection: Piloting options (in effect, experiments), evaluating their initial results, and choosing which to scale up.

  • Retention: Changing organizational processes, systems, structures, and norms to permanently incorporate lessons learned from the most successful pilots.
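
As a toy sketch of these four processes – the numeric “practice”, the goal, and all parameters are invented for illustration, reducing adaptation to a simple evolutionary hill-climb:

```python
# A toy illustration of the feedback-variation-selection-retention loop,
# with an organizational "practice" reduced to a single number.

import random

GOAL = 10.0

def feedback(practice: float) -> float:
    """Score how well the current practice achieves the goal (higher is better)."""
    return -abs(GOAL - practice)

def variation(practice: float, n_options: int = 5) -> list[float]:
    """Generate candidate changes to the current practice (hypotheses)."""
    return [practice + random.gauss(0, 1) for _ in range(n_options)]

def selection(options: list[float]) -> float:
    """Pilot the options (experiments) and choose the best performer."""
    return max(options, key=feedback)

practice = 0.0
for _ in range(50):
    candidates = variation(practice) + [practice]  # incumbent competes too
    practice = selection(candidates)               # retention: winner becomes the norm

print(f"Adapted practice: {practice:.2f} (score {feedback(practice):.2f})")
```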

Finally, while these three concepts are easy to describe, the increasing rate of organizational failure suggests that, as the environment has become more complex and uncertain, designing robust plans, building resilient organizations, and maintaining a strong capacity for adaptation have all become more challenging.


Carillion: Old Lessons from a New Failure


A new year and a new corporate death: once again, an organisation employing tens of thousands of people, with revenues measured in billions, and, according to its last published corporate accounts, in rude health, has collapsed into insolvency with indecent haste – to the apparent astonishment of all the wise heads who are expected to know and understand these matters: boards of directors, pension trustees, auditors, financial regulators, and the government itself.

Consider these few facts: in March 2017 the board of directors approved Carillion’s statutory accounts for the year ended 2016. The same accounts, of course, were given a clean bill of health by Carillion’s auditors, KPMG. Somehow, slightly more than three months later, the same board was moved to issue a profit warning and part company with the CEO who, only a short while before, had received generous “performance related” bonuses. By the end of September 2017 the board was presenting the results of a “contracts and strategic review”, which identified, amongst other things, the urgent need to dispose of assets and implement cost reduction measures to buttress operational cash flow. The review highlighted the results of accepting contracts where there was a “high degree of uncertainty around key assumptions”. By January 2018 the board was petitioning the Government for an emergency cash bailout. When this was refused, the company was placed into insolvency procedures with reported debts of approximately £5 billion and cash reserves of £29 million.

One of our favourite observations is that risk blindness is the result of familiarity with imperfect information. Perhaps the board of Carillion were collectively victims of risk blindness. On the other hand, if reports in the press are true, the board of Carillion had for some years made a practice of raising cash from asset sales to fund both dividend payments and executive compensation. The former, it has been suggested, falls within the definition of paying dividends out of capital, which is illegal in the UK. One assumes the directors of Carillion were individually and collectively aware of the potential illegality of their decisions. If so, it did not stop them.

Consequently, it might be tempting to regard the case of Carillion, along with many other corporate failures, as an exceptional example of a failure in governance. As the Carillion story unfolds and the wheels of bureaucracy turn to initiate any number of official enquiries, perhaps more evidence will emerge of individual or collective wrongdoing. In some respects, though, that outcome would obscure the general lessons the Carillion experience holds for all company directors and senior executives whose responsibilities include the governance of risk.

According to the Carillion 2016 annual report and accounts, the board of directors maintained a rigorous and robust risk management process. The report identifies what the board considered to be the principal risks facing the company. Of particular interest is that the board judged none of the risks to the company’s future prosperity to be graded higher than medium on a “net” basis, i.e. after taking into account potential mitigation actions. It scarcely needs saying that this supposedly robust system was failing dramatically even as the 2016 accounts were being approved, and had probably been failing for some significant period before that.

As we have observed and commented upon many times, standard approaches to risk management in many organisations are not only inadequate but frequently dangerously misleading with regard to threats to a company’s existence. A board’s very familiarity with this misleading information leads directly to blindness to potential or actual existential threats. This is a lesson that all boards and directors can learn from Carillion, as well as from many other equally painful examples.


Categorizing Different Approaches to Classifying Risks

Classification is one of the most important functions humans perform to speed our cognitive processing of the overwhelming amount of external stimuli we absorb every day.

In the world of risk, many different classification schemes are employed, both formal and informal. But after many years of interacting with them, we remain unsure of whether they clarify or further confuse many of the underlying risk governance and management issues facing boards and leadership teams.

In this note, we’ll try to categorize some of the different risk classification schemes we’ve encountered, and highlight the distinctions they seek to draw.

Broadly speaking, various classification schemes can be grouped into four categories:

  • Potential Causes of Future Risk-Related Events

  • Risk-Related Events

  • Consequences of Risk-Related Events

  • Other Approaches

Potential Causes of Future Risk-Related Events

We have often noted that in complex socio-technical systems causal reasoning is difficult, because of the dense web of interrelationships they contain, many of which are characterized by time delays and non-linearities.

However, we often see analyses that classify risks in terms of broad causal forces, such as technological change; environmental change (from the macroscopic – e.g., climate change – to the microscopic – e.g., antimicrobial resistance); economic and military developments; demographic and social forces; and political and regulatory trends. The World Economic Forum’s annual Global Risks Report is a good example of this “risk event causes” approach.

Risk-Related Events

This classification scheme is the most traditional, as it is closely tied to frequentist statistics and actuarial science methods that facilitate the quantification, pricing, and transfer of certain types of risk. A good example of this approach is “A Common Risk Classification System for the Actuarial Profession” by Kelliher et al.

Typical of this approach is the division of potentially harmful events into business, market, credit, operational, and, more recently, cyber risks.

However, as was made painfully apparent in the 2008 financial crisis (not to mention the history of war and politics), this approach suffers from four key shortcomings.

First, not all risks can be easily represented by discrete events; some take the form of gradually accumulating forces that eventually pass a tipping point, causing adverse consequences to accelerate.

Second, the discrete event approach often struggles with “rare event” or “tail” risks, for which historical experience is largely lacking.

Third, capturing the interrelationship between various risks continues to be a challenge, especially in quantitative models.

And fourth, it neglects the fact that complex socio-technical systems are usually characterized by ongoing evolution and the emergence of new phenomena, which reduce the usefulness (or at least the accuracy) of the past as a guide to the future.

Consequences of Risk-Related Events

This is perhaps the broadest approach that is used to classify risk, though at the same time the least consistent. It includes relatively organized approaches to classifying the consequences of risk events (e.g., revenue reduction, cost increase, fall in asset value, and/or increase in liability value), as well as individual categories that aren’t part of an integrated system of consequences (e.g., liquidity risk, reputation risk, strategic/existential risk, etc.).

Risk classification based on consequences also raises questions about sequencing – e.g., what counts as a first-, second-, or third-order impact. For example, a serious cyber event could lead to weakening sales volumes, pricing pressures, and/or rising costs, which in turn would depress margins and eventually lead to liquidity problems.

Other Approaches

Distinct from logically sequenced classification schemes based on causal forces, risk events, and subsequent consequences are a number of others that take a different approach.

One example is the distinction that is often made between risk, uncertainty, and ignorance. Events characterized as “risks” can be described statistically, and thus priced and usually transferred. In contrast, “uncertainties” – which cannot be described using frequentist statistics – are both far more common and impossible to transfer via derivative and insurance markets (though they can sometimes be hedged via other means). And ignorance – the realm of Donald Rumsfeld’s famous “unknown unknowns” – is ever present, but of unknowable scope and potential danger.

Another example is the characterization of potential risk events in terms of their relationships to other risk events, and thus their potential to trigger “risk cascades” with non-linear impacts.

A final example is the characterization of risks according to either the velocity at which they are maturing, or the net difference between the time remaining before a risk matures and the time required to formulate and execute an adequate organizational response.

All of these risk classification approaches have their strengths and weaknesses; each highlights certain aspects of risk, but sometimes at the price of blinding us to others. It is for that reason that we recommend using a combination of approaches – or different frames – when analyzing the risks facing an organization.

This approach almost always produces richer board and management team discussions about risks, as well as superior decisions about how best to govern and manage them.
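
As a concrete illustration of multi-frame tagging, here is a minimal sketch in code; the class, fields, and example values are ours and purely illustrative, not a standard taxonomy:

```python
# A minimal, illustrative sketch of tagging one risk under several of the
# classification frames discussed above, so it can be examined through
# more than one lens.

from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    causes: list[str] = field(default_factory=list)       # causal-forces frame
    event_type: str = ""                                   # risk-event frame
    consequences: list[str] = field(default_factory=list)  # consequences frame
    knowledge_state: str = "uncertainty"   # "risk" | "uncertainty" | "ignorance"
    velocity: str = "unknown"              # how fast the risk is maturing

cyber = Risk(
    description="Serious cyber event at a critical supplier",
    causes=["technological change"],
    event_type="cyber",
    consequences=["weakening sales", "margin pressure", "liquidity strain"],
    knowledge_state="uncertainty",
    velocity="fast",
)
print(cyber)
```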

Probability versus Plausibility in the Assessment of Uncertainty

In their paper “Pursuing Plausibility”, Selin and Pereira note that “our alarming inability to grapple with manifold uncertainty and govern indeterminate complex systems highlights the importance of questioning our contemporary frameworks for assessing risk and managing futures.” Here at Britten Coyne, we couldn’t agree more.

In this note, we’ll look at two concepts – plausibility and subjective probability – that have been put forth by various authors as means of approaching the problem of uncertainty.

Let’s start with some brief definitions. We use the term “risk” to denote any lack of certainty that can be described statistically. Closely related to this is traditional or “frequentist” probability, which is based on analysis of the occurrences and impacts of repeated phenomena, like car accidents or human heights and weights.

In contrast to risk, and as described by writers like Knight and Keynes, uncertainty denotes any lack of certainty in which some combination of the set of possible outcomes, their frequency of occurrence, and/or their impact should they occur cannot be statistically described based on an analysis of past occurrences. For example, the future impact of advances in automation, robotics, and artificial intelligence on labor markets cannot be assessed on the basis of history.

However, we are still left with the need to make decisions in the face of this uncertainty. This gives rise to three different approaches. The first is what Keynes called “conventions” or assumptions about the future that are widely used as a basis for making decisions in the present. The most common of these is that the future will be reasonably like the present.

As we have repeatedly noted, this assumption is often fragile, especially in complex adaptive socio-technical systems, which give rise to emergent behavior that is often non-linear (i.e., aggregate system behavior that arises from the interaction of agents and cannot be predicted on the basis of their individual decision rules). This is especially so in our world of increasingly dense network connections, which accentuate the behavioral impact of social observation and thus the system’s tendency toward sudden non-linear changes that negate conventional assumptions.

The second is the use of subjective degrees of belief in different possible future outcomes, expressed in the quantitative language of probability. This approach goes back to Thomas Bayes, and is related to the later work of Leonard Savage on subjective expected utility. The possible futures themselves can be developed using a wide variety of methods, from quantitative modeling to qualitative scenario generation or pre-mortem analyses. More broadly, all of these methods are examples of counterfactual reasoning: the use of subjunctive conditional claims (e.g., employing “would” or “might”) about alternative possibilities and their consequences.

The third approach to uncertainty employs the qualitative concept of plausibility or its inverse, implausibility. For example, one approach to future scenario building suggests judging the reasonableness of the results (either individually or collectively) by their plausibility.

A common observation regarding plausibility is the difficulty most people have in defining it, and distinguishing it from degree of belief/subjective probability.

In “A Model of Plausibility”, Connell and Keane provide perhaps the best examination of how, as a practical matter, human beings implicitly define plausibility (using both a quantitative model and human experimental results).

Their summary is worth quoting at some length: “A plausible scenario is one that has a good fit with prior knowledge…(1) Degree of Corroboration: a scenario should have several distinct pieces of prior knowledge supporting any necessary inferences…(2) Degree of Complexity: the scenario should not rely on extended or convoluted justifications…(3) Extent of Conjecture: the scenario should avoid, where possible, the introduction of many hypotheticals.”

Put differently, a scenario’s plausibility will increase “if it has minimal complexity and conjecture, and/or maximal corroboration.” They go on to create an index in which plausibility equals one minus implausibility, with the latter defined as the extent of complexity divided by (the extent of corroboration less the number of conjectures used).
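
Transcribed into notation (our rendering of the prose above, treating the three “extents” as simple quantities):

$$ \text{plausibility} = 1 - \text{implausibility} = 1 - \frac{\text{complexity}}{\text{corroboration} - \text{conjectures}} $$

On this index, a scenario with little convoluted justification (low complexity), several distinct pieces of supporting prior knowledge (high corroboration), and few hypotheticals (few conjectures) scores as highly plausible.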

In another paper (“Making the Implausible Plausible”), Connell shows how the perceived plausibility of a scenario can be increased when it is represented as a causal chain or temporal sequence.

As you can see, many of the factors which make a scenario seem more plausible to an audience are also ones that would likely increase the same audience’s estimate of its subjective probability.

So is there any reason to use plausibility instead of subjective probability?

Writing in the 1940s, the English economist George Shackle proposed what he called “Potential Surprise Theory” as an alternative to the use of subjective probability when thinking about potential future outcomes.

Shackle’s objection to the use of subjective probability was what he termed “the problem of additivity” in situations of uncertainty, where the full range of possible outcomes is unknown. Assume that at first you identified four possible outcomes and assigned them probabilities of 50%, 35%, 10%, and 5%, following the normal requirement that the probabilities of a given set of outcomes sum to 100%.

What happens if you later identify two new potential outcomes? Logically, the subjective probabilities of the new set of six possible outcomes should be adjusted so that they once again sum to 100%. But if those outcomes are generated by complex socio-technical systems (as is usually the case for many business and political decisions), causal relationships are only partially understood, and often confusing (e.g., because of time delays and non-linearities). This makes it very hard to adjust subjective probabilities on any systematic basis.

Moreover, given the presence of emergence in complex adaptive systems, the full set of possible outcomes that such systems can produce will never be known in advance, making it impossible for the probabilities associated with those possibilities that have been identified to logically sum to 100%.

Instead of quantitative probabilities, Shackle suggested that we focus on the degree of implausibility associated with possible future outcomes, as measured by the degree of surprise you would feel if a given outcome actually occurred. Importantly, and unlike probability, the extent of your disbelief (potential surprise) in a set of hypotheses does not need to sum to 100%, nor does your degree of disbelief in individual hypotheses need to be adjusted if additional hypotheses are added to a set.
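
A toy illustration in code, using the figures above (our construction, not Shackle’s notation), makes the contrast concrete:

```python
# Subjective probabilities must be renormalized whenever new outcomes are
# identified; potential-surprise scores need no such adjustment.

probs = {"A": 0.50, "B": 0.35, "C": 0.10, "D": 0.05}   # sums to 1.0

# Two newly identified outcomes force every existing estimate to change.
# We renormalize naively here, but in a complex system there is no
# principled basis for deciding how much each estimate should shrink.
probs.update({"E": 0.08, "F": 0.04})
total = sum(probs.values())
probs = {k: v / total for k, v in probs.items()}
assert abs(sum(probs.values()) - 1.0) < 1e-9           # forced back to 100%

# Potential surprise: 0 = not surprised at all, 1 = maximally surprised.
# Scores are independent and need not sum to anything, so adding new
# hypotheses leaves the existing scores untouched.
surprise = {"A": 0.0, "B": 0.1, "C": 0.6, "D": 0.8}
surprise.update({"E": 0.7, "F": 0.9})                  # A-D unchanged
```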

Just as important, in a group setting it is far easier to delve into the underlying reasons for an estimated degree of implausibility/potential surprise (e.g., the complexity of the logic chain, number of known unknowns and conjectures about them, etc.) than it is to do this for a subjective probability estimate (just as an analyst’s degree of uncertainty about an estimate is easier to “unpack” than her or his confidence in it).

In sum, most of the decisions we face today are characterized by true uncertainty rather than risk. Rather than defaulting to subjective probability methods when analyzing these decisions, managers should consider complementing them with an approach based on implausibility and surprise. Ideally, the two methods should arrive at the same conclusion; their real value, however, lies in situations when they do not agree, which forces boards and management teams to more deeply explore the underlying uncertainties confronting them.