The Swiss Cheese Model of incident causation

We’ve all experienced it – when someone makes an observation that encapsulates what we felt we knew all along but had yet to (at least so eloquently) articulate, or that would, so we charitably tell ourselves, have been obvious to us had we given the matter any thought. Paradoxically, though, such clarity of thought is often the product of deep reflection, enquiry and perception.

So it is with the so-called Swiss Cheese Model (SCM) of incident causation, formulated by the late James Reason. The model is well known in the field of safety (notably in the extractive industries, medicine and aviation) from which it originates, but many of the key principles underpinning it have applications in other domains, including corporate governance.

The SCM explained

The SCM is a model for analysing the causes of incidents that illustrates how adverse outcomes (accidents) often arise from the interaction of multiple elements in an organisational system.

A basic premise of the SCM is that humans are inherently fallible and errors are to be expected. This being so, a system needs adequate defences (barriers or safeguards) designed to prevent errors from occurring or, if they do occur, to limit or localise their impact. If a serious accident or error does occur, this is seen primarily as the result of systemic factors, i.e. a failure of multiple defences. As one author has put it in an accident context, the site of a major incident can perhaps be more helpfully considered a spectacle of mismanagement than the fault of a particular individual.

The slices

The SCM visualises an organisation’s defences against errors/accidents/threats as layers of Swiss cheese, each slice representing a different defence. Defences can include policies, procedures, processes, barriers or structures. For example, in a governance context, an organisation’s defences might include governance structures and reporting lines; the experience and expertise of its personnel; a specific corporate culture (although this can also be a hole); its various policies and procedures; its processes for collecting, analysing and reporting information/data; internal and external audit processes; delegation standards; and so on.

The holes

The distinctive holes in each slice of cheese represent the imperfections or gaps in, or problematic features of, the relevant defence. These holes – whose locations are not necessarily fixed, and which may open or close over time depending on the circumstances – may be systemic in nature (Reason called them “latent conditions”) or arise from active failures.

Latent conditions are inherent features of the system – Reason used the word “pathogen” – that can render the system vulnerable to adverse effects. They arise from the decisions made by those who can influence aspects of the design and operation of the system itself – the board, senior management, procedure writers etc. Latent conditions can introduce vulnerabilities in two ways.

First, latent conditions can create or foster an environment in which adverse events are more likely to occur. An example in a governance context could be an obsessive senior management focus on sales or revenue growth, with incentive structures aligned accordingly and less concern for ensuring that sales are profitable or fit a particular risk profile, which both encourages riskier behaviour and reduces the prospect of proper oversight. Other examples could be cost-cutting measures that lead to a loss of experienced staff, or pressure to increase productivity that encourages cuts to quality control.

Second, latent conditions can create long-lasting holes or weaknesses in the defences themselves: there are procedures, but they are not entirely fit for purpose; there is a board of directors, but none are independent; there is an internal audit function, but its purview is too restricted; there is a remuneration policy, but the incentive structure is flawed; and so on. By themselves, these do not result in adverse outcomes, but in the right conditions they can facilitate unwanted outcomes.

Active failures are the result of a person’s (or thing’s) interaction with the system and can take a variety of forms: an oversight, a lapse in judgement, a mistake, recklessness, or a deliberate or wilful violation of procedure or exploitation of a weakness.

So, for example, in the case of Australia’s largest corporate collapse, that of HIH Insurance Limited, overpaying for acquisitions, inadequate provisioning, imprudent overseas expansions and ill-conceived divestments could all be considered active failures that contributed to the company’s downfall. The systemic factors – the latent conditions – that facilitated these poor decisions included (and this is far from an exhaustive list) a lack of strategic thinking or awareness at the board level, poor risk-management systems, a corporate culture characterised by a dominant CEO to whom both the board and senior management were unduly deferential, a lack of understanding of fundamental risks and their measurement, poor governance processes and documentation, conflicts of interest and inadequate board leadership.

Defence in depth

A system may be strengthened by layering its defences: if there is a failure in one or more defences, most of the time a threat is neutralised, or its effects mitigated, by another defence in the system. Thus, even if a threat penetrates one or more defensive holes, it will often collide with a solid barrier in another defensive layer. The existence of holes in each defensive layer does not normally lead to negative outcomes. Occasionally, however, the holes in each defence align in such a way that the defences collectively prove inadequate to overcome a threat to the system, resulting in an adverse outcome or event.
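
The arithmetic behind layering is worth making explicit. On a rough sketch of our own (the model and the numbers below are purely illustrative, not drawn from Reason's work): if each layer independently lets a threat through with some probability, a breach requires every layer to fail at once, so the overall breach probability is the product of the per-layer probabilities. A short Python sketch, with a simulation as a sanity check:

    import random

    # Hypothetical probabilities that a threat finds a "hole" in each layer,
    # i.e. that the layer fails to stop it. Illustrative numbers only.
    HOLE_PROBS = [0.10, 0.20, 0.05, 0.15]

    def analytic_breach_probability(hole_probs):
        """With independent layers, a breach requires every hole to 'align'
        at once: multiply the per-layer failure probabilities together."""
        result = 1.0
        for p in hole_probs:
            result *= p
        return result

    def simulated_breach_probability(hole_probs, trials=200_000, seed=1):
        """Monte Carlo sanity check: the share of trials in which a threat
        slips through every layer."""
        rng = random.Random(seed)
        breaches = sum(
            all(rng.random() < p for p in hole_probs) for _ in range(trials)
        )
        return breaches / trials

    print(f"{analytic_breach_probability(HOLE_PROBS):.6f}")   # 0.000150
    print(f"{simulated_breach_probability(HOLE_PROBS):.6f}")  # close to 0.000150

The product shrinks rapidly as layers are added, which is the intuition behind defence in depth. The caveat, central to the SCM, is the independence assumption: latent conditions are precisely the common causes that can open holes in several layers at once, making alignment far more likely than the simple product suggests.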

Insights

Although the SCM was originally formulated to help explain how accidents in an industrial or medical context can occur, it is, by virtue of its focus on the interaction of elements in an organisational system, of broader application. Underpinning the model are a number of important insights and observations:

  • Major adverse outcomes are rarely the result of a single cause. Often, if not in the vast majority of cases, major adverse outcomes result from a series of smaller errors or issues which in isolation may not cause a major problem, but which combine, cascade or compound into something much more significant, sufficient to overwhelm the system defences. As the Paul Kelly and Kev Carmody song, familiar to many Australians, goes, “from little things, big things grow.”
  • We cannot alter the human condition, but we can alter the conditions in which humans work. Given our inherent fallibility as human beings (and thus that errors, bad behaviour and lapses in judgement are to be expected), and the inherent riskiness of many organisational activities, a key consideration is to ensure that systems have multiple defences (i.e. redundancy). So, for example, a risk-management policy should be accompanied by some process(es) for verifying compliance with it. When there is a failure, a key focus of enquiry should be on why the system failed. A myopic focus on “who” failed, or seeking to attribute fault to a singular root cause, may do little to address the systemic risk.
  • Systemic issues can often be identified and resolved before an adverse event occurs. It may be hard, or impossible, to predict when an active failure will occur – when a particular person will make an error or poor decision, when someone decides to commit an act of fraud and so on. By contrast, many systemic defects can be identified and addressed proactively. This is how high reliability organisations (HROs) – manufacturers of pharmaceuticals, nuclear power plants, air traffic control towers – can consistently operate under very demanding conditions for extremely long periods of time without incident. A full explanation of how HROs do this is beyond the scope of this article but, in summary, Weick & Sutcliffe, in their book Managing the Unexpected: Sustained Performance in a Complex World, identified five traits that HROs have in common:
    • HROs have a preoccupation with failure and continually examine their processes for potential weaknesses, listening for and acting upon, weak signals of system dysfunction;
    • HROs recognise that their activities are complex and that complex problems frequently require complex solutions. This manifests in a willingness to constantly challenge existing beliefs and mine data to establish benchmarks and assess performance;
    • HROs are cognisant that detailed knowledge of their operations exists not at the top of the organisational hierarchy, but with the people close to daily operations. HROs foster an environment of openness – of psychological safety – in which there is regular and open communication with employees, encouraging them to share concerns, ensuring their reports and suggestions are taken seriously, and providing feedback when information is shared;
    • HROs prioritise expertise over authority, cultivate diversity, and devolve and distribute decision-making authority rather than concentrating it; and
    • HROs complement their anticipatory activities by developing organisational resilience, that is, an ability to detect, respond to and contain inevitable problems.
  • The greater the number of holes in the defences, or the larger those holes, the weaker the defence and the system in general. In other words, to build a more resilient system, the focus should be on improving the defences by reducing or eliminating systemic weaknesses and by adding additional layers of defence. However, there is a paradox. Adding layers and increasing the coupling of elements in a system creates further complexity, and this complexity can itself be a latent defect that renders the system as a whole more vulnerable. Resource constraints aside, a balance must therefore be struck between risk-reduction efforts and the complexity they introduce; the sketch below illustrates this trade-off.
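
To put some illustrative numbers on that paradox, the earlier sketch can be extended with a toy common-cause term (again our construction, not part of Reason's model, and the figures are hypothetical): suppose each layer fails independently with probability p, but a shared latent condition (a culture of deference, say) defeats every layer simultaneously with probability q. The breach probability for n layers is then q + (1 − q) × p^n, which floors at q however many slices are added:

    def breach_probability(p, n_layers, q):
        """Toy common-cause model: a breach occurs if a shared latent
        condition defeats every layer at once (probability q), or if all
        n layers fail independently of one another (probability p ** n)."""
        return q + (1 - q) * p ** n_layers

    # Illustrative numbers only: p = 0.1 per layer, common cause q = 0.001.
    for n in (1, 2, 3, 5, 8):
        print(n, f"{breach_probability(0.1, n, 0.001):.6f}")
    # 1 0.100900
    # 2 0.010990
    # 3 0.001999
    # 5 0.001010
    # 8 0.001000

Past a few layers the independent term is negligible: further slices buy almost nothing, while the coupling and complexity they add may itself increase q.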

Some limitations in the metaphor and concluding observations

The Swiss Cheese Model visualises a threat as having a linear trajectory – that is, an adverse outcome is the result of a successive series of failures in system defences in response to one or more events or acts. The linearity of the diagram suggests a chain of causation – of events or breaches of defences – connected in some fashion in time or place. This, together with the suggestion that systemic or latent defects are capable of being identified and fixed, implies that elements in the organisational system are connected in a way that enables us to predict results (with varying degrees of confidence) and, based on those predictions, to prescribe solutions.

However, organisations are complex systems. They comprise a variety of separate systems, all interacting with one another, with feedback loops and similar connections to the outside world. Predicting the outcome of interactions in complex systems can be extremely difficult, if not impossible. Thus, at the time (rather than with the benefit, and bias, of hindsight), threats may not be perceptible, their impact may be unpredictable, and the means by which they might be mitigated may therefore be indeterminable.

A further issue is the difference between the system as designed, and the system in practice. The way a system – say a governance and risk-management system – is described on paper, and how it actually operates in practice, are often very different. At the end of the day, it is the system as it operates that needs to be understood (as far as this is possible) and optimised.

Effectively understanding what is not working and why, and learning from “near misses”, is exceptionally difficult. Organisational resilience requires an ability to anticipate, respond, learn and adapt to unforeseen events and circumstances. This is a topic to which we will return in a subsequent article.

Despite these limitations, the Swiss Cheese Model is a useful tool for conceptualising and thinking about both the components of a governance (or other organisational) system and evaluating failures and near misses.

Source: Johnson Winter Slattery
