The assessment of risk is enshrined within an array of safety and environmental legislation. Risk assessment is typically framed as a function of the consequence and likelihood of the risk, and is prefaced with a requirement that the process be undertaken consultatively.
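To make the "function of consequence and likelihood" concrete, the sketch below shows a hypothetical 5×5 semi-quantitative risk matrix. The descriptors and score thresholds are illustrative assumptions only, not values from any standard or from a particular organisation's matrix:

```python
# Illustrative only: a hypothetical 5x5 semi-quantitative risk matrix.
# Descriptors and thresholds are assumed for illustration.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"insignificant": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_rating(likelihood: str, consequence: str) -> str:
    """Combine likelihood and consequence scores into a qualitative rating."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(risk_rating("possible", "major"))  # 3 x 4 = 12 -> "high"
```

The numeric scores lend the process an air of objectivity, but as the article goes on to argue, the likelihood input itself remains a judgement call.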
Interestingly, ISO 31000 (2018) defines likelihood as the chance that something might happen as defined, determined, or measured objectively or subjectively, and acknowledges that “… risk analysis may be influenced by any divergence of opinions, biases, perceptions of risk and judgements.” (The author would probably go a step further and advocate that the risk analysis process will be influenced by a divergence of biases and perceptions.)
While it is accepted that the process is interwoven with subjectivity and opinion – especially when trying to assess the likelihood of consequences that have not yet happened, or occur only rarely – this subjectivity has the potential to significantly skew the assessment result.
There are a number of ‘human tendencies’ that we, as facilitators of risk assessments, need to be aware of when assessing likelihood:
- We try to hang our (hard)hat on hard data – There is a tendency to grasp for incident data at the expense of ensuring the context for that data matches our own (as if hard statistics add further validation). To use the data, we need to understand its context and ensure it aligns with ours. For example, while there may be a history of plant roll-overs in the broader industry, there may not have been any incidents in your organisation, potentially due to a robust operator ‘verification of competency’ process and the organisation’s upholding of preventative maintenance; to consider only the industry-wide likelihood would do a disservice to the current controls and misrepresent the context.
- The use of precedents – Similarly, precedents or historical incidents tend to skew our perception and prompt a short-cutting of the actual risk assessment process. With the precedent forefront in our mind, the likelihood of something happening at our site becomes aligned with the likelihood at the site where the incident occurred – but this skips over the assessment of the context and the current controls that may be in place on our own site.
- Our inherent biases – Numerous studies have identified inherent issues arising from the way our brains are individually wired. If we are personally sensitive toward a specific risk issue, possibly due to having experienced it directly, that sensitivity tends to inflate the likelihood score we assign. For example, someone who has fallen asleep at the wheel of their car may be hyper-sensitive toward fatigue when driving, potentially clouding their objectivity on fatigue-related risks.
- And the ‘likelihood’ terminologies don’t help – The language of the semi-qualitative likelihood categories (e.g. the traditional ‘rare’, ‘likely’ or ‘possible’) invites personal bias, as the words themselves are interpreted to (or actually do) mean different things to the different people involved. From our life education we each arrive with a pre-conceived idea of what certain words and phrases mean, yet quite often this differs from the person sitting next to us.
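One practical way to blunt the ambiguity of the descriptors above is for the group to agree, up front, on numeric probability bands behind each word. The bands below are purely illustrative assumptions (they are not drawn from ISO 31000 or any organisation's matrix), and each organisation would need to calibrate its own:

```python
# Illustrative annual-probability bands for likelihood descriptors.
# The band boundaries are hypothetical examples for discussion only.
BANDS = [
    ("rare", 0.0, 0.05),
    ("unlikely", 0.05, 0.25),
    ("possible", 0.25, 0.55),
    ("likely", 0.55, 0.85),
    ("almost certain", 0.85, 1.0),
]

def descriptor_for(probability: float) -> str:
    """Map an agreed numeric annual probability to its descriptor."""
    for name, low, high in BANDS:
        if low <= probability < high:
            return name
    return "almost certain"  # covers probability == 1.0

print(descriptor_for(0.3))  # -> "possible"
```

The point of the exercise is less the numbers themselves than the conversation they force: once two participants must defend “possible” as a band rather than a feeling, the divergence in their pre-conceived meanings becomes visible and can be resolved.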
The importance of applying the risk management process to its fullest, inclusive of a consultative approach, is critical when assessing risks, and especially when determining likelihood – whether we have hard data to consider or no data at all. As professionals we try to be ‘unbiased’ and ‘objective’, especially when there is a lack of hard (incident or risk) data … but really, can we ever truly achieve this?
The best approach to mitigating these issues is via a robust consultation process (sharing one’s opinions and thoughts and consolidating these to achieve an agreed position) and then by ‘pressure-testing’ the outcome of the risk assessment.
Please contact QRMC for more information.