You are specialised in Human Reliability Assessment. What is it exactly?
Safety analysis is carried out by two interrelated and complementary methods: deterministic and probabilistic. The first, deterministic safety analysis (DSA), analyses all local hazards in depth; the second, probabilistic safety analysis (PSA), gives a more general (global) risk characterization of the system, consisting of hardware, software, environment and liveware (people and organization).
Human Reliability Assessment has become a very important part of PSA. Historically, the focus of PSAs was on modelling hardware and its impact on the plant safety level. In the last few decades, PSAs have evolved to pay much greater attention to human error.
Isn’t all error human?
Well yes, if one thinks of human intelligence as conceiving, designing, producing, maintaining and using a piece of equipment or software, then indeed all error can be considered human-made, so to speak.
Indeed, the human element is inevitable in any complex system, be it socio-technical, economic or political. Humans initiate any system's creation and are responsible for its design basis and, beyond that, for every interaction in it, whether passive, active or fully automated.
Therefore, the suspicion that “human error” is the root cause of any accident or disaster is not without reason. But often this conclusion is the result of superficial analysis and a desire to assign blame without thoroughly investigating the context in which the unwanted event arose and occurred in the system.
The approach of Human Reliability Assessment is to prevent human error, and to quantify both its probability of occurring and its potential severity, through a holistic context evaluation.
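To make the quantification side concrete, here is a minimal sketch loosely following a SPAR-H-style scheme, in which a nominal human error probability (HEP) is scaled by performance shaping factor (PSF) multipliers that encode the context (stress, ergonomics, time available, and so on). The nominal value and multipliers below are illustrative assumptions, not values from the source.

```python
# Sketch of a SPAR-H-style human error probability (HEP) calculation:
# a nominal HEP is scaled by performance shaping factor (PSF)
# multipliers. All numeric values here are illustrative assumptions.

def hep(nominal: float, psf_multipliers: list[float]) -> float:
    """Nominal HEP adjusted by the product of PSF multipliers,
    with an adjustment (as in SPAR-H) that keeps the result below 1
    when three or more PSFs are negative (multiplier > 1)."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if sum(1 for m in psf_multipliers if m > 1) >= 3:
        # Adjustment formula for several simultaneous negative PSFs
        return (nominal * composite) / (nominal * (composite - 1) + 1)
    return min(nominal * composite, 1.0)

# Illustrative case: a diagnosis task (nominal HEP 0.01) under high
# stress (x2), poor ergonomics (x10) and nominal time available (x1)
print(hep(0.01, [2, 10, 1]))  # 0.2
```

The point of the sketch is the holistic part: the same nominal task becomes an order of magnitude more error-prone once its context is factored in.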
What is a holistic context evaluation?
A holistic context evaluation takes into account more than human behaviour or attitude: it places them within the dynamic interactions between humans, technology, organisations and environment. In this systems approach, what matters is not “who blundered, but how and why the defences failed” (Reason, 2000).
Its purpose is to trace the symptoms and causes of an unwanted event over time, in depth and holistically, without missing or neglecting factors that affect the risk of using the system or make it unfriendly to the user.
So how do we know how and why the defences failed?
The experience of severe accidents in complex multi-barrier safety systems teaches that they result from a combination of several failures, mistakes, violations or unforeseen circumstances.
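A short calculation shows why several concurrent failures are needed: if the barriers fail independently, the probability that all of them fail on the same demand is the product of the individual failure probabilities. The numbers below are illustrative assumptions; note that a common-cause failure defeating barriers together would make the true probability much higher, which is one reason HRA examines context rather than treating actions as independent.

```python
# Sketch: in a multi-barrier (defence-in-depth) system, assuming
# independent barriers, a severe accident requires every barrier
# to fail, so the combined probability is the product of the
# individual failure probabilities. Numbers are illustrative.

def all_barriers_fail(probs: list[float]) -> float:
    """Probability that every barrier fails, assuming independence."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Three barriers, each failing 1 time in 100 on demand:
# the combined probability is on the order of one in a million.
print(all_barriers_fail([0.01, 0.01, 0.01]))
```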
Therefore, the timely detection of deviations in the system's operation is of paramount importance, so that the operator can respond promptly and in the best possible way.
As a consequence, before looking for root causes (equipment failures, circumstances or human interference), we must consistently follow the chronology of the accident from the symptoms (cues – signals, symbols, signs, …) back to the root causes.
In the symptom-based approach, recognizing each symptom is a mental process involving individual cognition, communication within the group of operators, decision-making, checking and recovery. The symptom-based approach is now standard in nuclear accident management.
The study of failure events and erroneous actions in high-risk systems requires a comprehensive, systematic, system-level approach to attain the highest standards of safety, which is both a value and an imperative in the nuclear industry.