Ebola lapses show lab safety protocols should factor in human error

Christmas Eve brought the unwelcome news that a lab worker at the Centers for Disease Control and Prevention may have been exposed to the Ebola virus. It was the latest in a series of similar lapses. Citing such problems, the Obama administration in October suspended some government-funded research projects involving genetic modification of viruses that have the potential to set off a worldwide epidemic. The lapses reported so far have not involved serious injuries or fatalities. But is the lack of serious harm evidence that current safety measures are effective, or are the lapses early warning signs of systemic problems?

The recent Ebola exposure at the CDC occurred just four days after a symposium on lab safety at the National Academy of Sciences in Washington. A broad cross section of experts, including me, assembled to ponder such questions, to debate the risks and benefits of the suspended research and to begin to discuss how to implement and enforce risk assessments of laboratory processes.

On one side of the debate were virologists, postdoctoral researchers and graduate students for whom the research moratorium represented a threat to their professional careers. They advocated for the public health benefits of their work and warned of the risk to America's scientific leadership posed by the moratorium. On the other side sat bioethicists, public health experts and a nervous public, all expressing concern about safety and bioterrorism.

Everyone debated: Can scientists safely genetically modify and propagate some of the most dangerous viruses on the planet? My role, as a scientist who studies human behavior and safety-related decision-making, was not to take sides on the issue but to talk about the human factor in safety precautions and lapses. In other words, my job was to help figure out how to complete a risk assessment robust enough to protect the health and safety of the general public, given the truism that "to err is human."

Nearly every major biosafety lapse in 2014 had some form of human error at its core — a sobering fact. By studying these failures, and the major and minor slip-ups that could cause breakdowns in labs, we can better understand and prepare for what can go wrong, whether it is caused by human error or intentional malfeasance.

In terms of exposure, the scientists who work directly with these viruses have the most at stake and are highly motivated to strictly observe safety measures. Many researchers believe that labs with the highest risk not only meet the highest safety standards but are "tight ships" with multiple fail-safes designed to protect against not just unintentional errors but intentional misconduct such as sabotage or theft. And yet lapses occur. Some exposures and near misses are publicly reported, but government studies suggest others are missed, for example, due to delayed onset of symptoms, misinterpreted symptoms or confusion about reporting obligations.

The discovery of serious lapses shows that the safety systems in place need significant improvement. We need systems that protect us from the consequences of human limitations while drawing on our human strengths. We can and should improve education, training and oversight to reduce the occurrence of error; something as simple as a checklist has been shown to reduce human error in hospital settings. But we also need to plan as though human error is inevitable. Research suggests that even the most experienced and knowledgeable workers sometimes cut corners, and that everyone is susceptible to distraction, fatigue and faulty reasoning.

When determining whether a particular protocol should be allowed, we must implement lab-safety programs that include risk analyses built on the assumption of human fallibility. Moreover, federal funding agencies could mandate the training and continued education of researchers at all levels — from undergraduates to scientists who are far along in their careers — in a high-quality curriculum that includes information about traits of human cognition and behavior that contribute to safety risks.

Finally, we must acknowledge that as human beings we are subject to certain biases, including being overly optimistic about our ability to control our environment. And when we make risk-based decisions on a project-by-project basis, we may underestimate cumulative risk: an institution that allows individuals to take on small, seemingly controllable risks can find that, over time, it is exposed to greater risk than it can manage.
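To illustrate with purely hypothetical numbers: if each approved project carried an independent 1 percent annual chance of a serious lapse, then across 50 such projects the chance of at least one lapse in a given year would be roughly 40 percent:

$$ P(\text{at least one lapse}) = 1 - (1 - 0.01)^{50} \approx 0.4 $$

Small per-project risks compound quickly at the institutional level.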

The imperatives of great science are not identical to those of public health and safety. That tension is an old story running through our literature and history, and it is now almost ever-present in our daily news. Given the potential life-saving impact of cutting-edge virus studies and the magnitude of the harm if a major exposure occurs, it is more important than ever to bring public safety and scientific policy together, informed by our understanding of human flaws and capabilities.

Gavin Huntley-Fenner, who has a doctorate in brain and cognitive sciences, is an Irvine-based safety consultant specializing in human behavior, risk perception and risk communication.
