To the Editor:
We congratulate Nanji et al.1 on their recent prospective, observational study defining the frequency of medication errors (MEs) and potential adverse drug events (ADEs) in the operating rooms of the Massachusetts General Hospital, Boston, Massachusetts. We read this article with great interest, particularly given the sensational headlines it has generated in the mainstream media because the reported incidence of MEs was much higher than previously described. We must take this information seriously and identify methods for reducing MEs and ADEs; however, because of the effort and resources required to address such issues, we must also question the validity and consistency of these data and the conclusions they have generated.
To examine the accuracy of the measured ME rate, we must begin with the definition of ME used by Nanji et al. In adapting the definition to the perioperative setting, the authors combined a commonly used definition of ME with one taken from an article on medical errors, not MEs.2 In this light, the authors’ definition of ME becomes one of either ME or medical error, which may have contributed to the broad and in some instances counterintuitive examples of ME given in this study. For instance, failure to document intubation or to check blood pressure before induction, although clearly errors in and of themselves, would not be considered MEs by most physicians. Likewise, the conclusion that the increased incidence of MEs in this study compared with historical observations is due to “provider reluctance to self-report errors or failure of providers to recognize errors” is not adequately substantiated, given this changed, and in our opinion flawed, definition. Furthermore, there is no indication that clinical context was considered in these definitions. A responsible anesthesiologist not only considers the patient’s current condition but also anticipates future stimuli. What an observer may deem a delay in therapy (e.g., “7-min delay in administration of ephedrine,” table 5) may in fact be an intentional medical decision based upon the current and anticipated future condition of the patient. Given the breadth of this definition and the failure to account for clinical context when recording MEs, the authors report a higher incidence than would have been noted with standardized definitions, and we are not convinced that the definitions used are appropriate. We should be cautious about accepting the reported results as actionable within this framework.
In addition to the definitions applied, we are also concerned about the methods used to detect MEs/ADEs. Medical simulation, heralded as an innovative approach to promoting patient safety, teaches that observation of an error alone is insufficient to generate effective solutions and behavioral change. Watching an error occur without asking “why?” and then proposing a solution is analogous to debriefing without allowing participants to speak. Unfortunately, the study at hand appears to have used this methodology to conclude that “point-of-care bar code–assisted anesthesia documentation systems” can “eliminate” up to 17 and 25% of MEs and ADEs, respectively. We believe this conclusion to be overreaching, as the authors overlooked the impact of frames on decision-making. Such oversimplifications are attractive but potentially costly. In a Joint Commission publication, Chassin and Loeb3 cited the failure “to resist the temptation to simplify” as a frequent impediment to safety efforts in health care.
Well-designed solutions are targeted, people-centric solutions that embrace the complexity of our healthcare system and behavioral psychology. Processes should be designed to reduce opportunities for workarounds, not to reinforce old habits. As an example, the authors state that, “In most instances where the labeling system was not used, manual sticker labels were available, and the provider used those instead.” The fact that providers chose manual stickers over the new labeling system attests to the flaw in adopting that technology as the solution. Technology and processes should be so well designed that no workaround is needed. With the rising cost of providing quality health care in America, we should be cautious when recommending technology-based interventions. Process-based interventions such as “heavy user training,” as the authors suggested, are an expensive cure when the technology in question does not work intuitively.
In an article on Design Thinking in Harvard Business Review, Brown4 stated, “Innovation is powered by a thorough understanding, through direct observation, of what people want and need in their lives and what they like or dislike …” With ever-shrinking resources, solutions should be tailored, nuanced, and people-centric, taking a “holistic design approach” that begins by engaging frontline clinicians in dialogue. An observational study such as this is important in furthering our understanding of MEs/ADEs; however, accurate categorization, reporting, and deep exploration of each observed error are critical as we develop sustainable change together.
The authors declare no competing interests.