In the 1960s, sociologist Everett Rogers puzzled over why farmers, when confronted with the opportunity to use superior seeds, decided to keep using the seeds they had always used. His observations formed the underpinnings of his diffusion of innovations theory, which explains how technologic innovations come to be adopted across populations.1 Diffusion of innovations is one of the foundational theories of implementation science,2 the discipline concerned with promoting the uptake of evidence-based interventions meant to improve health.3 Perhaps, similar to Rogers, anesthesia professionals have puzzled over why evidence-based interventions in perioperative care—examples include the use of active warming during general anesthesia and the use of protocols and checklists to structure team processes—are not consistently used in practice.
In this issue of Anesthesiology, Weigel et al.4 tackle the adoption of quantitative neuromuscular monitoring, an evidence-based practice shown to decrease the incidence of residual weakness after the use of nondepolarizing neuromuscular blockers.5 As part of a robust professional practice change initiative, the authors used best practices in quality improvement to specify a change goal: documentation of a train-of-four ratio greater than or equal to 0.90 for all patients. They documented compliance and other key outcomes over time, noting when key project milestones were reached to facilitate the attribution of change to specific actions by the quality improvement team. The authors used implementation strategies that seem well matched to the local contextual factors governing the use of quantitative neuromuscular blocker monitors. Specifically, they placed monitors in all operating rooms, selecting monitors based on feedback from clinicians and projected disposable costs; developed educational videos to instruct clinicians; instituted automated alerts using a customizable clinical decision support system; included neuromuscular blocker monitoring in their Ongoing Professional Practice Evaluation metrics; linked quantitative neuromuscular blocker monitoring to credentialing; and sent department and individual email messages about performance.4 These strategies are consistent with those described in the Expert Recommendations for Implementing Change,6 a compilation of implementation strategies that can be combined to facilitate individual and organizational behavior change. As a result of their efforts, the authors achieved greater than 90% compliance with train-of-four documentation that persisted for more than 6 months. These changes were associated with decreases in postanesthesia care unit and hospital length of stay and a decrease in pulmonary complications.4
The use of nondepolarizing neuromuscular blocking drugs to facilitate intubation and surgery is second nature to anyone practicing anesthesia. We must use these drugs with care, however, because residual neuromuscular blockade after surgery is clearly associated with adverse outcomes for patients, including hypoxemia, a need for reintubation, and pneumonia.7 Residual neuromuscular blockade is more likely when the depth of neuromuscular blockade is not monitored at all or when it is monitored qualitatively (e.g., inspection and estimation of the train-of-four ratio) rather than quantitatively (as with acceleromyography or electromyography).5,7 For this reason, in 2018, a panel of experts called for abandonment of qualitative and clinical tests of muscle strength in favor of quantitative monitoring.8
Unfortunately, as many of us know, the existence of evidence and creation of guidelines or recommendations are rarely sufficient to change practice. Weigel et al.4 noted inertia with respect to the adoption and use of quantitative neuromuscular blocker monitoring in their institution and undertook a quality improvement initiative to change practice. In so doing, they designed a robust project that combines tenets of quality improvement and implementation science. This combination is not just interesting to the practicing anesthesia provider; it also offers insights into the deliberate selection and measurement of change strategies in anesthesia care.
Quality improvement and implementation science are similar in that they are focused on behavior change in organizations. They are different in their focus on the creation of local versus transferable knowledge (quality improvement is local); the explicit use of theories, models, and frameworks to guide study design, measurement, and reporting (implementation science relies heavily on theories, models, and frameworks); and the use of qualitative and mixed methods to understand context (implementation science commonly uses these approaches). Quality improvement is often funded locally, while implementation science has enjoyed increasing attention from research funding agencies that see the field as a way to realize the return on investment in basic and clinical research innovations. Despite the fields’ differences, quality improvement– and implementation science–informed approaches to change management are not mutually exclusive; techniques from each can be combined to powerful effect, as demonstrated by Weigel et al.4
The astute reader will note that Anesthesiology does not often publish quality improvement reports. What do we have to learn from single-site experiences? From the implementation scientist’s perspective, the answer is obvious. “Trusted Evidence: Discovery to Practice®” appears on the journal’s masthead. The evidence—here, the use of quantitative neuromuscular blocker monitoring—must reach practice to improve the care and outcomes of our patients. The report by Weigel et al.4 provides insight into how that translation into practice might happen. It also demonstrates that change occurs over time and that sustained changes (i.e., those lasting months to years) may require different strategies over time to reach a goal performance target. Of course, this practice-based evidence does not replace the more conventional, hypothesis-driven controlled trials that provide evidence of efficacy and effectiveness. Rather, these two types of evidence are complementary and reflect the complexity of modern anesthesia practice, which aims to continually improve patient care and outcomes.
The author is not supported by, nor maintains any financial interest in, any commercial activity that may be associated with the topic of this article.