ALTHOUGH one may be tempted to neglect rare complications because they are infrequent and therefore difficult to study, they demand our attention for several reasons. They are often severe. They are often regarded by our patients and their families as unacceptable. They occur in young and otherwise healthy patients. These observations hold for most severe complications of regional anesthesia, and particularly for spinal hematoma or meningitis associated with central neuraxial blocks. In this issue of Anesthesiology, Moen et al.1 report the results of a large investigation of severe neurologic complications after central neuraxial block. This study represents an enormous piece of work, and the authors should be commended for their efforts to gather data as completely as possible. The study also allows us to consider several features related to patient safety and to discuss emerging thinking in the study of rare events. General lessons can be drawn in three areas: how to collect data for rare events, how to analyze these data, and how to select and implement strategies to improve patient safety.
The reporting systems needed to study a rare event must cover a large number of institutions and must usually be implemented at a nationwide level (or even at a multinational level, as has already been done in studies of aviation safety§). A system may rely on mandatory reporting; this was the case for regional blocks performed in Sweden during a 10-yr period, thanks to the unique national registry to which all severe complications should be reported. However, even such a mandatory system does not guarantee that all existing cases will be reported, which may leave too few cases to identify the causal factors needed to develop an effective safety strategy. This explains why Moen et al. judiciously searched for other sources of information. Voluntary reporting systems offer a useful alternative (or addition) despite a significant risk of underreporting.2 Because results are often debated locally, such systems offer even greater potential to improve the behavior of those involved in the cases, to conduct in-depth causal analysis, and to identify precursor events.3,4 When precursors are identified, the usual strategy consists of expanding the scope of the reporting system to include near misses (fig. 1).5 When safety has been improved to such a high level that events occur very infrequently or have not yet occurred at all (as in the nuclear power industry), reporting systems are no longer efficient. Modeling risk then becomes necessary; it is a preemptive strategy based on predicting what could happen.6 Unfortunately, risk control in anesthesiology has not yet reached such a high reliability level.
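Combining a mandatory registry with a second, independent source (e.g., voluntary reports or claims data) also permits a rough statistical check on completeness. With two overlapping sources, a capture–recapture (Lincoln–Petersen) calculation estimates how many cases both sources missed. The sketch below is illustrative only; the numbers are invented and are not from Moen et al.:

```python
def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Chapman-corrected Lincoln-Petersen estimate of the true case total.

    n1      -- cases found by source 1 (e.g., a mandatory registry)
    n2      -- cases found by source 2 (e.g., voluntary reports)
    overlap -- cases identified by both sources
    """
    # Chapman's correction (+1 terms) reduces small-sample bias and
    # avoids division by zero when the overlap happens to be empty.
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1


# Hypothetical figures: registry finds 20 cases, a second source finds 15,
# and 12 cases appear in both. The estimate exceeds the 23 distinct cases
# actually observed, suggesting some cases escaped both systems.
estimate = lincoln_petersen(20, 15, 12)
print(f"estimated true number of cases: {estimate:.1f}")
```

The estimator assumes the two sources capture cases independently, which reporting systems rarely satisfy exactly, so the result should be read as a lower-bound sanity check on ascertainment rather than a precise count.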
Regarding causal analysis, the authors rightly point out that, during the study period, Swedish anesthesiologists perceived thromboprophylaxis as a limited risk factor for spinal hematoma and that the first Swedish guidelines describing the management of regional anesthesia in patients receiving low-molecular-weight heparin were published after the end of the study period. However, the methodology does not allow us to establish any causal relation between factors and outcome. Behind the outcome is the process of care, and we must move from the question "What happened?" to "Why did it happen?" Moen et al. report 11 cases of hematoma that occurred in patients with coagulopathy or with thromboprophylaxis administered in close relation to central neuraxial block. In some of these cases, failures would certainly have been identified had the method been designed to address the process of care. These failures (e.g., low-molecular-weight heparin overdose in elderly patients) are also contributing factors. However, shifting from "What happened?" to "Why did this happen?" also requires a change of investigation tools. According to James Reason, patent failures are those committed by clinicians working in direct contact with patients, whereas latent failures represent the consequences of structural, technical, or organizational characteristics, often related to management decisions.7 Patent failures include human errors and have already been studied in anesthesia.8 Root cause analysis as used by the Joint Commission of the United States∥ and the systems analysis used by Vincent et al. in London# are typical examples of innovative methods to study system errors. However, several biases can occur when using these methods, especially because outcomes (or events) are being investigated. An outcome bias can interfere with analysis because those reporting the event are obviously aware of the clinical outcome.
A propensity toward harsher judgment is often associated with a poor outcome.9 Hindsight bias is the exaggerated extent to which individuals claim they would have predicted the event beforehand.10 Although this bias is difficult to reduce, one way to do so is to systematically ask people to consider all other possible explanations for what happened and to state all the reasons why those other causes might have been correct.11 For example, in the study by Moen et al., only meningitis of bacterial origin was considered, although aseptic meningitis might have occurred as well.12,13 However, irrespective of the risk of bias, there is added value in sharing not only the result (i.e., the incidence) but also the content of the case analysis with a large number of practitioners. Case reports published in scientific journals are probably the best way to achieve this goal. Although they are often considered minor scientific contributions, case reports have sometimes had a greater impact on clinical practice than most randomized trials. The description by Albright14 of a small series of cardiac deaths after bupivacaine administration, the description by Schneider et al.15 of transient neurologic symptoms after spinal lidocaine administration, and the recent cases of cardiac arrest after administration of large doses of ropivacaine16,17 are three examples of how case reports can strongly influence the thinking of a whole medical specialty. Case reports can provide a view of the healthcare system, and journals should facilitate publication of clinical incidents that describe the chain of events and the contributory factors, because these reports have high educational value.
Estimating the incidence of events is also relevant because strategies to control risk differ widely at different levels of incidence. When the system is unsafe (incidence close to 10⁻²), the most effective safety strategies aim at increasing the constraints placed on stakeholders while providing rapid technological enhancements: increased training, more rules, more protocols, and even a strict sanction policy toward rule breakers. Technological progress is the other high-priority goal because it is commonly accepted that it eventually contributes more to improving safety than repressive measures do. At this stage, feedback on accidents and serious incidents is sufficient to foster progress. Accidents and incidents adequately represent future risks: at 10⁻³, yesterday's accident will be tomorrow's accident if no measure is taken.18 Between 10⁻⁴ and 10⁻⁵, safety is much better and events are less frequent, with the consequence that the above-mentioned strategies will be much more difficult to implement and much less effective. At this stage, any single accident or incident that occurs is unlikely to recur. Safety strategies logically focus on the need for an enforced culture of safety, stabilizing the effectiveness of the system across all contexts and all those involved.
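At these low incidence levels, a point estimate alone is fragile: observing zero events in a large series does not mean the risk is zero. The classic "rule of three" (upper 95% confidence bound ≈ 3/n when 0 events are seen in n procedures) and its exact binomial counterpart make this explicit. A minimal sketch, with an invented denominator for illustration:

```python
def upper_95ci_zero_events(n: int) -> float:
    """Exact upper 95% confidence bound on incidence when 0 events
    are observed in n independent procedures: solve (1 - p)^n = 0.05."""
    return 1.0 - 0.05 ** (1.0 / n)


def rule_of_three(n: int) -> float:
    """Classic approximation of the same bound: 3 / n."""
    return 3.0 / n


# Hypothetical example: no hematoma observed in 50,000 blocks of one type.
# The data are still compatible with an incidence of roughly 6 per 100,000.
n = 50_000
print(f"exact upper bound: {upper_95ci_zero_events(n):.2e}")
print(f"rule of three:     {rule_of_three(n):.2e}")
```

The two bounds agree closely for large n (3 ≈ −ln 0.05), which is why the rule of three is a convenient bedside check when reading denominators in studies of rare complications.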
To summarize, the lessons gained from the study of rare complications could be threefold:
The rarer an event, the greater the need for an in-depth, professional analysis of the few existing cases to determine relevant precursors.
Organizing a sentinel event system and detecting relevant precursors in near misses are probably the core of the most comprehensive strategy for continuous improvement.
Reducing the risk associated with an event changes the strategies to cope with the residual risk.