In Reply:—

Dr. Obst argues that the death rate noted in our study [1] is far higher than 1:250,000, or the 0.0004% rate quoted for “anesthesia mortality,” [2] and therefore suggests a flaw in our analysis. Dr. Obst compares the overall 3.5% 30-day mortality rate after general and orthopedic surgery reported in our study (and in other studies [3]) with the quoted anesthesia mortality rate to suggest that our results are inconsistent by an astounding 9,000-fold ratio. However, our model associates the lack of direction by an anesthesiologist with an excess of 2.5 deaths per thousand admissions (0.25%), not 35 deaths per thousand (3.5%). Using the 3.5% figure would imply that all deaths after surgery are caused by anesthesia practice. Although we believe anesthesia is critical to patient outcome, we have never contended that it was solely responsible for all deaths after surgery. Using 2.5 excess deaths per 1,000 (0.25%) and the Eichhorn [4] anesthesia mortality estimate of 1:151,400 would suggest a 378-fold increase, as we discussed in our paper and illustrate below.

Nevertheless, the question remains: is our study consistent with anesthesia mortality as reported in other studies? We believe it is, for five reasons.

First, comparing anesthesia mortality to the excess deaths associated with lack of direction reported in our study is a comparison of different quantities. The anesthesia mortality rate is a figure used to track and compare immediate and clear-cut anesthesia-related deaths only. The measure was intended to be highly specific for anesthesia events but makes no claims regarding sensitivity. In contrast, 30-day mortality is intended to reflect the full impact of differences in anesthesia practice. This distinction is obvious to any clinician providing anesthesia care to an elderly patient: a risk of only 1 in 150,000 is certainly negligible, yet no clinician would dismiss the risks of anesthesia in the elderly as negligible. Clearly, counting only clear-cut anesthesia-related deaths underestimates the full risks of anesthesia. If better anesthetic care can reduce deaths, it is valuable, even if, when using claims data, it is not always possible to say, patient by patient, that this one action caused that one death. Smoking causes large numbers of deaths from cardiac disease, but, among smokers, we cannot tell which particular deaths were caused by smoking and which were caused by something else. Nonetheless, we advise people to quit smoking, confident that quitting will reduce their risk of death from cardiac disease. In the same way, our study reports a significant association between lack of direction by the anesthesiologist and death, suggesting that anesthesiologist direction reduces the risk of death.

Second, anesthesia mortality as used by Dr. Obst is not a risk-adjusted statistic, unlike the results of our report. Mortality rates may be orders of magnitude lower for young patients with an easier case mix than for older patients with a more difficult case mix.

Third, anesthesia mortality is an inferior outcome statistic because it is susceptible to bias related to the ability of caregivers to temporarily prevent “deaths on the table” that are followed by subsequent death in the hospital or even after discharge from the hospital. It is precisely to correct for the classic “discharged quicker and sicker” bias [5] that the 30-days-from-admission figure has been used by health services researchers.
Counting only “deaths on the table” or clear-cut immediate anesthesia mishaps would drastically undercount anesthesia-related deaths, reducing sensitivity, and would be highly susceptible to bias across caregivers’ abilities to temporarily prolong life for hours or days, an undesirable feature of any outcome measure. Thirty-day mortality, for the reasons noted, is the gold standard outcome measure used in almost all studies of provider quality. Had we not used 30-day mortality, we would have been criticized for using an insensitive measure susceptible to bias. After adequate adjustment for relevant patient covariates and hospital characteristics, differences noted in 30-day mortality between directed and undirected cases suggest differences in the quality of care provided by anesthesia providers. Fourth, anesthesia mortality reflects practice that generally involves direction by an anesthesiologist, so that low estimates are in part a result of medical direction. Fifth, as we discussed in our report, many newer studies suggest that anesthesia practice influences patient outcomes far beyond the immediate perioperative period, again suggesting that anesthesia mortality rates as cited by Dr. Obst underestimate anesthesia mortality.
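To make the arithmetic explicit, using only the figures already cited above:

$$
\frac{3.5\%}{0.0004\%} = 0.035 \times 250{,}000 = 8{,}750 \approx 9{,}000\text{-fold}, \qquad \frac{0.25\%}{1/151{,}400} = 0.0025 \times 151{,}400 \approx 378\text{-fold}.
$$

The 9,000-fold figure arises only if every 30-day death is attributed to anesthesia; the appropriate comparison uses the 2.5 excess deaths per 1,000 associated with lack of direction.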

Dr. Obst asserts that our definition of direction was flawed, without providing any evidence of bias. We defined direction to determine whether the presence of an anesthesiologist benefited the patient. There can be little doubt that, using our definition of direction, undirected cases had vastly greater odds of having an undirected anesthetist involved in patient care than did cases defined as directed. This was a very large study, based on claims records of 194,430 directed and 23,010 undirected cases. Although we acknowledged in our report that the potential for occasional billing misclassification is present in a study of this size, and we agree that a chart review study is the next logical step in this research, we also provided evidence that our estimates were not biased. For example, results were unchanged when restricted only to those cases with bills. Furthermore, to the extent that there was misclassification, such an effect would tend to blur the distinction between provider groups, underestimating the difference in mortality between the directed and undirected groups.
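A simplified illustration shows why such misclassification attenuates rather than inflates an observed difference. Suppose the true mortality rates are $u$ for undirected and $d$ for directed care, and a fraction $\varepsilon$ of each observed group in fact belongs to the other (these symbols are ours, for illustration only). Then

$$
\hat{u} - \hat{d} = \big[(1-\varepsilon)u + \varepsilon d\big] - \big[(1-\varepsilon)d + \varepsilon u\big] = (1-2\varepsilon)(u-d),
$$

so, for example, 10% contamination in each group ($\varepsilon = 0.10$) shrinks the observed difference by 20%, biasing the estimate toward zero, not away from it.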

Could undirected resident cases have accounted for our findings? The evidence we reported points to the contrary. First, as was reported, the vast majority of resident cases were counted in the directed group, with at most only 5.6% of undirected cases possibly involving a resident, under the strong assumption that all “no-bill” cases at programs with residents were resident cases. Clearly, the actual percentage of resident cases in the undirected group was far smaller than 5.6%. Second, as we stated in our report, when we estimated our results using only the cases with anesthesia bills (which had no resident cases in the undirected group but did have resident cases in the directed group), our results were unchanged. If there were any bias in this study resulting from resident performance, it would be that the difference between directed and undirected cases was underestimated.

Could the difference in results be caused by institutional differences or differences in postoperative care, rather than by differences in directed or undirected anesthetic care? As we reported in numerous analyses, when we adjusted for each hospital individually in the modeling, we found our results to be unchanged. If differences in postoperative care were the cause of our observed differences in outcome, one must hypothesize that, within the same hospital, undirected patients somehow were sent to different recovery rooms, intensive care units, and surgical floors than were their directed counterparts. Because these were all Medicare patients, this hypothesis seems implausible. The stability of our results after adjustment for individual hospitals supports the conclusion that the differences observed in our study were not caused by differences in postoperative care across institutions.
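As a sketch of the kind of within-hospital adjustment described (the notation here is ours, for illustration, and is not the paper’s exact specification), one can include a separate intercept for each hospital in the logistic model:

$$
\operatorname{logit}\,\Pr(\text{death}_i) = \alpha_{h(i)} + \beta\,\text{undirected}_i + \gamma^{\top} x_i,
$$

where $\alpha_{h(i)}$ is the intercept for the hospital treating patient $i$ and $x_i$ collects the patient risk covariates. Because $\beta$ is then identified by comparisons of directed and undirected cases within the same hospital, its stability across specifications argues against institutional differences as the explanation.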

Was inadequate risk adjustment the cause of the observed differences? Much of our paper addressed that question. Extensive risk adjustments, using Medicare data, were performed first. We then appended a well-recognized and validated physiologic-based admission severity score [6], available by law for Pennsylvania hospital admissions, and found our results to be unchanged. Furthermore, we saw that failure to rescue, a measure less influenced by errors in severity assessment, [7,8] revealed equally concerning results. There was no evidence that inadequate risk adjustment was responsible for these results.
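For readers unfamiliar with the measure, failure to rescue is conventionally defined along the lines of

$$
\text{failure to rescue} = \frac{\text{deaths among patients who developed a complication}}{\text{patients who developed a complication}},
$$

so that it conditions on a complication having occurred. The rationale, as discussed in references 7 and 8, is that the occurrence of complications reflects patient severity more strongly, whereas death after a complication reflects the quality of rescue, making the measure less sensitive to errors in severity assessment than raw mortality.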

Were our results the result of multicollinearity within the logistic regression models? Dr. Obst is confused about multicollinearity. Multicollinearity among observed variables may explain why an important coefficient failed to reach statistical significance. It does not explain away a statistically significant coefficient, such as the one in our study.
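The standard variance-inflation result makes this point precise. In a linear regression (the logistic case is analogous), the sampling variance of the coefficient on a predictor $x_j$ grows with that predictor’s collinearity with the other covariates:

$$
\operatorname{Var}(\hat{\beta}_j) \propto \frac{1}{1 - R_j^2},
$$

where $R_j^2$ is the $R^2$ from regressing $x_j$ on the remaining covariates. Collinearity thus widens confidence intervals and makes statistical significance harder, not easier, to attain; it cannot manufacture a significant coefficient.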

Dr. Kleinman suggests that the results of our study “may not be related in any way to the practice of CRNAs per se” and that the results may instead be due to medical direction by nonanesthesiologists. Our study clearly suggests that practice situations that include a directing anesthesiologist have lower mortality than situations that lack direction by an anesthesiologist. Undirected cases lacked evidence of direction and only rarely were labeled undirected because a nonanesthesiologist, such as a pathologist or an internist, directed their care. Dr. Kleinman apparently believes that when anesthesiologists are not present, ill-informed surgeons force anesthetists into making bad decisions. We cannot determine this from our study. What we do know is that the presence of an anesthesiologist was associated with lower mortality.

Dr. Orkin suggests that 1 excess death in 400, the difference between undirected and directed care found in our study, is important when compared with other medical interventions. We agree, but we believe Dr. Orkin’s analysis understates the importance of our findings by failing to account for the large numbers of potential cases affected by each intervention. Anesthesia practice influences the care of millions of patients annually. Hence, the odds ratio of 1.08 is very important when the potential number of exposed patients is so large.
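As a rough consistency check (taking the overall 3.5% 30-day mortality cited above as an approximate baseline, an assumption made here for illustration only), an odds ratio of 1.08 implies

$$
\text{odds}_1 = 1.08 \times \frac{0.035}{0.965} \approx 0.0392, \qquad p_1 = \frac{0.0392}{1.0392} \approx 0.0377,
$$

an absolute excess of approximately $0.0377 - 0.035 = 0.0027$, or about 2.7 deaths per 1,000, of the same order as the 2.5 per 1,000 (1 in 400) reported. At, hypothetically, one million undirected cases per year, an excess of 2.5 per 1,000 would correspond to 2,500 deaths annually.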

Our analysis raises concerns about anesthesia care that lacks direction by an anesthesiologist. Future research, through the use of more detailed chart review studies, should explore why this difference in outcomes exists. Clearly, the community would be well served if future staffing decisions were based on evidence and data rather than on opinion and speculation.

The Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania. silberj@wharton.upenn.edu

1. Silber JH, Kennedy SK, Even-Shoshan O, Chen W, Koziol LF, Showan AM, Longnecker DE: Anesthesiologist direction and patient outcomes. Anesthesiology 2000; 93: 152–63
2. McKenzie RA: Congressional testimony before the Antitrust Subcommittee, Senate Judiciary Committee, June 7, 2000
3. Hennen J, Krumholz HM, Radford MJ, Meehan TP: Mortality experience, 30-days and 365-days after admission for the 20 most frequent DRG groups among Medicare inpatients aged 65 or older in Connecticut hospitals, fiscal years 1991, 1992, and 1993. Conn Med 1995; 59: 137–42
4. Eichhorn JH: Prevention of intraoperative anesthesia accidents and related severe injury through safety monitoring. Anesthesiology 1989; 70: 572–7
5. Kosecoff J, Kahn KL, Rogers WH, Reinisch EJ, Sherwood MJ, Rubenstein LV, Draper D, Brook RH: Prospective payment system and impairment at discharge: The “quicker-and-sicker” story revisited. JAMA 1990; 264: 1980–3
6. Steen PM, Brewster AC, Bradbury RC, Estabrook E, Young JA: Predicted probabilities of hospital death as a measure of admission severity of illness. Inquiry 1993; 30: 128–41
7. Silber JH, Williams SV, Krakauer H, Schwartz JS: Hospital and patient characteristics associated with death after surgery: A study of adverse occurrence and failure-to-rescue. Med Care 1992; 30: 615–29
8. Silber JH, Rosenbaum PR, Ross RN: Comparing the contributions of groups of predictors: Which outcomes vary with hospital rather than patient characteristics? J Am Stat Assoc 1995; 90: 7–18