To the Editor:—
Despite a seemingly robust mention of study limitations, Peterson et al.1 apparently do not fully appreciate the limitations of their data, use inappropriate statistical methods, and provide misleading results.
Let’s start with the results, then work backward toward a fuller understanding of the limitations. For brevity, I focus on the largest odds ratio. The abstract states, “The odds of death/brain death were increased by the development of an airway emergency (odds ratio, 14.98; . . . P < 0.001).” Because the data set is limited to patients whose care already has resulted in a closed professional liability insurance claim, a more accurate statement would qualify the reported odds ratio as relating to patients whose care has resulted in a closed claim. Because anesthesia practitioners cannot know prospectively which of their patients’ cases will result in a closed claim, the odds ratio is bizarre, if not nonsense. (That an airway emergency in the context of a difficult airway is associated with death/brain death makes clinical sense but does not validate generating a nonsensical odds ratio.)
Accounting for this bizarre situation is the use of an inappropriate statistical method: logistic regression is a predictive modeling technique that has meaning only when applied to a population at risk—in this case, patients who are about to have anesthesia care—before adverse events occur. However, the American Society of Anesthesiologists Closed Claims Project comprises only that subset of patients whose care has resulted in a claim. Using a predictive method here is akin to predicting winners of a horse race as horses cross the finish line. Using a predictive method to attempt an explanation of the diversity of injuries among closed claims (i.e., those with an array of injuries) is thus inappropriate.
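The distortion at issue can be illustrated with a small numerical sketch. All probabilities below are hypothetical values chosen for illustration only—they are not the authors’ data. The point is that when the probability an injury becomes a claim depends jointly on the event (an airway emergency) and the outcome (death), the odds ratio computed within closed claims no longer equals the odds ratio in the population at risk:

```python
# Hypothetical, assumed probabilities for illustration only.
p_emergency = 0.02                      # P(airway emergency)
p_death = {True: 0.01, False: 0.001}    # P(death | emergency status)

# Assumed claim-filing probabilities: deaths preceded by a dramatic
# airway emergency are the most likely cases to be litigated.
p_claim = {(True, True): 0.30,    # emergency, death
           (False, True): 0.10,   # no emergency, death
           (True, False): 0.005,  # emergency, survived
           (False, False): 0.005} # no emergency, survived

def odds_ratio(cells):
    """cells[(emergency, death)] -> probability mass of that 2x2 cell."""
    a, b = cells[(True, True)], cells[(True, False)]
    c, d = cells[(False, True)], cells[(False, False)]
    return (a * d) / (b * c)

# Expected 2x2 cell masses in the full population at risk ...
pop = {}
for emer in (True, False):
    p_e = p_emergency if emer else 1 - p_emergency
    for death in (True, False):
        p_d = p_death[emer] if death else 1 - p_death[emer]
        pop[(emer, death)] = p_e * p_d

# ... and among closed claims only (each cell thinned by its claim rate).
claims = {k: v * p_claim[k] for k, v in pop.items()}

print(f"population OR:    {odds_ratio(pop):.1f}")     # prints 10.1
print(f"closed-claims OR: {odds_ratio(claims):.1f}")  # prints 30.3
```

Under these assumed numbers the closed-claims sample inflates the true odds ratio threefold (the ratio 0.30/0.10 of the outcome-and-event–dependent claim rates). Only if case selection depended on outcome alone, identically across event types, would the claims-based odds ratio recover the population value—an assumption the biased-selection literature gives no reason to grant.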
The authors (and many readers) may call attention to the explicit mention in the Limitations section that the authors are well aware of “the lack of denominator data.” One hears that phrase often in relation to the authors’ data set, as if it were the sole or even principal study design issue. A very recent editorial notes that rapid progress toward comprehensive clinical databases means that “[w]e may ultimately be able to have the denominator for the events that had been brought to our attention through the closed-claim studies . . . [and such data] will allow us to close the loop on how we care for patients and their outcomes.”2 Surely, if such denominator data were available, the authors would have used them—and their results would still be misleading.
We must be ready to recognize the severe limitations of the authors’ type of registry data. However well structured and comprehensive, the data set created by the American Society of Anesthesiologists Closed Claims Project is no better than a case series, a study design whose reliability and accuracy in reflecting the universe of patients who experience injury allegedly related to anesthesia are likely to be as poor as those of other case series. Two landmark medical negligence studies3–7 conducted in three states, 10 yr apart, offer a remarkably consistent and relevant perspective on malpractice data: (1) patient injury resulting from negligence occurs in approximately 1% of hospitalizations; (2) patients file malpractice claims in only a small proportion of hospitalizations (0.12–0.16% in these studies); and (3) only a small fraction of the patients who do file have actually experienced an injury resulting from negligence. As a result, the U.S. medical liability system has been characterized as a very biased lottery. The well-known contingency-based payment system encourages plaintiffs’ attorneys to accept cases that represent potentially more lucrative awards and that are judged to be more likely to be won. In addition, there are other, more subtle, biases in case selection. Particularly ingenious is the way that these lawyers are now responding to the caps on noneconomic losses (i.e., pain and suffering); they switch to an alternate theory of economic damages related to the patient’s lost earned-income potential, biasing the lawyer to accept the cases of more highly compensated patients.8 Thus, cases in the authors’ data set reflect what is termed biased selection. The presence of such bias means that the injury-related claims that do progress through our legal system cannot be regarded as a random or even representative sample of the universe of such injuries.
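The arithmetic implied by those three findings is worth making explicit. In the sketch below, the per-hospitalization rates are the cited figures; the share of filed claims that reflect a true negligent injury is an assumed placeholder, since the letter notes only that it is “a small fraction”:

```python
# Illustrative arithmetic only. The injury and claim rates are the
# cited figures; valid_claim_share is an assumed placeholder.
hospitalizations = 100_000
negligent_injury_rate = 0.01   # ~1% of hospitalizations (cited)
claim_rate = 0.0014            # midpoint of the cited 0.12-0.16%
valid_claim_share = 0.2        # assumed "small fraction" of filed claims

injuries = hospitalizations * negligent_injury_rate   # 1,000 injuries
claims = hospitalizations * claim_rate                # 140 claims
valid_claims = claims * valid_claim_share             # 28 valid claims

print(f"share of negligent injuries captured by claims: "
      f"{valid_claims / injuries:.1%}")
# prints "share of negligent injuries captured by claims: 2.8%"
```

Even under this generous placeholder, the claims file captures only a few percent of the injury universe—and, per the biases discussed above, a nonrandom few percent at that.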
Thus, relationships that may be identified in the study sample cannot support accurate inferences about phenomena in the universe of injuries.
Why are these issues surfacing now in an illustrious, 20-yr-old project with a substantial publication trail? The issues are not new, although the grounds on which one can discuss the deficiencies have changed over time. Closed-claims studies have morphed from biased assessments of appropriateness of care—as a blinded reviewer of an early manuscript, I suggested that substantial bias would be involved in assessments when outcome severity was known to claims reviewers, which was subsequently documented in a simulation9—through increasing accretion of largely inappropriate statistical analyses. Most reports arising from this data set have relied heavily on hypothesis testing (i.e., statistical tests yielding a P value) that is suspect, particularly that involving comparisons of events across time periods and types of injuries, because of the biased selection. Logistic regression analysis seems to have been used inappropriately by the closed-claims investigators as early as 1999,10 when such analysis became as easy as a few computer clicks. Statisticians responsible for logistic-regression algorithms note: “As is well-recognized in the statistical community, the inherent danger of this easy-to-use software is that investigators are using a very powerful tool about which they may have only limited understanding.”11 Finally, what appears in a journal is heavily influenced by reviewers who may have limited technical expertise, suggesting that clinical journals should have statisticians and/or clinical epidemiologists on retainer.
As a result of these issues, statistical tests should be used very sparingly, if at all, with the authors’ data. Instead, the American Society of Anesthesiologists Closed Claims Project data should be exploited for its true value: offering rich, often unique, albeit qualitative descriptions of various complications.
The Pennsylvania State University College of Medicine, Hershey, Pennsylvania. email@example.com