“Remember that all models are wrong; the practical question is how wrong do they have to be not to be useful.” —George E. P. Box
To the Editor:
In a recent article, Kheterpal et al.1 analyzed the impact of a real-time intraoperative decision support system. Borrowing tactics from the aviation industry, the authors hypothesized that “decision support systems, which integrate across disparate data sources, devices, and contexts, to highlight and recommend specific interventions” might lead to better postoperative outcomes. The authors showed that these systems improved process measures, but improvements in clinical outcomes were lacking. These results are not surprising.
In the field of data science, researchers understand that the “curse of dimensionality” lurks behind every hypothesis.2 The introduction of additional dimensions dilutes the “relative contrast” between data points, and the points cluster together. One is no longer looking at points on an x,y plane, and one can no longer differentiate the “distance,” or significance, of each point.3 As a result, one may observe statistical significance when analyzing the data in their entirety when, in reality, the effect is confined to a subset of data points. Further, aggregating large amounts of data may inadvertently create a collection of irrelevant, correlated, or redundant variables that interferes with subsequent analyses.3 For example, heart rate and blood pressure are commonly inversely related, and the degree of correlation varies with the clinical scenario; this variation forces the researcher to account for such differences during analysis. Finally, when patterns are uncovered with insufficient data, a model may achieve statistical significance while the effect size is too small to justify acting on it.4
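The dilution of “relative contrast” can be illustrated numerically. The following sketch (our illustration, not drawn from the cited studies; the function name and parameters are hypothetical) samples uniformly random points and measures how far apart a query point’s nearest and farthest neighbors are, relative to the nearest distance. As the number of dimensions grows, this contrast shrinks and distances become less informative:

```python
# Illustration of distance concentration under the "curse of dimensionality":
# as dimensions are added, the gap between the nearest and farthest neighbor
# of a query point shrinks relative to the nearest distance.
import math
import random

def relative_contrast(n_points, dim, seed=0):
    """Return (d_max - d_min) / d_min for random points around a random query.

    Larger values mean distances still discriminate between points;
    values near zero mean all points look roughly equidistant.
    """
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = [
        math.dist([rng.random() for _ in range(dim)], query)
        for _ in range(n_points)
    ]
    d_min, d_max = min(dists), max(dists)
    return (d_max - d_min) / d_min

if __name__ == "__main__":
    for dim in (2, 10, 100, 1000):
        print(f"dim={dim:>4}  relative contrast={relative_contrast(500, dim):.3f}")
```

In low dimensions the contrast is large; by a thousand dimensions the nearest and farthest points differ by only a small fraction of their distance, which is why naive significance found across many aggregated variables can be misleading.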
When pharmaceutical companies identify potential new drugs, they circumvent the problem of dimensionality by first researching and identifying specific targets (the variables), then comparing “the target” against several thousand compounds, and finally taking time to understand why each compound was effective before moving forward to further testing.5 By contrast, this study was based on previous observational studies that found associations between intraoperative physiologic management and postoperative outcomes in four dimensions out of numerous variables.1 Unlike the pharmaceutical approach, not taking the time to fully understand why each “target” was effective can lead to unexpected results. For instance, it would be interesting to understand how one or two extra minutes of hypotension can increase the risk of myocardial or renal injury, especially compared with the intensive care unit setting, where the response to hypotension is often more delayed.
Ultimately, we are not saying that AlertWatch is ineffective at what it does or in how it helps anesthesiologists. This use of aviation technology has some applicability in our practice, but any intervention made to improve patient outcomes needs to reflect the patient’s physiologic complexity. The problem with adding dimensions is that the amount of data needed to make accurate generalizations grows exponentially. In other words, as we add more dimensions to the analysis of a system, we increase the chances that a pattern will be found, but we may find it more difficult to demonstrate effectiveness: the crux of the “curse of dimensionality.”3 The key is to understand the difference between models and reality, and to harness and continually refine the opportunities afforded by large databases.
The authors declare no competing interests.