To the Editor:
With regard to the recent Anesthesiology article by Biais et al.,1 we acknowledge the overall quality of the report and the relevance of the underlying research topic. However, we have several methodologic concerns that we would like to address.
First, the Standards for Reporting Diagnostic Accuracy Studies have been developed as a list of items2,3 that contribute to the completeness, transparency, and quality of reporting of diagnostic accuracy studies. We found that key items were lacking in the study by Biais et al.1 The study should have reported the degree of blinding, that is, whether clinical information and index test results were available to the assessors of the reference standard. A flow diagram is also required to evaluate the risk of selection bias. The reproducibility of the index test and of the reference standard should also have been reported. Moreover, the studies cited in the article in support of the rationale for the reference standard and its cutoff do not support the use of a 10% increase in stroke volume after volume expansion, measured by proAQT (Pulsion Medical Systems, Germany), to define fluid responders.
Second, the threshold used to differentiate responders from nonresponders should be chosen above, and close to, the least significant change (LSC) of the stroke volume measurement for the device under consideration. The LSC is defined as the minimum change that can be recognized as a true change rather than random measurement variation. Although the LSC has been reported previously with transpulmonary thermodilution,4 no data have been reported for the proAQT system. Therefore, the LSC for the proAQT system should have been calculated and reported by the authors. Because no threshold of stroke volume variation after volume expansion that differentiates responders from nonresponders can be supported by a solid clinical or physiologic rationale, another strategy would have been to provide data for several thresholds.5 To address this last point, we extracted the data from the scatterplot shown in figure 2 of the article by Biais et al.1 using the software ImageJ (https://imagej.nih.gov/; open source, National Institutes of Health, Bethesda, Maryland). This allowed us to recover the raw variations of stroke volume after the lung recruitment maneuver and after volume expansion and to perform subsequent analyses. We explored 16 thresholds between 5% and 20% using the R software and the pROC package (https://www.r-project.org/; R-3.3.0; accessed May 3, 2016), computing 95% CIs with the bootstrap technique (1,000 repetitions). In our view, the area under the receiver operating characteristic curve was overestimated at the chosen threshold of 10% (fig. 1). As the threshold increases beyond the LSC of the measurement system, the area under the receiver operating characteristic curve should remain constant or increase with the threshold, which was not the case in the study by Biais et al.1 For all of these reasons, we strongly suspect that some recruitment bias may have occurred.
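For reference, the LSC is commonly derived from the coefficient of variation (CV) of repeated measurements. A standard formulation, stated here for illustration under the usual assumption of n averaged measurements (this formula is not given in the letter itself), is:

\[
\mathrm{LSC} = 1.96 \times \sqrt{2} \times \frac{\mathrm{CV}}{\sqrt{n}}
\]

where CV is the coefficient of variation of the stroke volume measurement and n is the number of consecutive measurements averaged to obtain one value.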
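The multiple-threshold analysis described above can be sketched as follows. The letter's analysis used R with the pROC package; this is an illustrative reimplementation in plain Python, with hypothetical inputs rather than the values extracted from figure 2, in which a responder is defined by a stroke volume increase after volume expansion exceeding a candidate threshold, and the index test is the magnitude of the stroke volume change during the recruitment maneuver.

```python
import random

def auc(pos, neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen responder's index-test
    value exceeds a randomly chosen nonresponder's (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_by_threshold(index_test, sv_change_ve, thresholds, n_boot=1000, seed=1):
    """For each candidate threshold t (%), label patients whose stroke
    volume rose by more than t after volume expansion as responders,
    then compute the AUC of the index test with a stratified bootstrap
    95% CI (responders and nonresponders resampled separately, as in
    pROC's default behavior)."""
    rng = random.Random(seed)
    results = {}
    for t in thresholds:
        pos = [x for x, y in zip(index_test, sv_change_ve) if y > t]
        neg = [x for x, y in zip(index_test, sv_change_ve) if y <= t]
        if not pos or not neg:
            results[t] = None  # threshold leaves one group empty
            continue
        point = auc(pos, neg)
        boots = sorted(
            auc([rng.choice(pos) for _ in pos], [rng.choice(neg) for _ in neg])
            for _ in range(n_boot)
        )
        results[t] = (point, boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])
    return results

# Hypothetical data (NOT the study's extracted values): one pair per patient.
index_test = [25, 18, 15, 12, 9, 6, 4, 2]      # |change| during recruitment maneuver, %
sv_change_ve = [22, 18, 15, 12, 8, 6, 4, 2]    # change after volume expansion, %
results = auc_by_threshold(index_test, sv_change_ve, thresholds=range(5, 21))
```

Plotting the point estimate and CI of `results` against the threshold, as in figure 1 of the letter, shows whether the AUC remains stable or increases once the threshold exceeds the LSC.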
Finally, several previous studies have evaluated the diagnostic accuracy of a transient positive end-expiratory pressure elevation, used as a recruitment maneuver, for diagnosing preload responsiveness. These diagnostic approaches were similar to the one proposed by the authors and should have been discussed.6–8 Diagnostic studies are at high risk of bias,9 and the methodologic considerations above highlight a risk of bias in the study by Biais et al.1
The authors declare no competing interests.