To the Editor:
We read with interest the article published by McIsaac et al.1 entitled “Identifying Obstructive Sleep Apnea in Administrative Data: A Study of Diagnostic Accuracy.”
The authors utilized data collected by a Canadian academic health sciences network within a universal health insurance plan to study whether diagnosis codes can reliably identify patients with obstructive sleep apnea (OSA) in administrative databases. The presence of any registered diagnostic code, procedure, or therapeutic intervention consistent with sleep apnea within 2 yr before surgery served as the code-based definition of OSA under evaluation.
The authors should be commended for their thoughtful undertaking and their contribution toward improving methodology in the field of population-based sleep apnea research.
Moreover, the presented findings are convincing insofar as they show that various diagnosis and billing codes are not reliable for identifying patients with OSA. However, the authors’ interpretation, as it extends to the value of database studies that have used these codes to define OSA cohorts, may not be valid.
First and foremost, the analysis tests the authors’ hypothesis using data from a specific Canadian setting, which may differ substantially from the US hospital data used in the majority of OSA observational studies published to date.2,3 Indeed, beyond important differences such as a single-payer versus a multipayer system, billing and coding practices have also been shown to be influenced by hospital type, most importantly for-profit versus not-for-profit hospitals.4 The variation in International Classification of Diseases, Ninth Revision (ICD-9) validity between datasets is also demonstrated in the results presented by the authors, which show differing sensitivities and specificities for ICD-9 code 780.5 (“unspecified sleep apnea”) in the Ontario Health Insurance Plan versus the Discharge Abstract Database. Thus, the results presented in the study by McIsaac et al. may not be applicable to other databases, and each data source would require separate validation to determine its ability to reliably identify patients with OSA.
Although it is likely that, because of the deficiencies of current coding systems in identifying patients with OSA, only a fraction of those affected are detected, the biggest effect of this deficiency would be on estimates of the true prevalence of the condition. However, outcome analyses utilizing a cohort of OSA patients (whether or not that cohort represents all such patients) should be less affected by this problem; results establishing OSA as a perioperative risk factor for adverse outcomes therefore remain valid.
The authors’ statement that “researchers and knowledge consumers should approach such studies cautiously” is put into perspective by their finding that patients labeled as “true positives” for OSA appeared to have the highest disease burden, placing them at the highest risk for adverse perioperative outcomes. However, patients labeled as having OSA in observational studies (using ICD-9 codes) will be a mixture of true and false OSA diagnoses. We would therefore expect this misclassification to bias the results of an observational study toward the null, as the authors rightfully point out. Thus, it is entirely possible that any association found in observational studies underestimates the true effect. This would mean not only that the findings reported by McIsaac et al. do not necessarily invalidate previous observational studies with respect to OSA and perioperative outcomes, but also that the reported effects may be even larger than suggested.
Finally, the authors extracted their reference standard from a cohort of patients who actually underwent polysomnography, based on unspecified criteria, and who met diagnostic criteria based on the apnea–hypopnea index. Although this approach makes sense, because polysomnography is vital to OSA ascertainment, the authors fail to mention and discuss the limitation of undiagnosed OSA, a more crucial and overarching issue, as it has been demonstrated that a significant proportion of surgical patients with OSA is missed by surgeons and anesthesiologists.5 Beyond the study by McIsaac et al., this limitation also affects all other (observational) studies in which OSA is diagnosed on the basis of a previous decision to perform polysomnography. This limitation, too, is expected to bias the results of previous studies toward the null and further highlights the need for reliable data, e.g., in the form of a registry.
In conclusion, although the study by McIsaac et al. points toward considerable limitations associated with the use of diagnosis codes to identify patients with OSA in a Canadian universal health insurance database, these findings neither negate results from previous database studies identifying OSA as a risk factor for adverse outcomes nor can they be extrapolated to other datasets without further validation. We therefore suggest that the more important implications of the study by McIsaac et al. are its call for more validation studies and for the generation of more reliable data, such as a national registry.
The authors declare no competing interests.