Improper endotracheal tube (ETT) positioning is frequently observed and potentially hazardous in the intensive care unit. The authors developed a deep learning–based automatic detection algorithm that detects the ETT tip and carina on portable supine chest radiographs to measure the ETT–carina distance. This study investigated the hypothesis that the algorithm might be more accurate than frontline critical care clinicians in ETT tip detection, carina detection, and ETT–carina distance measurement.
A deep learning–based automatic detection algorithm was developed using 1,842 portable supine chest radiographs of 1,842 adult intubated patients, where two board-certified intensivists worked together to annotate the distal ETT end and tracheal bifurcation. The performance of the deep learning–based algorithm was assessed in 4-fold cross-validation (1,842 radiographs), external validation (216 radiographs), and an observer performance test (462 radiographs) involving 11 critical care clinicians. The performance metrics included the errors from the ground truth in ETT tip detection, carina detection, and ETT–carina distance measurement.
During 4-fold cross-validation and external validation, the median errors (interquartile range) of the algorithm in ETT–carina distance measurement were 3.9 (1.8 to 7.1) mm and 4.2 (1.7 to 7.8) mm, respectively. During the observer performance test, the median errors (interquartile range) of the algorithm were 2.6 (1.6 to 4.8) mm, 3.6 (2.1 to 5.9) mm, and 4.0 (1.7 to 7.2) mm in ETT tip detection, carina detection, and ETT–carina distance measurement, significantly superior to those of 6, 10, and 7 clinicians (all P < 0.05), respectively. Regarding the proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error, the algorithm outperformed 7, 3, and 0 clinicians in ETT tip detection; 9, 6, and 4 clinicians in carina detection; and 5, 5, and 3 clinicians in ETT–carina distance measurement (all P < 0.005), respectively. No clinician was significantly more accurate than the algorithm in any comparison.
A deep learning–based algorithm can match or even outperform frontline critical care clinicians in ETT tip detection, carina detection, and ETT–carina distance measurement.
Deep learning image classification techniques are changing the interpretation process in a range of radiology settings
It is unclear whether automated detection of a misplaced endotracheal tube can perform similarly to critical care clinicians
A deep learning–based algorithm developed using portable chest radiographs from 1,842 adult intubated patients can identify the endotracheal tube tip, carina, and endotracheal tube tip–to–carina distance with median errors of 2.6 mm, 3.6 mm, and 4.0 mm, respectively
The algorithm performed as well as, if not better than, 11 critical care clinicians in identifying these portable chest radiograph landmarks
Among adverse events associated with endotracheal intubation, improper endotracheal tube (ETT) positioning is frequently observed and potentially hazardous if not promptly recognized and managed.1–3 An ETT placed at a high position may lead to air leaks or injury to the vocal cords and possibly increase the risk of accidental, unplanned extubation. Conversely, a mainstem bronchus intubation can result in hyperinflation of the intubated lung with subsequent pneumothorax and atelectasis of the nonventilated lung. Physical examination alone is unreliable for assessing the depth of ETT insertion.1–4 In the neutral neck position, the optimal position of the ETT tip within the trachea is 3 to 7 cm above the carina, which is the reference point of the proper ETT position on portable chest radiographs.5 It is recommended to evaluate the ETT position using a portable chest radiograph immediately after endotracheal intubation.1,2,4,6–8 Given that radiologists are not always immediately available to read portable radiographs in the intensive care unit (ICU), timely interpretation of postintubation chest radiographs by critical care clinicians may improve the process of early decision-making.
The portable supine chest radiograph allows valuable information to be obtained without the risk of patient transport in the ICU.9,10 Compared with standard standing chest radiographs, however, its quality is inconsistent and image noise is higher because portable supine chest radiographs are obtained without an antiscatter grid.10,11 Medical devices required for critical care (e.g., nasogastric tubes, pacemaker wires, and electrocardiogram cables) and anatomical structures (e.g., the sternum, heart, and spine) may interfere with reading a portable chest radiograph to identify the precise ETT tip and carina locations. Nearly 40% of ICU patients are mechanically ventilated.12 Thus, an algorithm designed to detect the ETT tip and carina on portable chest radiographs may help identify a suboptimal ETT position, reduce associated complications, and improve the ICU workflow.
With recent advances in image processing, artificial intelligence and deep learning have been gradually introduced into respiratory medicine and critical care.13,14 Although some of these studies applied artificial intelligence and deep learning to recognize different pathologies (e.g., malignancy) on standard chest radiographs and computed tomography,13,15–18 only two reports in the literature have demonstrated the approaches and algorithm performance in identifying ETT malposition on portable supine chest radiographs.19,20 These deep learning solutions were trained with image classification (categorization) on the basis of the entire image. However, an approach using image classification without labeling the objects (i.e., the ETT and carina) on chest radiographs cannot localize the ETT and carina and is unlikely to accurately estimate the distance between them (i.e., the ETT–carina distance), potentially limiting its application and reliability in clinical settings.
In the study presented here, we developed a deep learning–based automatic detection algorithm that detects the ETT tip and carina on portable supine chest radiographs to measure the ETT–carina distance using pixel-level segmentation labels. This study investigated the hypothesis that the algorithm might be more accurate than frontline critical care clinicians in ETT tip detection, carina detection, and ETT–carina distance measurement.
Materials and Methods
Training Datasets
The entire study protocol was approved by the Institutional Review Board of National Cheng Kung University Hospital (A-ER-108-305; Tainan, Taiwan). The study was conducted in the National Cheng Kung University Hospital, a 1,300-bed medical center that offers first-line and tertiary referral services for 1.8 million people in southern Taiwan. In this study, 1,870 de-identified portable supine chest radiographs of 1,870 intubated adult patients receiving surgical ICU care between 2015 and 2018 were randomly retrieved from the imaging database in the Department of Radiology. The requirement for patient consent was waived by the Institutional Review Board. The images had been de-identified before we received the files, and thus patient demographics were not available. The files were exported in the Digital Imaging and Communications in Medicine format. The length and width of these images ranged from 2,517 to 3,032 pixels, including 1,279 images sized at 2,517 × 3,032 pixels, 538 images sized at 3,032 × 2,517 pixels, and 53 images sized variously between 2,517 × 2,517 and 3,032 × 3,032 pixels. The image files were split into 4 folds. The sample size and validation strategy were based on an internal pilot study, in which approximately 400 images and 4-fold cross-validation were used to evaluate the performance of the algorithm; from those results, we chose the 4-fold cross-validation strategy and estimated that approximately 1,800 chest radiographs would be needed.
Using self-developed image annotation software, two board-certified intensivists (Drs. Huang and Lai; 14 and 9 yr of experience as intensivists, respectively) read the chest radiographs together, at the same time and in the same location, and manually labeled them to create the ground truth annotations. The distal ETT end was labeled by the quadrangle constituted by P1 to P4, and the tracheal bifurcation was labeled by the polygon formed by P5 to P13 (fig. 1A). If the distal ETT end or tracheal bifurcation could not be reliably identified at the discretion of the two intensivists, the chest radiograph was eliminated from further processing. The numbers of chest radiographs eliminated from the 4 folds were 6, 6, 9, and 7, respectively. Thus, the chest radiograph numbers became 462, 461, 458, and 461 in the 4 folds, making a total of 1,842 images from 1,842 patients included in the training and cross-validation datasets. Across the 4 folds, a cranially misplaced ETT (i.e., the ETT tip more than 7 cm above the carina) was found in 101, 109, 91, and 99 chest radiographs, respectively; a caudally misplaced ETT (i.e., the ETT tip less than 3 cm above the carina) was found in 38, 45, 31, and 40 chest radiographs, respectively.
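For illustration only, the annotations described above could be represented as a simple per-radiograph record. The following Python sketch uses hypothetical field names and assumes point coordinates in image pixel space; it is not the authors' actual annotation format.

```python
# Minimal sketch of one possible per-radiograph annotation record; field names
# are hypothetical and coordinates are assumed to be in image pixel space.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in pixels

@dataclass
class RadiographAnnotation:
    image_id: str
    ett_end_quadrangle: List[Point] = field(default_factory=list)    # P1-P4, distal ETT end
    tracheal_bifurcation: List[Point] = field(default_factory=list)  # P5-P13, tracheal bifurcation
    excluded: bool = False  # True when either landmark could not be reliably identified

    def is_usable(self) -> bool:
        """A radiograph enters training only if both landmarks were fully annotated."""
        return (not self.excluded
                and len(self.ett_end_quadrangle) == 4
                and len(self.tracheal_bifurcation) == 9)
```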
Architecture of the Deep Learning–based Automatic Detection Algorithm
The ETT tip, defined as the midpoint between P2 and P3, and the carina (P9) were selected as the feature points. In addition to the 13 points (P1 to P13) that labeled the distal ETT end and tracheal bifurcation, two ground-truth bounding boxes (used to define the location of the target objects; 48 × 48 pixels) with the ETT tip and carina at the center of each ground-truth bounding box were annotated.21–23 The detection algorithm aimed to find the masks of the distal ETT end and tracheal bifurcation and the detected bounding boxes of the ETT tip and carina on portable supine chest radiographs (fig. 1B).
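As a rough, non-authoritative sketch of how the feature points and ground-truth bounding boxes described above could be derived from the 13 annotated points, assuming zero-indexed arrays in pixel coordinates:

```python
import numpy as np

BOX_SIZE_PX = 48  # side length of each ground-truth bounding box, in pixels

def feature_points(ett_quad: np.ndarray, bifurcation: np.ndarray):
    """Derive the two feature points from the annotated landmarks.

    ett_quad: (4, 2) array for P1-P4; bifurcation: (9, 2) array for P5-P13.
    The ETT tip is the midpoint of P2 and P3; the carina is P9
    (the fifth point of the bifurcation polygon).
    """
    ett_tip = (ett_quad[1] + ett_quad[2]) / 2.0
    carina = bifurcation[4]
    return ett_tip, carina

def centered_box(point: np.ndarray, size: int = BOX_SIZE_PX):
    """Axis-aligned (x_min, y_min, x_max, y_max) box centered on a feature point."""
    half = size / 2.0
    x, y = point
    return (x - half, y - half, x + half, y + half)
```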
The mask region–based convolutional neural network (Mask R-CNN) has been known for its effectiveness in object recognition and instance segmentation.21 In this study, a mask region–based convolutional neural network was trained to detect the distal ETT end and tracheal bifurcation masks through pixel-level segmentation of the two items. The mask region–based convolutional neural network algorithm for feature extraction was composed of 50-layer ResNeXt networks24 as the backbone architecture with a recently proposed feature pyramid network.22 During the inference step, only the masks with the maximal score for each class were preserved. As a postprocessing procedure, a rule-based feature extraction method, developed based on preliminary evaluation of the algorithm performance, was applied to identify the feature points (i.e., the ETT tip and carina). To obtain the exact locations of the ETT tip and carina, the masks and detected bounding boxes localized by the mask region–based convolutional neural network were used to supplement each other. The ETT tip location was preferentially determined from the detected bounding box center. Alternatively, the lowest point of the distal ETT end mask was accepted as the ETT tip location when the detected bounding box could not be identified on the chest radiograph. Regarding the carina, the detected bounding box center was the preferred carina location. However, if the detected bounding box center was ≥100 pixels (13.9 mm) away from the feature point obtained from the tracheal bifurcation mask, the mask result was preferred as the carina location. The final detected locations of the distal ETT end and tracheal bifurcation were displayed as overlays on the images. A supplemental method section (Supplemental Digital Content 1, http://links.lww.com/ALN/C918) is available to explain the architecture more thoroughly. The architecture of the deep learning–based algorithm and the rules of the postprocessing procedure were kept consistent during the training of the four models. The ETT–carina distance was converted from pixels to millimeters using the pixel size of 0.139 mm obtained from the Digital Imaging and Communications in Medicine image data.
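The rule-based postprocessing can be sketched in a few lines of Python. This is a simplified illustration, assuming the ETT–carina distance is the straight-line distance between the two resolved feature points and that image y coordinates increase downward; it is not the authors' exact implementation.

```python
import numpy as np

PIXEL_SIZE_MM = 0.139           # pixel spacing taken from the DICOM data
CARINA_DISAGREEMENT_PX = 100    # 100 pixels corresponds to about 13.9 mm

def resolve_ett_tip(box_center, ett_mask_points):
    """Prefer the detected bounding box center; otherwise take the lowest mask point."""
    if box_center is not None:
        return np.asarray(box_center, dtype=float)
    pts = np.asarray(ett_mask_points, dtype=float)
    return pts[np.argmax(pts[:, 1])]  # largest y = lowest point on the image

def resolve_carina(box_center, mask_feature_point):
    """Prefer the box center unless it lies >=100 px from the mask-derived point."""
    mask_pt = np.asarray(mask_feature_point, dtype=float)
    if box_center is None:
        return mask_pt
    center = np.asarray(box_center, dtype=float)
    return mask_pt if np.linalg.norm(center - mask_pt) >= CARINA_DISAGREEMENT_PX else center

def ett_carina_distance_mm(ett_tip, carina):
    """Convert the pixel distance between the two feature points to millimeters."""
    return float(np.linalg.norm(np.asarray(ett_tip) - np.asarray(carina))) * PIXEL_SIZE_MM
```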
Validation Datasets
The validation steps included internal 4-fold cross-validation and external validation. Of the 4 folds, a single fold was retained as the validation dataset for testing the model, and the remaining 3 folds were used as the training datasets. For example, the first model was trained using the second, third, and fourth folds and tested using the first fold; each of the 4 folds served as the validation dataset exactly once during 4-fold cross-validation. The external dataset was collected from intubated patients transferred from 12 neighboring urban hospitals between 2018 and 2019, whose images had been uploaded into the imaging database on patient admission. Overall, 216 de-identified chest radiographs were retrieved as the external validation dataset from the imaging database in our Department of Radiology.
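The fold rotation can be summarized with a short sketch; `train_model` and `evaluate_model` are hypothetical placeholders standing in for the actual Mask R-CNN training and evaluation routines.

```python
def cross_validate(folds, train_model, evaluate_model):
    """4-fold cross-validation: each fold is held out once as the validation set."""
    results = []
    for i, validation_fold in enumerate(folds):
        training_folds = [f for j, f in enumerate(folds) if j != i]
        model = train_model(training_folds)          # e.g., model 1 trains on folds 2-4
        results.append(evaluate_model(model, validation_fold))
    return results
```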
Observer Performance Test
Eleven healthcare workers in the ICU, including two senior ICU nurse practitioners, two postgraduate year residents, five surgical residents, and two board-certified intensivists, participated in the observer performance test after their consent was obtained. In Taiwan, postgraduate year residents participate in a generalized training program, and surgical residency begins after 2 yr of postgraduate year residency training. Each clinician independently reviewed the first fold of the original dataset and labeled the ETT tip and carina on each chest radiograph. To ensure labeling quality, these clinicians were temporarily exempted from clinical work and received standardized hands-on training before using the annotation software. The performance of each clinician was compared with that of the first model.
Performance Metrics and Statistical Analysis
The performance metrics measured consisted of the accuracy of ETT tip detection, carina detection, and ETT–carina distance measurement. As shown in Supplemental Digital Content 2 fig. S1 (http://links.lww.com/ALN/C919), the accuracy of ETT tip and carina detection was evaluated using the detection error between the detected location and ground truth location. Likewise, the accuracy of ETT–carina distance measurement was assessed using the measurement error between the estimated distance and ground truth distance. With reference to a previous study,20 these performance metrics were further classified in terms of the errors from the ground truth within 5 mm, 10 mm, 15 mm, and beyond. In addition, whether the algorithm could detect a cranially misplaced ETT (i.e., the ETT tip more than 7 cm above the carina) and a caudally misplaced ETT (i.e., the ETT tip less than 3 cm above the carina) was evaluated. The overall performance of the algorithm in internal 4-fold cross-validation and external validation was calculated by pooling the results of the four individual models. During the observer performance test, the 462 chest radiographs in the first fold were used to compare the performance (i.e., the distribution of detection and measurement errors and proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error from the ground truth) of the algorithm and clinicians.
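A minimal sketch of how these metrics could be computed follows. The exact detection error formula is defined in the supplemental figure rather than here, so the use of Euclidean distance below is an assumption for illustration; the 3 cm and 7 cm malposition thresholds follow the definitions above.

```python
import numpy as np

PIXEL_SIZE_MM = 0.139

def detection_error_mm(detected_xy, truth_xy):
    """Error (mm) between a detected landmark and its ground-truth location
    (Euclidean distance assumed for illustration)."""
    diff = np.asarray(detected_xy, dtype=float) - np.asarray(truth_xy, dtype=float)
    return float(np.linalg.norm(diff)) * PIXEL_SIZE_MM

def measurement_error_mm(estimated_mm, truth_mm):
    """Absolute difference (mm) between estimated and ground-truth ETT-carina distances."""
    return abs(estimated_mm - truth_mm)

def within_error_proportions(errors_mm, thresholds=(5.0, 10.0, 15.0)):
    """Proportion of chest radiographs whose error falls within each threshold."""
    errors = np.asarray(errors_mm, dtype=float)
    return {t: float(np.mean(errors <= t)) for t in thresholds}

def malposition_flags(distance_mm, lower_mm=30.0, upper_mm=70.0):
    """Flag cranial (>7 cm) and caudal (<3 cm) ETT malposition from a distance in mm."""
    return {"cranial": distance_mm > upper_mm, "caudal": distance_mm < lower_mm}
```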
A data analysis and statistical plan was written after the data were accessed. Statistical analyses were performed using SPSS Statistics for Windows, Version 17.0 (SPSS Inc., USA). A P value < 0.05 was considered statistically significant. Categorical variables were expressed as percentage (number), whereas continuous variables were expressed as median (interquartile range). For categorical variables, independent samples (i.e., comparisons in internal validation and internal validation results versus external validation results) were compared using the chi-square test, and dependent samples (i.e., comparisons in external validation and observer performance tests) were compared using the McNemar test, with the P values of multiple comparisons adjusted by the Bonferroni correction. For comparisons of continuous variables between two independent groups (i.e., internal validation results versus external validation results), the Mann–Whitney U test was used. For comparisons of continuous variables among multiple groups, independent samples (i.e., comparisons in internal validation) were compared using the Kruskal–Wallis test, and dependent samples (i.e., comparisons in external validation and observer performance tests) were compared using the Friedman test, followed by post hoc analysis using Dunn's test.
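The analyses above were performed in SPSS; the sketch below shows roughly analogous tests in SciPy for two of the comparisons described, purely as an illustration rather than a reproduction of the authors' analysis.

```python
from scipy import stats

def compare_internal_vs_external(errors_internal, errors_external):
    """Mann-Whitney U test for continuous errors from two independent validation sets."""
    return stats.mannwhitneyu(errors_internal, errors_external, alternative="two-sided")

def compare_algorithm_and_readers(*paired_error_columns, n_comparisons=1):
    """Friedman test across paired readings of the same radiographs; returns the
    statistic, the P value, and a Bonferroni-adjusted significance threshold."""
    statistic, p_value = stats.friedmanchisquare(*paired_error_columns)
    return statistic, p_value, 0.05 / n_comparisons
```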
Results
The overall performance of the deep learning–based automatic detection algorithm is summarized in table 1. During internal 4-fold cross-validation, the median error (interquartile range) and overall proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error of the deep learning–based algorithm were 2.8 (1.6 to 4.9) mm and 75.1%, 92.5%, and 96.4% in ETT tip detection, 3.6 (2.1 to 5.5) mm and 68.8%, 91.5%, and 95.6% in carina detection, and 3.9 (1.8 to 7.1) mm and 60.4%, 84.2%, and 92.8% in ETT–carina distance measurement, respectively. Among the four individual models, the performance (i.e., the median error [interquartile range] and proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error from the ground truth) in ETT tip detection, carina detection, and ETT–carina distance measurement during internal 4-fold cross-validation was not significantly different (Supplemental Digital Content 3 table S1, http://links.lww.com/ALN/C920). During external validation, the median error (interquartile range) and overall proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error of the deep learning–based algorithm were 3.0 (1.7 to 5.3) mm and 72.6%, 90.4%, and 95.3% in ETT tip detection, 3.5 (2.0 to 5.9) mm and 67.8%, 89.2%, and 95.9% in carina detection, and 4.2 (1.7 to 7.8) mm and 57.6%, 83.2%, and 92.6% in ETT–carina distance measurement, respectively. Compared with the performance in internal cross-validation, the overall proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error and median error (interquartile range) from the ground truth in the three performance metrics were not significantly different, except for a slight increase in the median error (interquartile range) in ETT tip detection (2.8 [1.6 to 4.9] mm in internal cross-validation versus 3.0 [1.7 to 5.3] mm in external validation, P = 0.046). Thus, similar results were obtained during validation using the external dataset from neighboring hospitals. Among the four individual models, the accuracy of the three performance metrics was not significantly different during external validation (Supplemental Digital Content 4 table S2, http://links.lww.com/ALN/C921). For each model, the performance in ETT tip detection, carina detection, and ETT–carina distance measurement obtained during external validation (Supplemental Digital Content 4 table S2, http://links.lww.com/ALN/C921) was not significantly different from those obtained during internal cross-validation (Supplemental Digital Content 3 table S1, http://links.lww.com/ALN/C920).
Whether the deep learning–based algorithm can detect a cranially or caudally misplaced ETT was also evaluated. For chest radiographs with a cranially misplaced ETT (table 2), the sensitivity and specificity of the algorithm were 77.3% and 95.2% during internal 4-fold cross-validation and 72.1% and 95.3% during external validation, respectively. For chest radiographs with a caudally misplaced ETT (table 3), the sensitivity and specificity of the algorithm were 70.8% and 96.4% during internal 4-fold cross-validation and 69.3% and 96.6% during external validation, respectively.
During the observer performance test, the median error (interquartile range) of the algorithm in ETT tip detection was 2.6 (1.6 to 4.8) mm (fig. 2), significantly superior to that of six clinicians. The sensitivities of the algorithm within 5 mm, 10 mm, and 15 mm error from the ground truth were 77.1%, 92.9%, and 96.5%, respectively (table 4). Compared with the 11 clinicians, the algorithm had significantly higher sensitivities than 7, 3, and 0 clinicians within 5 mm, 10 mm, and 15 mm error, respectively. No clinician was more accurate than the algorithm in ETT tip detection.
For carina detection, the median error (interquartile range) of the algorithm (3.6 [2.1 to 5.9] mm) was significantly superior to that of 10 clinicians (fig. 2). The sensitivities of the algorithm within 5 mm, 10 mm, and 15 mm error were 67.5%, 90.0%, and 95.0%, respectively (table 5). Compared with the 11 clinicians, the algorithm was significantly more sensitive than 9, 6, and 4 clinicians within 5 mm, 10 mm, and 15 mm error, respectively. No clinician was significantly more accurate than the algorithm in carina detection.
The results of ETT–carina distance measurement of the algorithm and clinicians are shown in Supplemental Digital Content 5 fig. S2 (http://links.lww.com/ALN/C922). For ETT–carina distance measurement, the median error (interquartile range) of the algorithm (4.0 [1.7 to 7.2] mm) was significantly superior to that of 7 clinicians (fig. 2). For the algorithm, the proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error from the ground truth were 59.3%, 84.4%, and 91.1%, respectively (table 6). In the comparisons with the 11 clinicians, the proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error of the algorithm were significantly higher than those of 5, 5, and 3 clinicians, respectively. No clinician was significantly more accurate than the algorithm in ETT–carina distance measurement.
Discussion
In the current study, we aimed to develop an algorithm to localize the ETT tip and carina on chest radiographs and estimate the ETT–carina distance. The performance of the algorithm was compared with that of clinicians in ETT tip detection, carina detection, and ETT–carina distance measurement in an observer performance test. Of note, the algorithm did perform better than some clinicians, and no clinician was more accurate than the algorithm in any comparison (regardless of the distribution of errors or the proportions of chest radiographs within 5 mm, 10 mm, or 15 mm error). Thus, although the clinical effects remain to be determined, the deep learning–based algorithm might play a role in complementing and augmenting the ability of critical care clinicians by offloading their routine duties and enabling them to focus on cognitively demanding tasks.
Several study groups and companies have announced that they are working on relevant projects. However, only Lakhani et al.19,20 documented the details of their approaches and results in two studies. Their deep learning–based algorithms were trained using image classification, i.e., category labeling for the entire image rather than annotation of specific objects. In their former study,19 the authors found that the deep convolutional neural networks achieved a relatively poor area under the curve of 0.81 in differentiating the low or normal position of the ETT. In the latter study,20 the Inception V3 deep neural network was used to classify the ETT–carina distance. A total of 22,960 chest radiographs were classified into 12 categories, including bronchial insertion, distance from the carina at 1.0-cm intervals up to 10 cm (0.0 to 0.9 cm, 1.0 to 1.9 cm, …, 9.0 to 9.9 cm), and 10 cm or greater. The mean differences between the algorithm and radiologists in ETT–carina distance were 0.69 ± 0.70 cm on the internal test dataset and 0.63 ± 0.55 cm on the external test dataset, with both intraclass correlation coefficients greater than 0.8. On the internal test images, the algorithm was 66.5% sensitive and 99.2% specific in detecting an ETT–carina distance greater than 7 cm and 95.0% sensitive and 91.8% specific in detecting an ETT–carina distance less than 3 cm, respectively. Although the work of Lakhani et al. is more sensitive in detecting a caudally misplaced ETT, our algorithm performs slightly better in detecting a cranially misplaced ETT. However, as acknowledged by the authors, an approach using such "weak" labeling needed substantially more training data. More important, their algorithms were trained through image classification (i.e., low or normal position of the ETT in the former study and 12 numerical categories of ETT–carina distance in the latter study). No accurate object annotation for the ETT and carina was made on the training dataset images, so the deep learning solutions could classify only the low or normal position of the ETT or the ETT–carina distance category. Therefore, if clinicians have any suspicion about the ETT–carina distance reported by the algorithm, no localization information for the ETT and carina can be provided.
In the current study, we aimed to improve model explainability (i.e., transparency) by using a deep learning–based object detection algorithm instead of image classification. The deep learning–based algorithm learned how to localize the ETT tip and carina on chest radiographs to estimate the ETT–carina distance. Although pixel-level segmentation labeling performed by two board-certified intensivists together was a time-consuming and labor-intensive task, the image annotations, using 4 and 9 points, respectively, provided abundant information to recognize the distal ETT end and tracheobronchial tree on chest radiographs, substantially reducing the number of images required in the training datasets. In addition, complementary application of the mask and bounding box results may enhance ETT tip and carina detection and consequently contribute to the accuracy of ETT–carina distance measurement. The deep learning–based algorithm, trained using the bounding boxes denoting the ETT tip and carina locations and pixel-level segmentation of the distal ETT end and tracheal bifurcation, exhibited robustness in ETT–carina distance measurement during internal cross-validation and external validation. In addition, the overlays, which localize the distal ETT end and tracheal bifurcation on images, can help users perceive the ETT tip in relation to the carina (fig. 3), especially when a disagreement exists between the interpretation of clinicians and the detection of the algorithm.
For the deep learning–based automatic detection algorithm, ETT tip and carina detection was accurate to within 10 mm error from the ground truth in approximately 90% of images and within 15 mm error in approximately 95% of images. In addition, ETT–carina distance measurement was accurate to within 10 mm error in approximately 85% of images and within 15 mm error in approximately 90% of images. More important, the performance of the deep learning–based algorithm was consistent between internal 4-fold cross-validation and external validation. We compared the performance of the deep learning–based algorithm with that of a diverse group of 11 critical care clinicians. In terms of the median error (interquartile range) from the ground truth, the algorithm performed better than 6, 10, and 7 clinicians in ETT tip detection, carina detection, and ETT–carina distance measurement, respectively. Regarding the proportions of chest radiographs within 5 mm, 10 mm, and 15 mm error, the algorithm was superior to 7, 3, and 0 clinicians in ETT tip detection; 9, 6, and 4 clinicians in carina detection; and 5, 5, and 3 clinicians in ETT–carina distance measurement, respectively. The algorithm outperformed clinicians in many comparisons, particularly when a lower error (i.e., 5 mm) from the ground truth was allowed. No clinician was significantly more accurate than the algorithm in terms of the sensitivities within 5 mm, 10 mm, and 15 mm error or the median error (interquartile range) from the ground truth. These findings suggest that the deep learning–based automatic detection algorithm can match or even outperform frontline critical care clinicians in measuring the ETT–carina distance. Whether clinical use of the algorithm might reduce complications associated with ETT malposition and improve the ICU workflow warrants further investigation.
The current study has some limitations. First, in the observer performance test, only the performance of the first model was compared with that of the clinicians. Although the performance of the four individual models was not significantly different during internal 4-fold cross-validation and external validation, this is not equivalent to, nor a substitute for, comparing the other three individual models with the clinicians. Second, the possibility of overfitting cannot be excluded, considering that a rule-based feature extraction method was used as the postprocessing procedure in identifying the ETT tip and carina. Finally, the performance of our algorithm cannot be compared comprehensively with previous works. The algorithms presented in previous studies were trained using image classification,19,20 and thus the area under the curve and intraclass correlation coefficients were used as evaluation metrics. However, in a detection task such as ours, using the area under the curve or intraclass correlation coefficients to evaluate the algorithm performance tends to discretize continuous variables, leading to a loss of information. Also, evaluation using different testing datasets could result in biased comparisons. A standard for performance evaluation in relevant studies remains lacking. Thus, conducting an observer performance test using the same dataset may be a more feasible and direct approach to determine whether the algorithm works before further validation.
In summary, we have developed a deep learning–based automatic detection algorithm that detects the ETT tip and carina on portable supine chest radiographs to measure the ETT–carina distance. Our study demonstrates that the deep learning–based algorithm is comparable or even superior to frontline critical care clinicians in detecting the ETT tip and carina and measuring the ETT–carina distance.
Acknowledgments
The authors appreciate Cheng-Shih Lai, B.S., at the Department of Radiology, National Cheng Kung University Hospital, Taiwan, for his excellent technical support and Kai-Wen Li, M.S., at the Department of Nursing, National Cheng Kung University Hospital, Taiwan, for her laborious contribution to this work.
Research Support
Support for article research was provided by the Ministry of Science and Technology, Executive Yuan, Taiwan (MOST 109-2634-F-006-023) and by National Cheng Kung University Hospital, Tainan, Taiwan (NCKUH-10901003).
Competing Interests
Dr. Lai received support for article research from the Ministry of Science and Technology, Executive Yuan, Taiwan (MOST 109-2634-F-006-023) and from National Cheng Kung University Hospital, Tainan, Taiwan (NCKUH-10901003). The other authors declare no competing interests.
Supplemental Digital Content
Supplemental Digital Content 1: Supplemental method, http://links.lww.com/ALN/C918
Supplemental Digital Content 2: Fig. S1, http://links.lww.com/ALN/C919
Supplemental Digital Content 3: Table S1, http://links.lww.com/ALN/C920
Supplemental Digital Content 4: Table S2, http://links.lww.com/ALN/C921
Supplemental Digital Content 5: Fig. S2, http://links.lww.com/ALN/C922