We appreciate the interest in our publication1 and the opportunity to respond to these two Letters to the Editor.
Dr. Pivalizza and colleagues raise questions about our methodology and inclusion criteria, which we would like to clarify. Their first question concerned the exclusion from the analysis of residents who did not take the in-training examination in their clinical base year. In fact, two different models were employed: one analyzed changes in in-training examination scores from the clinical base year to clinical anesthesia year 1, and the other from clinical anesthesia year 1 to year 2. The latter analysis (and our main conclusion) did not depend upon whether residents had taken the in-training examination during their clinical base year. Second, given the study question of in-training examination score increment, residents who did not take the in-training examination in both clinical anesthesia years 1 and 2 could not be analyzed. Concerns were raised that residents who failed the BASIC examination might have left training before taking the in-training examination in clinical anesthesia year 2, thereby biasing the composition of the cohort. We note that three failures of the BASIC examination are required for mandatory extension of training, and that in the 2013 cohort, only 0.2% of residents failed twice. We therefore think it unlikely that this factor significantly affected the analysis. Dr. Pivalizza and colleagues also question whether preparing for the BASIC examination may have distracted residents from preparing for the preceding in-training examination, lowering in-training examination performance at clinical anesthesia year 1 and biasing toward an apparent increase in performance from clinical anesthesia year 1 to year 2.
As shown in table 1 and figure 2 of our article,1 there is no evidence that the introduction of the staged examination system in the 2013 cohort was associated with lower in-training examination scores at clinical anesthesia year 1; indeed, the 2014 cohort had higher in-training examination scores at clinical anesthesia year 1. Finally, it is our perspective that what constitutes a "small" effect size is a matter of interpretation. After the introduction of the staged examination system, the in-training examination performance of clinical anesthesia year 2 residents was similar to that of clinical anesthesia year 3 residents under the traditional examination system; we leave it to readers to judge the significance of this finding.
Dr. Berman expresses concern about "exam fatigue" associated with the introduction of new examination components in the primary certification process, and its potential to contribute to psychologic distress in residents. We appreciate his raising this important issue, given that a variety of studies have shown that residents in training can exhibit high levels of stress and burnout.2,3 Each of the physician directors of the American Board of Anesthesiology is a practicing anesthesiologist, well aware of the demands of training and practice. Consideration of the impact of changes in the certification process on residency training is an essential factor in American Board of Anesthesiology decisions. Dr. Berman also questions the clinical significance of improved in-training examination performance. Our prior work has shown that in-training examination performance is a significant predictor of achieving timely board certification,4 and that board certification (or rather the lack thereof) predicts relevant outcomes such as disciplinary actions against the medical licenses of anesthesiologists.5 Nonetheless, we agree that the goal should always be improving patient care, not test scores per se. This study addressed whether knowledge acquisition was accelerated with the advent of the BASIC examination, not whether the ultimate clinical performance of residency graduates is improved (an important question that remains to be answered in future research). We very much agree that changes such as the staged examination system, including the introduction of the Objective Structured Clinical Examination (also mentioned by Dr. Berman), require continued evaluation. As evidenced by this and other publications, the American Board of Anesthesiology is committed to ongoing rigorous and transparent analyses of its systems and processes.
These analyses include evaluation of any unintended consequences for our trainees and, ultimately, for the abilities of anesthesiologists to provide excellent patient care. Such analyses will be essential to the consideration of any future modifications of our systems and processes intended to better fulfill the American Board of Anesthesiology's mission to advance the highest standards of the practice of anesthesiology. We thank the authors of the letters for their comments, and we welcome further feedback from the community of anesthesiologists whom we serve.
Drs. Sun and Zhou are staff members of the American Board of Anesthesiology; Drs. Keegan and Warner are American Board of Anesthesiology directors and receive a stipend for their participation in American Board of Anesthesiology activities; Dr. Lien is a former director of the American Board of Anesthesiology.