Background

The American Board of Anesthesiology recently introduced the BASIC Examination, a component of its new staged examinations for primary certification, typically offered to residents at the end of their first year of clinical anesthesiology training. This analysis tested the hypothesis that the introduction of the BASIC Examination was associated with an acceleration of knowledge acquisition during the residency training period, as measured by increments in annual In-Training Examination scores.

Methods

In-Training Examination performance was compared longitudinally among four resident cohorts (n = 6,488) before and after the introduction of the staged system using mixed-effects models that accounted for possible covariates.

Results

Compared with previous cohorts in the traditional examination system, the first resident cohort in the staged system had a greater improvement in In-Training Examination scores between the first and second years of clinical anesthesiology training (by an estimated 2.0 points in scaled score on a scale of 1 to 50 [95% CI, 1.7 to 2.3]). By their second year, they had achieved a score similar to that of third-year clinical anesthesiology residents in previous cohorts. The second cohort to enter the staged system had a greater improvement of the scores between the clinical base year and the first clinical anesthesiology year, compared with the previous cohorts.

Conclusions

These results support the hypothesis that the introduction of the BASIC Examination is associated with accelerated knowledge acquisition in residency training and provide evidence for the value of the new staged system in promoting desired educational outcomes of anesthesiology training.

What We Already Know about This Topic
  • The American Board of Anesthesiology recently changed the examination process for primary certification from a single written examination taken after completion of a four-year anesthesiology residency to a staged examination system, with a BASIC Examination at the end of the first clinical anesthesia year and an ADVANCED Examination after completion of four-year residency training

  • The goal was, in part, to accelerate the trajectory of resident knowledge acquisition

What This Article Tells Us That Is New
  • This investigation tested the hypothesis that introduction of the BASIC Examination was associated with accelerated knowledge acquisition during residency training, as measured by increments in annual resident scores on the In-Training Examination

  • Compared with previous cohorts in the traditional examination system, the first resident cohort in the staged examination system had a 2-point (on a scale of 1 to 50) greater improvement in In-Training Examination scores between the first and the second years of clinical anesthesiology training

THE American Board of Anesthesiology (ABA; Raleigh, North Carolina) recently transitioned to a staged examination system for its primary certification process. In the former (“traditional”) system, physicians were required to pass a written examination (Part 1) and an oral examination (Part 2) after completing a four-year continuum of education in anesthesiology. This continuum consists of a clinical base (CB) year and 36 months of approved training in clinical anesthesia (CA-1, CA-2, and CA-3 years). Some anesthesiology residency programs provide the full four-year training (“categorical” programs), and others provide only the CA-1 to CA-3–year training (“advanced” programs) with the CB year being completed elsewhere. In categorical programs, residents may also join as CA-1 residents after completing their CB year elsewhere.

In the new staged examination system, the written examination is administered in two components: a BASIC Examination, typically taken at the end of the CA-1 year, and an ADVANCED Examination, taken after completion of residency training. Residents must pass the BASIC Examination before completing residency training and must pass the ADVANCED Examination to be eligible for the oral examination (now denoted as the APPLIED Examination). The rationale for creating a staged examination system with three components (BASIC, ADVANCED, and APPLIED) was multifactorial: (1) providing an incentive for residents to develop positive study habits early in their training, (2) encouraging them to focus their early phases of study on content areas that provide the foundation for later training, and (3) providing a means by which program directors could identify residents who need extra attention early in their training. Individuals who began CB training in July 2012 (and thus began CA-1 training in July 2013) or later are required to complete the new staged examinations.

If the new system functions as intended, the trajectory of resident knowledge acquisition should be accelerated. Like many other specialty boards, the ABA offers an In-Training Examination (ITE) to evaluate a resident’s progress toward meeting the educational objectives of residency training. The ABA ITE is an electronic, single-best-answer multiple choice examination administered each February/March to residents at all levels of training.1  It shares the same content outline as the written examinations for primary certification. These examinations have a blueprint that specifies the number of questions from different content categories. The ABA ITE covers both BASIC and ADVANCED topics, including approximately equal proportions of items addressing each category of topics.

In any medical specialty, improvements in ITE scaled scores are typically observed as residents progress along the continuum of training, presumably reflecting knowledge acquisition during residency training. The predictive validity of an ITE for performance on subsequent certification examinations has been demonstrated in many specialties,2–8  including anesthesiology.9 

This study aimed to test the hypothesis that the introduction of the BASIC Examination was associated with an acceleration of knowledge acquisition during residency training, as measured by increments in annual ITE scores. This aim was accomplished by comparisons of longitudinal ITE performance in resident cohorts before and after the introduction of the staged examinations, adjusting for other factors of importance to such performance.

Materials and Methods

This study was deemed exempt from review by the Mayo Clinic Institutional Review Board (Rochester, Minnesota).

Study Population

This study included all residents who started their first year of clinical anesthesiology training (i.e., CA-1) in an Accreditation Council for Graduate Medical Education–accredited anesthesiology residency program between 2011 and 2014. A cohort was defined by the year when the residents started CA-1 training. The 2013 cohort (consisting of residents who started CA-1 training in July 2013 and CB training in July 2012) was the first that participated in the new staged examination system and was the first to take the BASIC Examination (in July 2014). The cohorts and the timing of the examinations are depicted in figure 1. Only residents who maintained a regular progression of training levels during 2012–2015 ITE administrations (fig. 1) were included in this analysis. The final study population included 6,488 residents from 141 training programs, representing the last two cohorts in the traditional examination system (2011 and 2012 cohorts) and the first two cohorts in the staged examination system (2013 and 2014 cohorts). The study was based on all available population data, and no a priori power analysis was conducted.

Fig. 1.

Flow diagram for four resident cohorts. CA = clinical anesthesia year; CB = clinical base year; ITE = In-Training Examination.


Primary Outcome

The ITE scores are reported on a scale of 1 to 50. Resident performance in each year of training was measured by the individual ITE scaled score from that year. Since 2012, the scale has been calibrated using the item response theory equivalent-groups equating method,10  with the 2011 CA-3 residents who graduated from U.S. medical schools (American medical graduates, AMGs) as the base reference group. The scores for the 2012–2015 administrations were equated based on the assumption that the overall ability (as reflected by the mean and SD of the scaled scores) remained the same among CA-3 residents who were AMGs in the years 2011–2015. In other words, the performance of these CA-3 residents in 2012–2015 was calibrated against the performance of the base reference group of 2011 CA-3 residents and then used to scale the scores for the other resident years (i.e., CB, CA-1, and CA-2). Given this equating assumption, it is not possible to compare ITE performance in the CA-3 year among cohorts.
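To make the equating idea concrete, the following is a simplified sketch only. The ABA used item response theory equivalent-groups equating, not the linear (mean-sigma) method shown below; the snippet merely illustrates the shared assumption that the AMG CA-3 reference group's ability distribution is stable across years. All numbers are invented.

```python
# Illustrative sketch only: the ABA used item response theory equating;
# this shows the simpler linear (mean-sigma) method, which rests on the
# same equivalent-groups assumption. All numbers below are invented.
from statistics import mean, stdev

def linear_equate(new_ref_scores, base_mean, base_sd):
    """Find the linear transform mapping a new administration's
    reference-group scores onto the base-year mean and SD, and apply it."""
    m, s = mean(new_ref_scores), stdev(new_ref_scores)
    a = base_sd / s                 # slope of the equating line
    b = base_mean - a * m           # intercept
    return [a * x + b for x in new_ref_scores], (a, b)

# Hypothetical base-year (2011) AMG CA-3 reference: mean 35.4, SD 4.0
new_ref = [30, 33, 35, 36, 38, 40]  # invented reference-group scores
equated, (a, b) = linear_equate(new_ref, base_mean=35.4, base_sd=4.0)
# The same transform (a, b) would then be applied to the CB, CA-1, and
# CA-2 scores from that administration, placing all years on one scale.
```

Note that, by construction, the reference group's mean is pinned to the base-year value every year, which is why CA-3 performance cannot be compared among cohorts.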

Study Design

Changes from CA-1 to CA-2.

Three cohorts were compared to determine how the ITE scores changed from CA-1 to CA-2 (fig. 1): the 2011 and 2012 cohorts (traditional examination system) and the 2013 cohort (staged examination system).

Changes from CB to CA-1.

Three cohorts were compared to determine how the ITE scores changed from CB to CA-1 (fig. 1): the 2012 cohort (traditional examination system) and the 2013 and 2014 cohorts (staged examination system). A limitation of this analysis is that more than half of the residents in each cohort did not have an ITE score in their CB year as they may not have had the opportunity to take the ITE in anesthesiology that year. For example, a physician who completed a transitional year in a separate internal medicine program before beginning anesthesiology training as a CA-1 resident may not have taken a CB-year ITE.

Statistical Analyses

Changes in the ITE score from CA-1 to CA-2 and from CB to CA-1 were analyzed separately using mixed-effects models, with data from those residents who had ITE scores available at each year of training. For each analysis, the mixed-effects model included a random intercept, a random slope, a fixed cohort effect, and a slope-by-cohort interaction term to test for slope differences among the cohorts. To facilitate comparisons, the performance of the 2012 cohort was used as the reference in both models.

Sex and medical school country (AMGs vs. international medical graduates [IMGs]) were considered a priori as covariates in the analyses based on previous studies demonstrating that they influence written examination performance in anesthesiologists9,11 : on average, men perform better than women on the written component of the primary certification examination and on the Maintenance of Certification in Anesthesiology Program examination, and AMGs perform better than IMGs on the written examination for primary certification. Specifically, two additional fixed effects were included to test for the effects of the covariates on the intercept, and two additional interaction terms (slope by sex and slope by medical school country) were included to test for the effects of the covariates on the slope. Covariate effects were assumed not to differ among the cohorts. To account for correlations among residents enrolled in the same training program, a program-level random effect was added in the models to produce robust standard errors.
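In symbols (our notation; the paper gives no explicit formula), the model just described can be sketched as:

```latex
% Illustrative notation only
y_{ijk} = \bigl(\beta_0 + b_{0i} + u_k
          + \boldsymbol{\beta}_C^{\top}\mathbf{C}_i + \beta_S S_i + \beta_G G_i\bigr)
        + \bigl(\beta_1 + b_{1i}
          + \boldsymbol{\gamma}_C^{\top}\mathbf{C}_i + \gamma_S S_i + \gamma_G G_i\bigr)\, t_{ij}
        + \varepsilon_{ijk}
```

where y_{ijk} is the scaled score of resident i in program k at time t_{ij} (0 for the first year of the pair, 1 for the second); \mathbf{C}_i are cohort indicators (2012 cohort as reference); S_i and G_i code sex and medical-school country; b_{0i} and b_{1i} are the resident-level random intercept and slope; u_k is the program-level random effect; and \boldsymbol{\gamma}_C, the slope-by-cohort interaction, carries the effect of primary interest.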

Given the large proportion of residents who did not take the ITE during their CB year, a two-way analysis of variance (ANOVA) was conducted to examine how CA-1–year ITE scores differed between those who did and did not take a CB-year ITE in each of the 2012–2014 cohorts. In this 2 × 3 ANOVA, CB-year ITE status (taken or not taken) and cohort were the independent variables (an interaction effect was also included), and the CA-1–year ITE score was the dependent variable. In post hoc comparisons, the Tukey honest significant difference test was used to compare those who took a CB-year ITE with those who did not in each of the three cohorts.
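As a toy illustration of this 2 × 3 design (invented numbers, and omitting the F tests and Tukey comparisons), the interaction can be read off the cell means as the within-cohort "takers minus non-takers" difference:

```python
# Toy illustration of the 2 x 3 design: CB-year ITE status (yes/no) by
# cohort (2012-2014), outcome = CA-1-year ITE score. All numbers are
# invented; the study used a full two-way ANOVA with Tukey HSD post hocs.
from statistics import mean

scores = {  # (took_cb_ite, cohort) -> CA-1 scores for that cell
    ("yes", 2012): [29, 30, 31], ("no", 2012): [29, 30, 31],
    ("yes", 2013): [31, 32, 33], ("no", 2013): [30, 31, 32],
    ("yes", 2014): [32, 33, 34], ("no", 2014): [30, 31, 32],
}
cell_means = {k: mean(v) for k, v in scores.items()}

def cb_ite_effect(cohort):
    """Within-cohort difference in mean CA-1 score: takers minus non-takers."""
    return cell_means[("yes", cohort)] - cell_means[("no", cohort)]

effects = {c: cb_ite_effect(c) for c in (2012, 2013, 2014)}
# An interaction means this difference varies by cohort, e.g. near zero
# in one cohort but positive in others.
```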

A P value less than 0.05 was considered to indicate statistical significance. All statistical analyses were performed in R version 3.3.1 (R Foundation for Statistical Computing, Austria).

Results

The four cohorts of residents analyzed had a similar proportion of women physicians (34 to 37%, table 1) and a similar proportion of IMGs (11 to 14%).

Table 1.

Demographic Characteristics and ITE Scaled Scores of the Study Population (N = 6,488)


Changes in ITE Scores from CA-1 to CA-2

In the mixed-effects model for changes in the ITE scaled score from the CA-1 to CA-2 years (table 2), the three cohorts being compared had a similar intercept (i.e., estimate of mean scaled score) in the CA-1 year but differed from each other in the slope (i.e., estimate of mean change in scaled score from the CA-1 to CA-2 years). There was a small but significant difference in slope between the 2011 and 2012 cohorts, with the improvement in scores from CA-1 to CA-2 being slightly less in 2011 than in 2012 (table 2, fig. 2). For the 2013 cohort (the first cohort in the staged examination system), the CA-1 to CA-2 slope was markedly higher (by an estimated 2.0 points [95% CI, 1.7 to 2.3]) compared with the 2012 cohort, indicating a significant year-to-year improvement in ITE performance after the introduction of the staged examination system (table 2, fig. 2). This represents a 44% difference in the slope estimate when comparing the 2012 (traditional) with the 2013 (staged) cohort and an improvement in the absolute scaled score estimate in the CA-2 year from 33.7 to 35.7. For the 2013 cohort, the CA-2 scaled score estimate (35.7) approximated the mean score of the CA-3 calibration group for that examination administration (35.4, fig. 2). Thus, the CA-2 residents in the first staged examination cohort (2013) performed at a level similar to that of the CA-3 residents in the last traditional examination cohort (2012) on the 2015 ITE.

Table 2.

Results from Mixed-effects Model for CA-1 to CA-2 Change

Fig. 2.

ITE scores by training level for each cohort. Also shown with the dashed line is the anticipated level for American medical graduates in the CA-3 year. Small discrepancies between this level and actual CA-3 scores for the 2011 and 2012 cohorts are caused by the influences of international medical graduates, whose data are included in the actual CA-3 scores. CA = clinical anesthesia year; CB = clinical base year; ITE = In-Training Examination.


Two covariates were included in the mixed-effects model because they are known from previous work to affect ITE scores.9,12  In the analysis of these covariates, women residents scored significantly lower than men residents in the CA-1 year, but both sexes improved similarly from the CA-1 to CA-2 years (table 2). Compared with AMGs, IMGs scored higher in the CA-1 year but showed less improvement in scores from the CA-1 to CA-2 years.

Changes in ITE Scores from CB to CA-1

For the 2012, 2013, and 2014 cohorts, the numbers of residents who took the ITE in their CB year were 581 (38%), 761 (48%), and 835 (48%), respectively. In the mixed-effects model that included only those residents who had both CB and CA-1 scores available (table 3), the 2013 cohort had a similar intercept (i.e., estimate of mean scaled score in the CB year) and a similar slope (i.e., estimate of mean change in scaled score from the CB to CA-1 years) to those of the 2012 cohort. In contrast, the 2014 cohort scored 1.2 points (95% CI, 0.7 to 1.6) lower in the CB year compared with the 2012 cohort, but their scores improved by 3.2 points (95% CI, 2.8 to 3.7) more from CB to CA-1 compared with the 2012 cohort (fig. 2). Thus, by the second year of the staged examination system's implementation, there was evidence for greater improvement in the ITE scores from the CB to CA-1 years.

Table 3.

Results from Mixed-effects Model for CB to CA-1 Change


In the analysis of these covariates, women residents scored significantly lower than men residents in the CB year, but both sexes improved similarly from the CB to CA-1 years (table 3). Compared with AMGs, IMGs scored higher in the CB year but showed less improvement in scores from the CB to CA-1 years.

In the analysis comparing those who did and did not take the ITE during the CB year, there was a significant interaction effect of CB-year ITE status and cohort on CA-1 ITE scores (P = 0.01, fig. 3). For the 2012 cohort, taking the ITE in the CB year did not affect CA-1 ITE scores (P = 0.16). However, for the 2013 and 2014 cohorts, taking the ITE in the CB year was associated with CA-1 scores that were higher by 0.8 and 1.7 points, respectively (P = 0.007 and P < 0.0001). This suggests that the improved ITE performance during the CA-1 year observed in the 2014 cohort could be attributed to those residents who took the ITE during their CB year.

Fig. 3.

CA-1–year ITE scores for the 2012–2014 cohorts who did and did not take CB-year ITE. Error bars represent 95% CI. **P < 0.01 and ***P < 0.001 for pairwise comparisons. CA = clinical anesthesia year; CB = clinical base year; ITE = In-Training Examination.


Discussion

The main finding of this study is that the first cohort of anesthesiology residents in the staged examination system (i.e., the 2013 cohort) demonstrated accelerated knowledge acquisition from the CA-1 year to the CA-2 year, in fact achieving the performance of CA-3 residents of the last cohort in the traditional system by their CA-2 year.

ITEs are commonly employed to assess the knowledge of residency trainees. A number of studies show that ITE scores are significantly correlated with performance on subsequent written certification examinations in anesthesiology,9,12  internal medicine,5,8  pediatrics,2  family practice,7  surgery,4  orthopedics,6  and oral and maxillofacial surgery.3  In particular, performance on the ABA ITE administered in the CA-1 year is a significant and moderately strong predictor of performance on the ABA written certification examination, and of success in becoming ABA board certified (i.e., passing both the written and oral examinations) in a timely fashion after completing residency.9  The ABA ITE is also administered in Canadian anesthesiology residency programs and has been shown to predict performance on the Royal College of Physicians and Surgeons of Canada certification examinations.12  We recently demonstrated that the risk of adverse actions against the medical licenses of anesthesiologists is lower in those who are board certified.13  Thus, efforts to improve the timely knowledge acquisition of anesthesiology residents, as assessed by ITE performance, may serve to improve their subsequent medical practice. Additionally, with certification now being time-limited, it is important to incorporate study habits into residency training that will be maintained throughout a career so that knowledge can remain current.

To this end, the ABA changed the written examination component of its primary certification process so that a separate examination of basic knowledge related to anesthesiology is now incorporated within the training period. Rationales included incentivizing active participation in learning activities from the onset of training and providing residency program directors with a robust assessment of resident knowledge at a sufficiently early stage of training that intervention would be possible if the performance was poor. In addition, residency programs might adjust their curricula to better help their residents attain the requisite knowledge. Our findings provide evidence that, compared with previous cohorts in the traditional examination system, the first cohort participating in the staged examination system did substantially improve their rate of knowledge acquisition from their CA-1 to CA-2 years.

Given the timing of the BASIC Examination (administered near the end of the CA-1 year in June/July), we anticipated that its introduction would primarily affect the change in ITE scores from the CA-1 to CA-2 years as residents prepared for the BASIC Examination (later in the CA-1 year, likely after the ITE administration in the CA-1 year). However, there was some evidence that it was also associated with change from the CB to CA-1 years, albeit only for the second cohort to enter the staged system (i.e., the 2014 cohort). Although the greater change from CB to CA-1 years (by 3.2 points) seen for the 2014 cohort may be partly related to their lower starting level (by 1.2 points) in the CB year, the higher level that they reached in the CA-1 year suggests an additional change in learning trajectory during this period. This may be indicative of preparation for the BASIC Examination before the ITE administration in the CA-1 year (early in the CA-1 year or even later in the CB year), resulting in an improved CA-1 ITE performance. It appears that since the introduction of the staged examination system, taking the ITE in the CB year is associated with improved ITE performance in the CA-1 year. This could reflect an increased appreciation for the importance of examination preparation among residents who are exposed to the ITE at an earlier stage of training, or an increase in emphasis on education in anesthesia-related topics in the CB year. Alternatively, it may be related to other characteristics of those anesthesiology residencies that offer the ITE during the CB year, or the characteristics of residents enrolling in such programs.

The analysis included two covariates known on the basis of previous work to affect performance on written certification examinations. Sex differences in performance were present at both the CB and CA-1 levels. These are consistent with previous findings that men performed better than women as demonstrated for ABA written examinations including the ITE,9  the ABA Part 1 (written) Examination,9  and the Maintenance of Certification in Anesthesiology Program examination.11  The origin of these differences is not known. Nonetheless, annual changes in scores (i.e., slope) did not differ between men and women. The performance of IMGs was better than that of AMGs at both CB and CA-1 levels, which may reflect the extensive process that IMGs must follow to enter residency training programs in the United States,14  and the time and effort that they have devoted to study to gain acceptance into a residency. Their slope over both time periods studied was significantly lower compared with AMGs, which may be because of difficulties in adapting to a new culture and a new healthcare environment, or may simply reflect their better performance at baseline.

This analysis has several limitations. Of perhaps greatest importance, changes in ITE performance among cohorts are interpreted as being related to the transition to the staged examination system, but could be related to other factors. For example, the introduction of the BASIC Examination occurred at the same time as the introduction of milestone-based anesthesiology resident evaluations. Both could drive changes in training programs' curricula, focusing teaching on the accomplishment of milestones and on the content outline that the ITE is based upon. Thus, although the BASIC Examination could change residents' study habits and serve as a useful tool to identify low-performing residents for necessary intervention earlier in their training, we cannot exclude a contribution to the results from other changes occurring in a similar time frame.

In addition, the method used to equate examination scores across administration years assumes that AMG CA-3 performance remains stable, such that the scores at lower levels of training (CB, CA-1, and CA-2) must be interpreted relative to CA-3 resident performance. Thus, it is not possible to compare the knowledge achieved near the end of training among the cohorts. Nor is it possible to extend this analysis to future years, because the 2016 ITE administration includes the first staged examination CA-3 resident cohort, who may have had a greater fund of knowledge than previous cohorts, so the assumption of equivalent groups is not likely to hold. Thus, our conclusions are based on a limited dataset. Although the ABA started providing percent correct scores for BASIC and ADVANCED items of the ITE in 2014, it is not valid to compare these percent correct scores across years, because they were not equated and item difficulty levels were not taken into consideration. Furthermore, the analysis of the transition from CB to CA-1 years is potentially confounded by differences between residents who do and do not have the opportunity to take the ITE in their CB year, although any such differences were likely stable over the period of study. Finally, future research will be required to determine whether the accelerated trajectory of ITE performance translates to greater knowledge and/or enhanced clinical performance at the completion of training.

In conclusion, this study supports the hypothesis that the introduction of the BASIC Examination is associated with accelerated knowledge acquisition in residency training, and provides evidence for the value of the new staged examination system in promoting desired educational outcomes of anesthesiology training.

The authors thank Alex Macario, M.D., M.B.A. (Stanford University, Stanford, California; Director of the American Board of Anesthesiology), Mohammed M. Minhaj, M.D., M.B.A. (University of Chicago, Chicago, Illinois), and Andrew J. Patterson, M.D., Ph.D. (University of Nebraska Medical Center, Omaha, Nebraska; Director of the American Board of Anesthesiology), for their comments on an earlier draft of this work.

Support was provided solely from institutional and/or departmental sources.

Drs. Harman, Sun, Wang, and Zhou are staff members of the American Board of Anesthesiology (ABA); Drs. Keegan and Warner are ABA Directors and receive a stipend for their participation in ABA activities; Dr. Lien is a former ABA Director.

References

1. The American Board of Anesthesiology In-Training Examination blueprint. Available at: http://www.theaba.org/PDFs/ITE-Exam/ITE-Exam-Blueprint. Accessed May 10, 2017
2. Althouse LA, McGuinness GA: The in-training examination: An analysis of its predictive value on performance on the general pediatrics certification examination. J Pediatr 2008; 153:425–8
3. Ellis E III, Haug RH: A comparison of performance on the OMSITE and ABOMS written qualifying examination. Oral and Maxillofacial Surgery In-Training Examination. American Board of Oral and Maxillofacial Surgery. J Oral Maxillofac Surg 2000; 58:1401–6
4. Garvin PJ, Kaminski DL: Significance of the in-training examination in a surgical residency program. Surgery 1984; 96:109–13
5. Grossman RS, Fincher RM, Layne RD, Seelig CB, Berkowitz LR, Levine MA: Validity of the in-training examination for predicting American Board of Internal Medicine certifying examination scores. J Gen Intern Med 1992; 7:63–7
6. Klein GR, Austin MS, Randolph S, Sharkey PF, Hilibrand AS: Passing the Boards: Can USMLE and Orthopaedic In-Training Examination scores predict passage of the ABOS Part-I examination? J Bone Joint Surg Am 2004; 86:1092–5
7. Leigh TM, Johnson TP, Pisacano NJ: Predictive validity of the American Board of Family Practice In-Training Examination. Acad Med 1990; 65:454–7
8. Waxman H, Braunstein G, Dantzker D, Goldberg S, Lefrak S, Lichstein E, Ratzan K, Schiffman F: Performance on the internal medicine second-year residency in-training examination predicts the outcome of the ABIM certifying examination. J Gen Intern Med 1994; 9:692–4
9. McClintock JC, Gravlee GP: Predicting success on the certification examinations of the American Board of Anesthesiology. Anesthesiology 2010; 112:212–9
10. Kolen MJ, Brennan RL: Test Equating, Scaling, and Linking: Methods and Practices. New York, Springer Science & Business Media, 2014
11. Sun H, Culley DJ, Lien CA, Kitchener DL, Harman AE, Warner DO: Predictors of performance on the Maintenance of Certification in Anesthesiology Program® (MOCA®) examination. J Clin Anesth 2015; 27:1–6
12. Kearney RA, Sullivan P, Skakun E: Performance on ABA-ASA in-training examination predicts success for RCPSC certification. American Board of Anesthesiology–American Society of Anesthesiologists. Royal College of Physicians and Surgeons of Canada. Can J Anaesth 2000; 47:914–8
13. Zhou Y, Sun H, Culley DJ, Young A, Harman AE, Warner DO: Effectiveness of written and oral specialty certification examinations to predict actions against the medical licenses of anesthesiologists. Anesthesiology 2017; 126:1171–9
14. Whelan GP, Gary NE, Kostis J, Boulet JR, Hallock JA: The changing pool of international medical graduates seeking certification training in US graduate medical education programs. JAMA 2002; 288:1079–84