Abstract
Established models for the assessment and maintenance of competency in anesthesiology may not be adequate for anesthesiologists wishing to reenter practice. The authors describe a program developed at their institution, incorporating simulation-based education, to help determine competency in licensed and previously licensed anesthesiologists before their return to practice.
The authors have used simulation for assessment and retraining at their institution since 2002. Physicians evaluated by the authors’ center undergo an adaptable 2-day simulation-based assessment conducted by two board-certified anesthesiologists. A minimum of three cases is presented on each day, with specific core competencies assessed, and participants complete a standard Clinical Anesthesia Year 3 level anesthesia knowledge test. Participants are debriefed extensively, and, where indicated, retraining regimens consisting of a combination of simulation and operating-room observership are designed.
Twenty anesthesiologists were referred to the authors’ institution between 2002 and 2012. Fourteen participants (70%) were in active clinical practice 1 yr after participation in the authors’ program: five (25%) were in supervised positions, and nine (45%) had resumed independent clinical practice. The reasons given by participants not in practice were personal (one participant) and medico-legal (three participants); two participants were lost to follow-up. Two of the 14 physicians formally assessed in the authors’ program were deemed likely unfit for safe return to practice, irrespective of further training. These physicians were unavailable for contact 1 yr after assessment.
Anesthesiologists seeking to return to active clinical status are a heterogeneous group. The simulated environment provides an effective means by which to assess baseline competency and also a way to retrain physicians.
Methods to assess anesthesiologists for reentry into practice after a prolonged absence have received little attention, especially methods utilizing simulation.
Over a 10-yr period, 20 anesthesiologists referred to one institution after a hiatus from practice underwent a simulation-based reentry program.
The group was heterogeneous, and simulation aided the assessors in making recommendations.
THE acquisition and maintenance of competency in anesthesiology are essential for safe and independent practice. Although the current model for certification and assessment of competence through the American Board of Anesthesiology examination system,1 the Maintenance of Certification in Anesthesiology programs, and participation in continuing medical education may be adequate for most,2–4 not all anesthesiologists can be adequately assessed or educated using these systems. Those seeking to reenter a general practice of clinical anesthesia often have significant voluntary (e.g., early retirement, change of career) or involuntary (e.g., disability) hiatuses from clinical practice, limited scopes of practice (e.g., exclusive pain or sedation practices), or medico-legal problems (e.g., suspended licensure due to incompetence, impairment, or financial improprieties). As such, anesthesiologists may need training5 and assessment that are not adequately provided by available programs before reentry into clinical practice. In addition, these traditional channels may be insufficient as a demonstration of competence to licensing bodies or employers who may look critically upon such candidates.
The American Medical Association Council on Medical Education published a report in 2008 highlighting the need for physician reentry programs and providing principles for their design.‖ They recommended that a physician absent from practice for 2 or more years, irrespective of the reason, should participate in a formal reentry program capable of assessing essential clinical competencies and tailoring remediation to his or her specific needs. At present, several programs have arisen to address this issue in the medical community at large.6–9 Unfortunately, few programs exist to meet the unique needs of the returning anesthesiologist. Many of the skills important to anesthesiologists (e.g., vigilance and crisis management) are vulnerable to deterioration10 and may be difficult to reacquire or thoroughly assess with available observership and/or standardized patient encounters.# Although these methods may facilitate acquisition or assessment of some skills useful to the general practitioner or the anesthesiologist who works solely in a preoperative clinic (e.g., communication, interpersonal skills), these are but a small part of the competencies needed by the practicing anesthesiologist. In addition, it is unclear whether programs that enroll physicians who left practice voluntarily are broadly applicable to physicians seeking relicensure or those presenting with medico-legal barriers to clinical reentry.
We have conducted an anesthesiology reentry program consisting of multimodality simulation-based assessment and retraining for more than 10 yr and have reported our experience in the past.5,11–14 To our knowledge, our program is unique in its extensive use of simulation, particularly manikin-based simulation, as its central component for both retraining and assessment of anesthesiologists, especially for licensure purposes. Herein, we report our experience with the first 20 participants in our program and discuss the feasibility and outcomes of simulation-based programs for anesthesiologist reentry.
Materials and Methods
This study was granted an exemption from human subjects review, including the requirement for written informed consent, by the Institutional Review Board at the Icahn School of Medicine at Mount Sinai (New York, New York) before the reporting of data.
The Programs
Assessment and retraining programs are conducted at The Mount Sinai Human Education, Emulation, and Evaluation Lab for Patient Safety and Professional Study Center. This simulation-based educational facility is located in the Department of Anesthesiology at the Icahn School of Medicine at Mount Sinai. Physicians are remanded or self-referred and present with heterogeneous needs before clinical reentry. The application and referral process typically begins with an initial contact with the department and then the program director. Although the Federation of State Medical Boards lists our program in their postlicensure assessment directory, we do not actively solicit or advertise the program, and most referrals (especially in the last 5 yr) come directly from the New York State Office of Professional Medical Conduct. After the initial contact, the director, along with the assistant director, conducts a telephonic or face-to-face interview with the participant, personal legal counsel, and/or the licensing body seeking to enroll the participant, as applicable. They then determine whether the program can satisfy the needs of the participant or referring entity. Program fees are determined by, and dependent upon, the services requested and rendered. The program’s fee schedule was established to cover expenses (staffing, administrative costs, equipment) with a modest profit, which is applied toward simulator warranty fees,13 and is paid directly to the Department of Anesthesiology at the Icahn School of Medicine at Mount Sinai. The enrollee is generally responsible for payment, although on occasion a referring body has accepted payment responsibility.
There are three possible pathways in our reentry program, depending on the specific needs of the participant: (1) assessment only, (2) retraining only, or (3) both assessment and retraining in various combinations (i.e., retraining then assessment, assessment then retraining, or assessment then retraining then reassessment). The assessment-only pathway, as described in detail below, is generally reserved for those participants who are referred to our program by state medical boards or institutions after an adverse event. The retraining-only pathway is for participants whose deficits have already been identified (by themselves or by others) and consists exclusively of simulation-based retraining in conjunction with observership in our live operating rooms at The Mount Sinai Medical Center. Participants in this pathway are sometimes referred by outside entities, although in general, those seeking the retraining-only track are voluntary participants who desire to refresh their skills before returning to practice or to expand their scope of practice. The combined pathway, the most common, is a comprehensive track consisting consecutively of a formal assessment and a retraining program, which is influenced not only by the details of the referral, but also by the results of our assessment. Occasionally, and with increasing frequency, a reassessment is conducted at the conclusion of the retraining portion of the combined program to demonstrate improved performance in areas of identified deficits.
Figure 1 shows the development of an individualized program. The process involves an initial needs assessment, a feasibility assessment, and finally an offer to the referring body regarding the pricing, length, and characteristics of the program based on the individual specifics of the case. Self-referring physicians have input into their program development, whereas those physicians remanded to participate do not. If requested, a remediation prescription is formulated and provided; otherwise, we report only objective performance data, as observed during the simulation-based assessment (i.e., unless asked, we do not provide a de facto statement of competence, nor do we provide suggestions as to how competence might be attained).
The Assessment Program
Each participant receives a written set of assessment instructions before attending, which includes a detailed description of the assessment format, the core–competency-based criteria used for evaluation, and a detailed schedule of events. The participant also receives a contract outlining program and participant responsibilities, an agreement to have all simulation sessions recorded digitally (audio and video), and all applicable waivers of liability, designed by the legal departments of the Mount Sinai Medical Center and the Icahn School of Medicine at Mount Sinai. All documents are read, signed, and returned before the assessment. Enrollees remanded to participate by outside institutions or licensing bodies are also required to sign waivers granting permission to release written evaluations and video recordings of their sessions to the referring entities.
A 2-day assessment is used, preceded by a half-day introductory orientation course. The 4-h orientation allows participants to meet the faculty and to become accustomed to the simulated environment. Assessment instructions are reviewed verbally with the participants, who learn the strengths and limitations of the technology (the patient simulators as well as the anesthesia equipment) and are taught how to interface with the simulator. They are also instructed as to the “ground rules” of the program (e.g., to interact with the simulator as one would with a real patient, to use any and all equipment available, and to take nothing for granted as a “glitch” in the technology). Each participant conducts at least one brief general anesthetic in which no clinical problems occur (to help build comfort and familiarity with the environment) and is provided the opportunity to do more cases, if desired. During the orientation, participants are also allowed to briefly use other simulators they may encounter during the assessment, such as a virtual fiberoptic bronchoscope, a neuraxial anesthesia part-task trainer, a central-line part-task trainer, and an IV arm part-task trainer.
The 2-day evaluation is conducted in the presence of two board-certified anesthesiologist raters with extensive experience in simulation-based assessment (see Rater Training below), who observe and evaluate each case in real time, and who also view and evaluate video recordings of the day’s events immediately on conclusion of the assessment. Two other support staff members serve as confederates, play-acting various roles during the scenarios (e.g., nurse, surgeon, family member) and providing additional feedback to the evaluators about the participant’s performance. A standardized patient actor plays the role of the patient for each preoperative visit before the participant’s interactions with the manikin. An additional member of the staff is always available to assist with technical issues (e.g., unfamiliarity with the monitor or anesthesia machine, difficulties with the simulator’s drug recognition system). Three to four simulation-based patient encounters are presented each day.
Although a unique assessment is ultimately developed for each participant, essential topics evaluated over the 2 days always include: (1) preoperative assessment and optimization; (2) anesthetic plan and conduct, including induction and emergence; (3) management of perioperative events, including hypoxia, hypercarbia, hypotension, hypertension, and arrhythmias; (4) postoperative care; (5) pain management; (6) demonstration of crisis management skills in the operating room and postanesthesia care unit; (7) working knowledge of and the ability to apply current American Heart Association Advanced Cardiovascular Life Support (ACLS) protocols; and (8) working knowledge of and the ability to perform the American Society of Anesthesiologists difficult-airway algorithm. Also tested and evaluated during each scenario are key core competencies of the Accreditation Council for Graduate Medical Education and American Board of Medical Specialties (i.e., patient care, medical knowledge, interpersonal and communication skills, professionalism, and systems-based practice).
Three scenarios are presented on day 1. These are standardized scenarios, which are presented unaltered (in terms of the chronology and major events) to all participants, and are intended to establish a baseline level of overall competency. The scenarios presented on day 2 are chosen and altered according to the participant’s performance on day 1. Procedures that were cancelled or delayed for medical optimization on day 1 are rescheduled and conducted on day 2. For example, a patient presenting for an elective case whose medical condition (e.g., coronary artery disease) was not optimized, and who was appropriately cancelled by the participant on day 1, will reappear on day 2 as an emergency case (e.g., the same patient presenting with a hip fracture from a fall). Specific deficits in knowledge, judgment, and skill identified on day 1 are confirmed with similar clinical situations on day 2. Failures to follow published practice guidelines or algorithms (e.g., American Society of Anesthesiologists difficult-airway or American Heart Association ACLS) on day 1 are reassessed on day 2 to ensure that these were real deficiencies and not due to hypervigilance, confusion, or some other difficulty attributable to the simulated environment.
Rater Training
The simulation program director serves as one of the raters for every assessment performed. The director (Dr. Levine) has also served as an anesthesiology residency program director for the past 16 yr at the Icahn School of Medicine at Mount Sinai and has presided over three consecutive 5-yr Accreditation Council for Graduate Medical Education accreditations. Rater requirements for our program are stringent in light of the high-stakes nature of these assessments, and all potential raters are vetted and trained by the program director (table 1).
Each rater must have at least 5 yr of experience using simulation to educate and assess resident physicians under the supervision of the program director and other simulation faculty. This requires either having been trained as a simulation fellow in a formal 1-yr simulation fellowship (as is the case for two of our five faculty members) or having accumulated at least 6 full months of dedicated educational time devoted to simulation-based education and assessment for anesthesiology residents and medical students. In addition, new raters are required to rate resident performance (not used as a part of the residents’ official training record, but used for rater experience and development). This is accomplished through participation in the department’s year-long simulation curriculum for junior and senior residents.
Rating Tools
For each scenario, several technical and nontechnical skills are chosen a priori for focused evaluation (mostly informed by the areas of deficiency reported by the referring entity). Two rating tools, the Anesthetists’ Nontechnical Skills (ANTS) system and the University of Toronto technical rating scale, are used for the assessment. The ANTS system is used for each scenario to detail performance in four nontechnical domains: task management, teamwork, situational awareness, and decision-making.15,16 Each domain is given a score of 1–4, with 1 indicating poor and 4 indicating good performance. In an effort to improve the granularity of the ANTS system, a half-point system was implemented, as described elsewhere.17 For the assessment of technical skills in each scenario, the University of Toronto rating scale, described elsewhere,18,19 is used. This is a 1–5 scale, where 1 indicates very poor and 5 indicates superior technical performance. Interrater reliability has varied over time in our program, depending on the scenarios used, but has generally been modest to very good for the ANTS system. To establish rater reliability on cases scored with the ANTS system, intraclass correlation coefficients (two-way mixed, absolute agreement) were calculated for all the day-1 scenarios (scenarios 1–3), using data derived from 30 Clinical Anesthesia Year 1 and 30 Clinical Anesthesia Year 3 residents participating in these cases as part of departmental simulation exercises. We used 0.70 as the threshold for rater reliability on each coded case and calculated reliability at the measure level, not the subscale level. The average ANTS intraclass correlation coefficient across all cases, using the previous resident data, was 0.72, with reliability coefficients ranging from 0.70 to 0.82. These interrater reliability data were determined during other simulation-based encounters (with residents), and not during the reentry encounters, although identical simulation-based cases were used as they appeared on day 1 of the assessment. Interrater reliability of our group has not been determined for the Toronto system as it has for the ANTS system; this is due to resource constraints, the relative simplicity of the scale, and a decreased focus on psychomotor skills relative to nontechnical skills.
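For readers wishing to reproduce this type of reliability analysis, the fragment below is a minimal sketch, not drawn from the program’s actual analysis code, of how an absolute-agreement intraclass correlation can be computed for a two-rater design. Only the method described above (a two-way model with absolute agreement, judged against a 0.70 threshold, for roughly 60 rated resident encounters) comes from the text; the function name and all scores in the demonstration are hypothetical.

```python
import numpy as np

def icc_absolute_agreement(scores: np.ndarray) -> tuple[float, float]:
    """Return (single-rater, averaged-rater) absolute-agreement ICCs
    for an (n encounters) x (k raters) matrix of ratings."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # mean score per rated encounter
    col_means = scores.mean(axis=0)   # mean score per rater

    # Two-way ANOVA mean squares (rows = encounters, columns = raters)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    # McGraw & Wong ICC(A,1): agreement of a single rater; the formula is
    # identical under the two-way random and two-way mixed models.
    icc_single = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    # ICC(A,k): reliability of the k-rater average
    icc_average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return icc_single, icc_average

# Hypothetical demonstration: 60 encounters (e.g., 30 CA-1 and 30 CA-3
# residents each running one scenario) scored independently by 2 raters.
rng = np.random.default_rng(0)
true_skill = rng.normal(3.0, 0.5, size=60)            # synthetic, for illustration
scores = np.column_stack([true_skill + rng.normal(0.0, 0.3, size=60)
                          for _ in range(2)])
single, average = icc_absolute_agreement(scores)
print(f"ICC(A,1) = {single:.2f}; ICC(A,k) = {average:.2f}; "
      f"meets 0.70 threshold: {single >= 0.70}")
```

Because the absolute-agreement formulas coincide under the two-way random and two-way mixed models, the sketch does not distinguish between the two.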
A global rating scale is also used for each scenario overall, and for each core competency of the Accreditation Council for Graduate Medical Education, on a 1–5 scale, where 1 implies poor and 5 implies excellent performance. The average global rating intraclass correlation coefficient across all cases was 0.78, with reliability coefficients ranging from 0.66 to 0.84, when using the resident data. Qualitative performance assessment is provided by both raters in the form of narratives, which support the global assessment results and summarize the findings in plain language that can be understood by a referring body. After completion of their individual reports, the raters discuss the participant’s performance and together arrive at the final overall narrative summary for each scenario. They are not blinded to each other’s final reports, and in fact, they review one another’s findings to discuss any discrepancies and to generate the final performance report for a given scenario as well as for the entire assessment. The overall standard against which participant performance is evaluated is senior-level anesthesiology resident performance at our center. That is, the minimum requirement for our group to say a participant practiced “within the standard of care in the simulated environment” is for that participant to demonstrate the knowledge and skills expected of a Clinical Anesthesia Year 3 resident. As each scenario used for the assessment has been used throughout the continuum of our residency program, we benchmark performance against the typical scores we find for Clinical Anesthesia Year 1, 2, and 3 level residents, respectively (internal data, not reported elsewhere).
Objective medical knowledge is assessed using the Anesthesiology Knowledge Tests (AKT-6 and AKT-24), which are administered after completion of the simulator sessions on days 1 and 2 of the assessment. Our program has access to the American Heart Association ACLS examinations (our staff includes certified ACLS instructors), and although participants are not offered ACLS recertification, the 50-question multiple-choice examination is administered, if requested by the referring body, and the results are used to determine and objectively document the participant’s knowledge of current ACLS protocols.
An actual assessment program schedule is presented as an example in table 2. This participant (participant Z) was referred to our program by his state licensing board after a series of sentinel events that necessitated an investigation of his vigilance. For each scenario, the basic stem has been provided, although specific details have been excluded to maintain participant confidentiality and also so that prospective enrollees in our program will not have advance knowledge of our assessment scenarios. Although the scenarios described on day 1 of the assessment are typical of those used for all enrollees, the day-2 scenarios, as previously mentioned, are influenced by the results of day 1. In each scenario, certain competencies of concern to be investigated are determined a priori and are subject to the ANTS and Toronto technical scales. However, various other competencies that recur in each scenario (e.g., airway management, communication) are expected to be captured by the global assessment and narrative notes, to give a holistic picture of how a participant fared in a scenario and to reiterate these important competencies.
Assessment Reports
All assessment scenarios are recorded (video and audio recordings) and reviewed by the raters, who use the standardized rating scales mentioned above as well as narrative reports (qualitative data) to generate a final report. The final written report has three key components: a brief description of the assessment tools used (provided initially as an addendum to the requesting body); a detailed description of each case, including scenario goals, objectives, events, and assessment results; and a final list of identified deficiencies and/or a prescription for remediation (if requested by the referring entity). A final summary statement is also composed, documenting whether the participant “did or did not practice within the standards of care in the simulated environment” as observed by the raters. No statement as to the participant’s clinical competency is made, per se. When necessary and appropriate, recommendations for follow-up training and remediation (e.g., ACLS training and certification, participation in continuing medical education, simulation-based retraining, repeated clinical residency) are generated. However, as stated elsewhere, it is not universal that a referring body requests a formal recommendation for remediation, and in many cases, we provide only an objective report of our findings.
The Retraining Program
A unique retraining program is designed for each participant and varies by participant needs and the results of the initial assessment (if one was conducted). Frequently, retraining includes both simulation and operating-room observation. Each day consists of one-on-one simulation-based instruction for 2–4 h in the morning, followed by 3–4 h of operating-room observation during the afternoon session. The length of the retraining program varies from 1 to 6 weeks. During each week of study, participants receive on average 15 h of individualized simulation (consisting of 10–15 simulated scenarios per week) and 15–20 h of operating-room observation. The retraining curriculum includes the following essential topics, irrespective of retraining program length or basis of referral: anesthesia induction and emergence, hypoxia, hypotension, hypertension, arrhythmias, and crisis management. Specific topics are emphasized as deemed appropriate (e.g., fiberoptic skills, crisis management, professionalism, and communication). If a physician was referred for medico-legal problems revolving around a specific issue, or if deficiencies were identified during the simulation-based assessment, the curriculum is designed to emphasize these topics. Scenario repetition is used throughout the curriculum to focus on specific topics of concern. Airway management, for example, is revisited frequently throughout the curriculum (e.g., an unanticipated difficult airway at the start of a scenario that was ultimately designed to reinforce the management of hypotension). This allows the staff not only to reinforce targeted competencies, but also to repeatedly evaluate participants with respect to practice-based learning and improvement.
For those participants who were self-referred, we recently added self-evaluation exercises as a part of our program, to strengthen the emphasis on improvement as well as a way to gauge the participants’ insight into their particular deficits. Participants are asked to furnish the program director with informal daily reports regarding the cases they encountered, their opinion on their own performance, and areas of improvement they have identified. These are corroborated with ongoing qualitative evaluations of performance by the simulation facilitators.
Follow-up
Former participants were contacted via regular and electronic mail and asked to report their current employment and licensure status. Additionally, an online search was conducted to determine whether any public action had been taken against any participant since completion of the program.
Results
The study group characteristics and the details of the assessment and retraining programs are outlined in tables 3 and 4. All participants were referred between 2000 and 2011 and came from six different states in the United States. Sixteen of the programs occurred in the past 5 yr.
Fifteen of the 20 program participants (75%) responded to a follow-up survey 1 yr after completing their simulation-based reentry program (fig. 2). Eleven of these respondents (73%) had successfully returned to unrestricted practice at that time (determined by their own report or by the report of their current employer). Four of the 15 respondents were not in practice as of 1 yr after our program. The reasons given were “personal” for one participant and medico-legal for the remaining three participants, who were all still actively seeking reentry. To date, all 11 practicing respondents have been doing so without public action against their licenses (range of practice time, 1 to 6 yr after the reentry program).
In total, 14 of the 20 participants were formally assessed. Of these, only five participants came with mandates from referring bodies that requested a formal recommendation for remediation in their initial agreement with our center. Three more participants requested such recommendations in subsequent communications after the formal assessment was completed. For two of the assessed physicians, the opinion of the evaluating faculty was that their deficits were significant enough to preclude likely improvement, even with extensive retraining (at our center or elsewhere). One of these participants continues to practice, but his license has been restricted by the state licensing board, barring him from resuming the full practice of anesthesiology (i.e., he may perform only medical histories and physicals for insurance purposes). In fact, this participant disputed the validity of our assessment; however, his state medical board ruled in support of our report at a subsequent hearing, finding that the simulation-based assessment corroborated the results of their investigation and of the other assessment tools they used (e.g., psychological assessment and medical knowledge tests not used in our program). The other participant’s license was completely revoked by a state medical board; that participant did not respond to our follow-up survey.
Twelve of the 14 participants who were formally assessed were described as demonstrating the ability to practice “within the standards of care in the simulated environment” during the majority of their assessed scenarios. However, deficits were identified that were deemed amenable to, and in need of, remediation (e.g., ACLS skills). One of these participants was prevented from returning to practice by a legal action, despite the supportive results of his assessment. Of the five participants who did not return the follow-up survey, three were found to be in active current practice via an internet search.
Discussion
The American Medical Association recommends a means of assessing clinical competency and also recommends tailor-made remediation for physicians seeking reentry into clinical practice. Simulation-based training and assessment are increasingly used for certification and licensure,2,3,14,20 and in our program, they have proven a viable option for anesthesiologists seeking reentry into clinical practice. In this report, 73% of respondents had returned to successful clinical practice when surveyed 1 yr after participation in our primarily manikin-based reentry program. When including nonrespondents, 14 of the 20 participants (70%) were found to be in active clinical practice when surveyed at 1 yr. Only two physicians were found to have such profound deficits that they were not considered candidates for retraining or remediation by our program (i.e., we deemed them likely unfit to safely return to practice, irrespective of further training). In both cases, the participants’ state medical boards corroborated our findings with data from their own investigations.
The assessment and maintenance of up-to-date clinical competency are important tasks for physician reentry. There are currently a number of established programs to accomplish this, though they are by no means uniform.** The Center for Personalized Education for Physicians (Denver, Colorado) is a program focusing on physicians of any specialty who voluntarily left practice and are free of licensing board discipline. Participants found to demonstrate educational needs warranting structured reeducation are discharged to long-term remediation programs with a physician mentor.†† Similarly, the Physician Assessment and Clinical Education program (University of California, San Diego) provides services for physicians of all specialties who are perceived to be having problems with clinical care or who are referred to the program by the California medical board. It consists of a 2-day assessment followed by a didactic program and a week of clinical observation.‡‡ The Advanced Specialty Training Program (University of California, Los Angeles) focused on upgrading the skills of anesthesiologists who had been out of practice, including those under disciplinary action. This program incorporated some simulator technology (1 to 2 half-day training sessions emphasizing crisis management) and was structured like a “mini residency,” in that participants managed live patients under supervision.8 Unfortunately, the Advanced Specialty Training Program is no longer active at the time of this writing.
Structured reentry programs specifically for anesthesiologists are in short supply. The ideal assessment program should be efficient, reliable, and reproducible, and should evaluate the core competencies of an anesthesiologist as delineated by the American Board of Anesthesiology. The process should ideally be standardized to ensure consistency between participants, yet adaptable enough to thoroughly address an individual’s suspected areas of weakness. Furthermore, the process should be flexible enough to verify findings that emerge during the evaluation process itself. The program must be equally effective in assessing participants with gross deficiencies and those with minor deficits, and must impose the least possible burden of medical and legal risk on patients, assessors, and participants.
Live-patient interaction and care, while potentially the highest-fidelity assessment model, presents distinct challenges. Granting clinical privileges requires current licensure and hospital appointments. This is an often insurmountable obstacle for physicians whose license is suspended or in jeopardy, and it presents a logistical challenge for those who simply do not possess a license in the state of the retraining program (indeed, we have enrolled physicians from five states outside our own). In addition, the efficacy of live-patient models for assessment and retraining relies on the chance that the full scope of crises and conditions necessary for a complete evaluation of the participant’s skillset will arise, when in fact the incidence of these conditions may be low and their occurrence unpredictable.
The use of evaluator-directed simulation obviates these issues, and in the case of an adverse outcome, the “patient” can simply be reset and no physical or legal harm is accrued. High-fidelity patient simulation satisfies many of the American Medical Association requirements for reentry programs. Using the simulated environment, we have been able to measure an essential skillset patterned after the core clinical competencies of anesthesiologists. A major component of this skillset is the ability to problem-solve in sometimes chaotic scenarios under time pressure.18,21,22 The flexibility of the simulated environment allows for thorough characterization of a participant’s unique areas of weakness, whether determined a priori, or encountered during an assessment. Standardized cases may elicit general information regarding a participant’s fund of knowledge and skills;10,23,24 however, physicians with legal problems may require intensive assessment, for which there are few standardized rating tools. For these reasons, we have developed a hybrid model of assessment, incorporating standardized and nonstandardized rating tools and scenarios along with open, narrative, and global evaluations. This approach has allowed us to provide rich evaluation and to tailor our reentry programs to the participants’ needs.
The impact of these data must be tempered by several limitations. Our center’s experience performing simulation-based assessment and retraining may not be reproducible elsewhere, given the costly resources required and the faculty or departmental commitment to this endeavor. Not all departments may have the resources or the interest in comprehensive physician assessment and retraining, nor may they be willing to assume legal responsibility for supporting or disproving claims of competency. There certainly exists the potential for legal risk to centers engaging in such programs, but we have not yet experienced any difficulties. Our legal department has developed a document releasing our facility from liability for issues the participant may have in future practice, inasmuch as we do not provide evidence of clinical competence, but rather a description of performance in the simulated clinical environment.13
Another potential weakness of this work is the incomplete uniformity and standardization of our assessment and retraining modality between participants. However, we believe that the assessment of anesthesiologists (especially those under legal action) requires global qualitative assessment in addition to the standardized assessments that we do incorporate. This is especially important when a state medical board mandates specific areas to be assessed, as is often the case. Few validated checklists exist to assess many, if not most, of these skills to the degree that suits their purposes (e.g., in the case of participant Z, six of seven cases presented had at least one element testing vigilance). The measures we have created (shaped largely by the preferences of referring bodies) adapt validated measures such as the ANTS scale as much as possible. Although these “hybrid” measures are quite useful for our purposes, we realize that many in the simulation community rely on published checklists for specific skills (e.g., airway management), and we therefore acknowledge this as a weakness. Also, as our interrater data were derived from a different cohort of subjects (residents), it is difficult to say whether the rater data reported herein apply to this particular study cohort (retrainees), although the raters and the cases rated were identical.
We define “successful return to practice” as a participant practicing with an unrestricted license 1 yr after completion of our program. We do not imply that these physicians are immune to future difficulties; should an adverse event ultimately occur, their reentry may, at that point, be considered a failure. In addition, the small number of participants and the lack of replies from 5 of the 20 participants (although only two were completely lost to follow-up) limit the statistical interpretation of our results.
Simulation-based assessment and retraining is a viable means by which anesthesiologists may safely reenter the workforce. Given the charge by society and governing bodies to make medicine safer, it is likely that a growing number of anesthesiologists will be the focus of institutional and state investigations. Additionally, the economic downturn and physician shortages may mean that more anesthesiologists will seek reentry,25 and those with prolonged clinical hiatuses may face rigorous scrutiny before receiving institutional privileges. Because the American Society of Anesthesiologists and the American Board of Anesthesiology have yet to establish standardized programs for reentry, we have attempted to create one, with some success, at least in the cohort of our initial 20 participants. We hope that other programs with similar resources may be inspired to develop and conduct their own simulation-based reentry programs and that, as more centers work in collaboration, more standardized cases and assessment tools will be developed that can be shared among centers and broadly applied. Complete standardization is unlikely to be achievable, or even desirable, given the challenge of assessing judgment, vigilance, and other “higher-level” skills of practicing anesthesiologists, and given that program participants will continue to be diverse in their needs. Still, simulation-based assessment and retraining is a promising and innovative approach that deserves further exploration.
American Medical Association: Report 6 of the Council on Medical Education (A-08): Physician Reentry. Chicago, 2008. Available at: http://www.ama-assn.org/ama1/pub/upload/mm/377/cmerpt_6a-08.pdf. Accessed April 7, 2012.
Federation of State Medical Boards, Post-Licensure Assessment System (PLAS). Available at: www.fsmb.org/pdf/RemEdProg.pdf. Accessed April 10, 2012.
University of California, San Diego, Physician Assessment and Clinical Education Program. Available at: www.paceprogram.ucsd.edu. Accessed April 8, 2012.