When is a resident ready to be independent in the intraoperative setting? Should residents learn critical event management and other basic skills systematically through simulation before being supervised at anything less than one-to-one? Does assessment of skills enhance learning so well that it is at least as important as a learning tool as it is as a test? Will there come a time when simulation in its various forms will supplant some or much of the current ways of training residents? The article by Park et al.1 in this issue of Anesthesiology does not ask or answer these questions. But the questions do arise from considering the results of their well-designed, well-executed, and interesting study of a simulation-based program for training anesthesia residents during their first 6 weeks in the program. Just as important, the flaws in their design illustrate how even the best studies of simulation, or perhaps of any educational method, will always be imperfect. We are left to depend on judgment mixed with a little hard evidence to make critical decisions on many questions about the effectiveness of educational methods and assessments.
Residencies must guarantee that resident work hours are not excessive and at the same time ensure that trainees progress through residency while meeting measurable specialty milestones. Although these are laudable goals, the ever-increasing expectations for specialty training programs challenge the resources of residencies. The article by Park et al. applies a learning strategy and a simulation-based training approach that together accomplish some of these goals.
The training imperative that Park et al. confront is the challenge of ensuring that neophyte residents rapidly acquire essential skills. A related challenge is determining when a resident is ready to progress to the next level of responsibility and, more practically, to advance beyond one-on-one supervision. Park et al. aimed primarily at the first challenge, yet, in doing so, also illustrated how to address the second. They developed two intensive, 12-h programs for training and assessment of the response to hypoxic and hypotensive critical events. The design of the assessment program was notable for two reasons. First, the input of faculty experts by a Delphi approach led to the selection of three scenarios for hypoxia and three scenarios for hypotension. The experts developed a common set of tasks for hypoxia and hypotension as well as a specific set of tasks for each of the six conditions. This set of common tasks is intended to enable residents to develop an essential emergency diagnostic and treatment framework to use as the “first line” in the approach to hypoxia (increase Fio2, auscultate, and others) or hypotension (verify blood pressure, reduce anesthetic agents, and others). Second, the raters scoring the scenarios were blinded to the study arm and had not worked with any resident they rated. This is an exemplar of systematic training that is unusual in residency programs and of objective evaluation that few, if any, employ. The residents were randomized to experience one or the other training first, were evaluated 3 weeks later, and then experienced the other set of scenarios as well. Testing another 3 weeks later examined skill retention. The results were fairly definitive: residents learn to manage these events, and the learning sticks, at least for several weeks. Some behavioral skills are learned in each type of training; more event-specific skills are learned only in the event-specific training.
The flaw in the design is that we do not know what would have happened if the residents had simply learned only in the traditional way for a bit longer. Might they have improved anyway? With such small resident cohorts, and with residents expecting to have this kind of training if it is available, it is difficult to withhold it. It will thus be difficult if not impossible to conduct the kinds of large-scale trials needed for definitive evidence, but it is worth trying. It is also worth noting that two studies in the early years of anesthesia simulation, while underpowered, yielded similar results.2 A study by Good et al. in 1992 was unable to continue because residents insisted that all receive the basic training through simulation rather than being randomized (Michael Good, MD, e-mail communication, August 2009).
Residents are not generally expected to manage emergencies on their own before they transition to administering anesthesia without essentially constant faculty supervision. Yet, it is reasonable to expect that a resident should have the basic skills to recognize an impending crisis and initiate at least the basic appropriate actions. We learn from this study that, not surprisingly, after an internship and the initial weeks of anesthesia training in the traditional clinical setting, residents do not have the skills to respond to a hypoxic or hypotensive event. With training of the sort used by Park et al., they clearly get better, at least in a simulated clinical situation. The obvious question is: how can residents be permitted to work alone without assurance of some basic competencies? The means to provide that assurance are now available. When the apparent value of assessment for learning itself is added, the argument becomes even stronger.
The mastery training model that Park et al. have used is based on an approach to the acquisition and retention of skill that suggests that both are enhanced by repeated testing.3,4 The traditional approach of educators is to use objectively measured assessment only to measure the progress of skill acquisition rather than as a direct means of training. Yet, there is mounting evidence that assessment itself is an effective method to acquire skill and expertise. If this is the case, then perhaps our training strategies need to shift even further away from the didactic lecture method and toward approaches that more actively engage trainees in the learning process. If the goal is to ensure the acquisition and retention of skill, perhaps our approach should be repeated, frequent assessment with associated feedback.
Although one possible criticism of this study is that the residents were simply trained to take the test, this approach may be the most effective method to ensure skill acquisition and retention. An inventory of scenarios covering a broad range of simulated clinical events could accelerate both the acquisition and the retention of skill. This approach would require the development of a broader consensus and transparent performance expectations for patient care. The potential outcome might be elevated performance standards in anesthesia practice and a safer patient care environment.
Physicians entering residency are reaching the zenith of their educational odyssey and acquire knowledge and skill primarily by active involvement. Although frequent assessment might be considered a more intimidating approach to learning, if the result were rapid acquisition and retention of training milestones and ultimately a shorter residency, most would welcome the change. As the authors note, it may in fact be more cost-effective to provide this training approach, particularly if the result is accelerated acquisition and retention of skill. The challenge is to develop and pilot a learning model that can be implemented with little additional expense and that meets the demands to increase the number of skills residents are expected to acquire in a shorter training period.
Park et al. have added another building block in what now seems to be an inexorable march toward systematic, validated, objective training and assessment through simulation in anesthesia and perhaps all of healthcare education. The sooner we can implement these techniques more broadly, the better off everyone will be, especially the patients.
*Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts. jcooper@partners.org. †Department of Anesthesiology, Washington University School of Medicine, St Louis, Missouri.