Background

Early acquisition of critical competencies by novice anesthesiology residents is essential for patient safety, but traditional training methods may be insufficient. The purpose of this study was to determine the effectiveness of high-fidelity simulation training of novice residents in the initial management of critical intraoperative events.

Methods

Twenty-one novice residents participated in this 6-week study. Three hypoxemia and three hypotension scenarios were developed and corresponding checklists were validated. Residents were tested in all scenarios at baseline (0 weeks) and divided into two groups, using a randomized crossover study design. Group 1 received simulation-based training in hypoxemic events, whereas Group 2 was trained in hypotensive events. After intermediate (3 weeks) testing in all scenarios, the groups switched to receive training in the other critical event. Final testing occurred at 6 weeks. Raters blinded to subject identity, group assignment, and test date scored videotaped performances by using checklists. The primary outcome measure was composite scores for hypoxemia and hypotension scenarios, which were compared within and between groups.

Results

Baseline performance between groups was similar. At the intermediate evaluation, the mean hypoxemia score was higher in Group 1 compared with Group 2 (65.5% vs. 52.4%, 95% CI of difference 6.3-19.9, P < 0.003). Conversely, Group 2 had a higher mean hypotension score (67.4% vs. 45.5%, 95% CI of difference 14.6-29.2, P < 0.003). At Week 6, the scores between groups did not differ.

Conclusions

Event-specific, simulation-based training resulted in superior performance in scenarios compared with traditional training and simulation-based training in an alternate event.

  • Novice anesthesiology residents take time to learn essential clinical skills

  • Practice with high-fidelity simulation improves correct handling of simulated clinical situations

  • Over 6 weeks, novice anesthesiology residents improved their handling of simulated hypoxemia and hypotension

  • This performance was accelerated with high-fidelity simulation of these specific events, although whether this translates to better clinical care is not known

THE introduction to clinical anesthesiology training presents unique challenges. The novice resident must rapidly assimilate cognitive and technical skills necessary to competently respond to critical events. However, the current operating room-based model of training may result in significant gaps in early trainee preparation, especially for infrequent critical events. In addition, the infrequency of these events creates a barrier to clinical performance assessment. Perioperative critical events remain a leading cause of adverse patient outcomes,1 and the development of a standardized, specific training curriculum for the novice anesthesiology resident in the management of these events would be valuable. Deliberate practice in the safe and controlled environment of high-fidelity patient simulation may be one method to compensate for gaps in trainee experience and offers the potential for not only effective training but also a more direct observation of learner performance for competency evaluation.

Performance assessment of anesthesiology residents by using high-fidelity patient simulation has become increasingly widespread,2,3 and reliable and valid measures of performance can be obtained in the setting of simulation.4 The goal of this study was to train novice anesthesiology residents by using a simulation-based curriculum and to evaluate performance in the management of acute intraoperative hypoxemia and hypotension. Using a prospective, randomized crossover study design, we tested the hypothesis that the addition of event-specific simulation-based training to traditional methods (patient care, lectures, nonspecific simulation training, and independent study) would result in accelerated acquisition of management skills for that event.

The study, using a randomized, prospective crossover design, was approved by the Institutional Review Board of Northwestern University (Chicago, Illinois), and written informed consent was obtained from resident participants before enrollment. The study was conducted in the Northwestern Memorial Hospital Patient Safety Simulation Center, using the life-size Human Patient Simulator (HPS®; Medical Education Technologies, Inc., Sarasota, FL). The primary outcome measure was the checklist scores of performance in simulated scenarios obtained during three evaluation sessions: baseline, intermediate (3 weeks), and final (6 weeks).

Scenario and Checklist Development

Three hypoxemia scenarios and three hypotension scenarios were developed for a total of six study events. Hypoxemia scenarios included bronchospasm, endobronchial intubation, and breathing circuit leak, and hypotension scenarios included hypovolemia, medication error, and myocardial ischemia. A library of patient profiles and corresponding anesthesia records was created, which represented a variety of potential etiologies for the development of either critical event. Different combinations of scenarios and patient profiles were used during the training and evaluation sessions to prevent early recognition or anticipation of the specific study event. All scenarios began with the participant assuming care of a patient receiving general endotracheal anesthesia and lasted approximately 6 min. Initial vital signs were blood pressure of 110/60 mmHg, heart rate of 80 beats/min in normal sinus rhythm, and pulse oximetry reading of 97%. In the hypotension scenarios, the blood pressure decreased by at least 30% of the baseline value while oxygen saturation was maintained. In the hypoxemia scenarios, the oxygen saturation decreased to 75–85% while the blood pressure was maintained. To promote and evaluate interventions beyond the initial response and development of differential diagnoses, the physiologic derangement did not correct despite initial maneuvers with the exception of an immediate correction in the breathing circuit leak scenario.

A list of desired responses and behaviors for the identification and management of the critical event within each clinical scenario was developed for use as an assessment tool (table 1, appendices 1–6). Six items pertaining to communication and obtaining assistance were common to all scenarios. Each item was assigned to one of four categories (initial response, discover etiology, management, and secondary survey); items did not represent a requisite sequential order for action.

Table 1.  Scenarios and Checklist Tasks


A modified Delphi approach5 was used to develop and validate the assessment checklists. Five internal experts rated the importance of including each task on a scale of one to five (not important to very important). Tasks that did not achieve a median rating of at least three were removed from the checklist. The content validity index of the final scoring system was determined by calculating the percentage of total items rated by the experts as either four or five. The checklists were then evaluated by five extramural anesthesiologists practicing in academic institutions in various geographic locations within the United States. Tasks in the final checklists required a median rating of at least three for inclusion. Checklists were assessed for internal consistency and interrater reliability by evaluating the performances of 10 residents in all 6 scenarios at 1 evaluation session. Six raters independently participated in post hoc videotape viewing and scoring of these residents' performances. Internal consistency was assessed using Cronbach's alpha. Interrater reliability was measured using the intraclass correlation as the absolute agreement of the mean of all ratings. The number of evaluators required to achieve 95% agreement in scoring the checklists was determined by factor analysis using a principal component extraction method.
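The two checklist statistics described above reduce to short computations. The following is a minimal sketch, not the study's actual analysis code; the data layout (rows of experts or subjects, columns of checklist items) and function names are assumptions for illustration:

```python
import numpy as np

def content_validity_index(ratings):
    """Proportion of expert ratings of 4 or 5 ("important"/"very important"),
    as the content validity index is defined in the text.
    Assumed layout: rows = experts, columns = checklist items."""
    return (np.asarray(ratings) >= 4).mean()

def cronbach_alpha(scores):
    """Internal consistency of a checklist.
    Assumed layout: rows = subjects, columns = item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)
```

With perfectly consistent items the alpha formula returns 1, and it decreases as item scores become less correlated, matching the usual interpretation of the statistic.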

Procedure

Anesthesiology residents during their first 6 weeks of anesthesiology training after an internship year were eligible to participate in the study. The exclusion criterion was previous postgraduate training in anesthesiology. Over the 6-week period, the residents received operating room training with supervision at a 1:1 faculty-to-resident ratio and attended daily 1-h didactic lectures as well as the sessions in the simulator center. These sessions occurred during routine work hours.

Participants were randomly divided into two groups using a computer-generated random-numbers table. All residents were familiarized with the simulator environment and mannequin as a routine part of their orientation, which incorporated instruction in machine checkout and induction of general anesthesia. At the baseline evaluation session before the educational intervention, participants underwent individual testing in the six study scenarios presented in a random order. Participants were instructed to verbalize their observations, thoughts, and actions. No debriefing took place between scenarios. Evaluation sessions were videotaped for post hoc evaluation of performance.
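The random allocation step can be sketched as a seeded shuffle; the function name and fixed seed below are illustrative, standing in for the study's computer-generated random-numbers table rather than reproducing it:

```python
import random

def allocate_groups(participants, seed=1):
    """Split participants into two groups of near-equal size by random
    shuffle. Fixing the seed makes the allocation reproducible; the seed
    value itself is arbitrary."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = (len(shuffled) + 1) // 2
    return shuffled[:half], shuffled[half:]
```

For 21 participants this yields groups of 11 and 10, with every participant assigned to exactly one group.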

After the baseline evaluation session, residents received simulation-based instruction addressing the three study events in their assigned group (fig. 1). Residents attended four 3-h group instruction sessions over a 3-week period, which comprised one session per scenario and one review session. Training sessions were conducted by instructors experienced in simulation-based education, with standardized instruction based on a predeveloped curriculum. Learning objectives included correct identification and verification of the problem, initial interventions, development of differential diagnoses, and obtaining assistance. These objectives reflect the content of the checklist assessments, but the participants were not shown the checklists. The instructor provided coaching during the training sessions, and the designated scenario was repeated until each resident participated at least once as the principal anesthesiologist and as a first responder to the request for assistance. Each simulation was followed by a detailed debriefing incorporating group discussion and instructor feedback regarding relevant pathophysiology as well as correct, incorrect, and missing actions.

Fig. 1. Study design of parallel groups with crossover after the intermediate testing session.


An intermediate evaluation was performed after 3 weeks of instruction. Testing was conducted in the same manner as the baseline evaluation. Groups were then crossed over to receive instruction in the other critical event in Weeks 4–6. A final evaluation was conducted after Week 6, on completion of instruction in both critical events.

Statistical Analysis

The number of participants was a convenience sample determined by the number of incoming resident trainees (n = 21) to the Department of Anesthesiology. A sample of 21 achieves 93% power to detect superiority using a one-sided t test when the accepted margin of equivalence is 10% and the estimated difference between the groups is 20% at an α of 0.05, using a 2 × 2 crossover design with an equal number in each sequence. The SD of the difference between the means is 7% (PASS 2008; NCSS, Kaysville, UT).
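The 93% figure above comes from PASS's 2 × 2 crossover procedure, which is not reproduced here. A simplified sketch of the noncentral-t power calculation such procedures rest on is shown below; treating the quoted 7% "SD of the difference between the means" as the standard error of the difference is an assumption, so this sketch will not necessarily match the reported power:

```python
from scipy import stats

def crossover_superiority_power(diff, margin, se_diff, n, alpha=0.05):
    """One-sided superiority power via the noncentral t distribution.
    diff: anticipated true difference between groups;
    margin: superiority margin;
    se_diff: standard error of the difference between means (assumed
             mapping of the 7% SD quoted in the text);
    n: total subjects in the 2x2 crossover (df = n - 2)."""
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha, df)   # one-sided critical value
    ncp = (diff - margin) / se_diff       # noncentrality parameter
    return 1 - stats.nct.cdf(t_crit, df, ncp)

power = crossover_superiority_power(diff=0.20, margin=0.10, se_diff=0.07, n=21)
```

Power increases with the gap between the anticipated difference and the margin, and decreases as the standard error grows, which is the behavior the sample-size justification relies on.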

Using the validated checklists, two raters scored performance by a videotape review of each evaluation session. Tasks were rated as either complete or incomplete. Task completion rate in the simulated scenarios was the primary outcome measure. Raters were board-certified or board-eligible anesthesiologists within the Northwestern University Department of Anesthesiology. Raters were blinded to subject identity and to test date and had not been instructors for the participants they evaluated. Composite performance scores for the three hypoxemia and three hypotension scenarios were determined by averaging the scores for each resident.

Data were analyzed in three subsets: all tasks, scenario-specific tasks, and common tasks. Data were compared among evaluation sessions between groups using repeated-measures analysis of variance, with previous self-reported clinical exposure to the study events as a covariate in the model. Between-session comparisons were determined as the difference from baseline to the subsequent evaluation. Post hoc comparisons were made using Bonferroni-corrected t tests. Group characteristics, including gender, internship type, and self-reported experience with the study events, were compared between groups using the Fisher exact test. All reported P values are two sided. A P value < 0.05 was required to reject the null hypothesis. Statistical analyses were performed using SPSS version 16.0.2 (SPSS Inc., Chicago, IL).
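The Bonferroni-corrected post hoc comparisons can be sketched as follows. This is an illustrative implementation assuming independent groups, not the study's SPSS procedure; group labels and the helper name are hypothetical:

```python
from itertools import combinations
from scipy import stats

def bonferroni_pairwise(groups, labels):
    """Two-sided pairwise t tests with Bonferroni correction: each raw
    p value is multiplied by the number of comparisons and capped at 1."""
    pairs = list(combinations(range(len(groups)), 2))
    m = len(pairs)                          # number of comparisons
    corrected = {}
    for i, j in pairs:
        _, p = stats.ttest_ind(groups[i], groups[j])
        corrected[(labels[i], labels[j])] = min(p * m, 1.0)
    return corrected
```

The correction preserves the familywise error rate at the nominal 0.05 level across all pairwise comparisons, at the cost of conservatism as the number of comparisons grows.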

There was high agreement regarding the importance of the items contained on the checklists for both the internal and external evaluators (table 2). Content validity ranged from 70 to 94%. Interrater reliability of six raters observing the performance of 10 residents in the scenarios ranged from 91 to 97%. Factor analysis suggested that two raters explained 95% of the variance of the six test raters.

Table 2.  Assessment of Scoring Checklists and Rater Performance


Twenty-one residents consented to participate in the study. No resident refused participation, and data from all 21 residents were analyzed. There was no difference between groups in gender or internship type, nor in the number of residents with experience in managing the study events (table 3).

Table 3.  Group Characteristics


Baseline assessment of the participants demonstrated no difference in individual scenario performance scores within groups (fig. 2). For both groups, scores were higher for all scenarios at the intermediate evaluation. At the final evaluation, scores in scenarios related to the second training event were higher than those at the intermediate evaluation session (fig. 2). Scores in scenarios related to the first training event were similar at the intermediate and final evaluation sessions.

Fig. 2. Scores by individual scenario at baseline, intermediate (3 weeks), and final (6 weeks) testing sessions. (A) Hypoxemia training first; (B) hypotension training first.


Composite performance in hypoxemic and hypotensive critical events is shown in table 4. At baseline, there was no difference in composite performance between groups for either critical event (hypoxemia: 28 ± 8%; hypotension: 28 ± 7%; P = 0.76). At intermediate testing, event-specific scores were higher for the group trained in that event during the first training period compared with the other group. Within groups, event-specific scores were higher for the trained event compared with the alternate event, but tasks common to all scenarios did not differ between or within groups. At the final evaluation, both groups scored similarly in both event-specific and common tasks. There was no difference in performance based on gender or type of internship experience at the baseline or final evaluation for either critical event. Self-reported experience with management of hypoxemia did not affect performance at the baseline or final evaluation; in contrast, residents reporting previous management of hypotension scored better at the baseline and final evaluations compared with residents reporting no prior exposure.

Table 4.  Composite Scores


The chief finding of this study is that simulation-based, event-specific training of novice anesthesiology residents in the initial management of critical intraoperative events leads to accelerated acquisition of event-specific skills compared with a group whose exposure to the critical event depended on traditional training methods. Both groups improved and retained performance in tasks related to communication regardless of group assignment. Weinger et al.6 noted, in a task analysis of the first 3 weeks of anesthesia training, that initial clinical training seems to emphasize manual tasks while neglecting other tasks such as conversing, observing the patient, and vigilance. Additional factors affecting experience during the initial training period include a nonuniform case mix among trainees as well as an unpredictable incidence of critical events. Early intervention or preemption by supervising attending physicians, while clearly in the best interest of patient safety, may further reduce trainee experience with management of critical events.

Simulation-based curricula offer well-described advantages as an educational tool7,8 for both the acquisition of technical skills9 and nontechnical skills10 relevant to the management of critical events. In contrast to the unpredictability of clinical events, simulation-based training is customized to meet the specific needs of the learner in an on-demand learning environment without the potential to compromise patient care. Simulation-based training provides controlled, deliberate practice, which is a critical ingredient for learning that is frequently absent in other forms of teaching,11 and has been shown to enable training to mastery.12 Adult learning models suggest that experiential training results in greater learning and retention compared with didactic teaching.13,14 Therefore, we established a training strategy with explicit learning objectives relative to the management of two critical intraoperative events, provided conditions of experiential learning and repetitive practice, and evaluated achievement based on these objectives. The effect of “teaching to the test” on performance by using this approach is an intended one. In an analogy of learning to drive an automobile, a learner is trained, for example, on the procedure for passing another vehicle, with variations depending on traffic conditions. Demonstration of the trained behaviors, including specific and general tasks, on the subsequent test is a desired result.

In addition to its educational value, using simulation as an assessment tool provides valuable information regarding educational gaps and areas of needed practice.15 Furthermore, a recent study has demonstrated a correlation between observed performance in simulated scenarios and clinical performance.16 Simulation-based performance assessment has been studied across the spectrum of experience.17–19 Assessments of novice anesthesiology residents using simulation have focused on skills that are likely to occur in the presence of an attending anesthesiologist,17 have investigated novice residents beyond the fully supervised training period,20 or have compared novice performance with the performance of advanced residents in simulated scenarios.18 We elected to compare the performance of early novice anesthesiology residents with each other after simulation-based, event-specific training. We focused on two types of critical intraoperative events, acute hypoxemia and acute hypotension. The observed difference in performance at the time of the intermediate evaluation suggests that the traditional model of initial training does not provide sufficient experience in the management of these events. Because deliberate practice has been shown to be crucial in achieving competence,11 simulation-based training may offer an effective, reliable approach to provide the requisite increased experience. Furthermore, early acquisition of competence in the initial management of these events may contribute to novice anesthesiology resident preparedness for semi-independent function.

Significantly improved performance was observed at the time of intermediate evaluation, even if event-specific training had not yet occurred. This may reflect the concurrent contribution of traditional training methods on learning or the effect of repeated testing in the same subjects. Another possible explanation for this observation was that items pertaining to communication were taught to both groups. When these six common items were excluded from the analysis, we noted a more pronounced effect of event-specific training. Nevertheless, a significant improvement from baseline persisted in the untrained event.

Self-reported experience in the management of hypoxemia did not correlate with superior performance at either the baseline or final evaluations. In contrast, experience in the management of hypotension did affect performance; residents who reported such experience outperformed those who did not at the baseline and final evaluations. Management of hypoxemia under conditions of general endotracheal anesthesia requires skills that most interns are unlikely to possess, whereas skills related to the management of hypotension outside the operating room environment may also be applied to patients under general anesthesia.

Simulation seems to result in retention of skills beyond the initial training period, as suggested by simulation-based training in the management of difficult intubation scenarios21 and by simulation-based Advanced Cardiac Life Support training, in which retention of skills up to 14 months has been observed.22 In the current study, while occurring over a short observation period, the improvement in performance observed from the baseline to the intermediate evaluation was maintained at the final evaluation. Conclusions regarding skill acquisition earlier than 3 weeks and skill retention beyond 6 weeks are beyond the scope of this study. Although concurrent clinical experience was inferior to specific training in the acquisition of specific skills at the intermediate evaluation, it is unclear to what degree concurrent clinical experience may have contributed to maintenance of skills at the final evaluation.

There are several limitations to our study. In our institution, provision of simulation-based training is the standard during initial clinical training. This influenced our choice of experimental design, and, as such, we did not have a control group that received no simulation exposure and cannot comment on the comparative efficacy of other teaching interventions. All residents were exposed to all scenarios at each testing session, and although the patient profiles and the order of scenarios were varied, the possible effect of test-enhanced learning is unknown. All residents received 12 h of training for each critical event; therefore, we cannot determine the dose-response relationship between the quantity of instruction and the observed improvement in scores. An additional limitation was that some checklist items were specific actions, whereas others were more general, and although training sessions did not teach exact responses to every potential relevant task or show the subjects the checklist, it is difficult to measure the effect of general versus scenario-specific learning. This may be reflected in the improved performance at the midtest for both events. However, the scores in the elements common to all scenarios increased proportionally, suggesting that acquisition of common tasks did not depend on the nature of the training. Quantitative measurement of more rapid independence and potential cost or time savings is beyond the scope of this study, although one could infer that the higher performance after training reflects the potential for an earlier transition to semi-independence and reduced time of individual supervision. A single anesthesiologist with technical assistance was able to instruct groups of five to six trainees at a session, which may be more cost-effective than supervision at a 1:1 faculty-to-resident ratio in the operating room. Finally, the priority of patient safety during critical events occurring in the operating room, and the infrequency with which they occur, make it difficult to correlate simulated and clinical performance.

Although future studies are needed to achieve benchmarks for passing scores and to evaluate additional methods of competency assessment, our study highlights the value of simulation-based training and assessment for the novice trainee.

The authors thank the following for their assistance: Leonard D. Wade, M.S., Assistant Professor of Anesthesiology; Rozanna Chester, M.S., Simulator Education Assistant; Mark D. Burno, M.D., Instructor in Anesthesiology; Alexander M. DeLeon, M.D., Assistant Professor in Anesthesiology; R-Jay L. Marcus, M.D., Assistant Professor in Anesthesiology; Meghan C. Tadel, M.D., Instructor in Anesthesiology; and Cynthia A. Wong M.D., Professor in Anesthesiology, all from the Department of Anesthesiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois; William C. McGaghie, Ph.D., Professor of Medical Education, and Augusta Webster, M.D., Office of Medical Education, Northwestern University Feinberg School of Medicine; and the Searle Center for Teaching Excellence, Northwestern University, Evanston, Illinois.

1.
Murray DJ, Boulet JR, Avidan M, Kras JF, Henrichs B, Woodhouse J, Evers AS: Performance of residents and anesthesiologists in a simulation-based skill assessment. Anesthesiology 2007; 107:705–13
2.
Murray DJ, Boulet JR, Kras JF, McAllister JD, Cox TE: A simulation-based acute skills performance assessment for anesthesia training. Anesth Analg 2005; 101:1127–34
3.
Murray DJ, Boulet JR, Kras JF, Woodhouse JA, Cox T, McAllister JD: Acute care skills in anesthesia practice: A simulation-based resident performance assessment. Anesthesiology 2004; 101:1084–95
4.
Murray D, Boulet J, Ziv A, Woodhouse J, Kras J, McAllister J: An acute care skills evaluation for graduating medical students: A pilot study using clinical simulation. Med Educ 2002; 36:833–41
5.
Scavone BM, Sproviero MT, McCarthy RJ, Wong CA, Sullivan JT, Siddall VJ, Wade LD: Development of an objective scoring system for measurement of resident performance on the human patient simulator. Anesthesiology 2006; 105:260–6
6.
Weinger MB, Herndon OW, Zornow MH, Paulus MP, Gaba DM, Dallen LT: An objective methodology for task analysis and workload assessment in anesthesia providers. Anesthesiology 1994; 80:77–92
7.
Kneebone RL, Scott W, Darzi A, Horrocks M: Simulation and clinical practice: Strengthening the relationship. Med Educ 2004; 38:1095–102
8.
Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ: Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach 2005; 27:10–28
9.
Van Sickle KR, Ritter EM, Smith CD: The pretrained novice: Using simulation-based training to improve learning in the operating room. Surg Innov 2006; 13:198–204
10.
Yee B, Naik VN, Joo HS, Savoldelli GL, Chung DY, Houston PL, Karatzoglou BJ, Hamstra SJ: Nontechnical skills in anesthesia crisis management with repeated exposure to simulation-based education. Anesthesiology 2005; 103:241–8
11.
McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ: Effect of practice on standardised learning outcomes in simulation-based medical education. Med Educ 2006; 40:792–7
12.
Wayne DB, Butter J, Siddall VJ, Fudala MJ, Wade LD, Feinglass J, McGaghie WC: Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med 2006; 21:251–6
13.
Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A: Impact of formal continuing medical education: Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999; 282:867–74
14.
Kolb DA: Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ, Prentice-Hall, 1984
15.
Morgan PJ, Cleave-Hogg D, DeSousa S, Tarshis J: Identification of gaps in the achievement of undergraduate anesthesia educational objectives using high-fidelity patient simulation. Anesth Analg 2003; 97:1690–4
16.
Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC: Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: A case-control study. Chest 2008; 133:56–61
17.
Forrest FC, Taylor MA, Postlethwaite K, Aspinall R: Use of a high-fidelity simulator to develop testing of the technical performance of novice anaesthetists. Br J Anaesth 2002; 88:338–44
18.
Schwid HA, Rooke GA, Carline J, Steadman RH, Murray WB, Olympio M, Tarver S, Steckner K, Wetstone S: Evaluation of anesthesia residents using mannequin-based simulation: A multiinstitutional study. Anesthesiology 2002; 97:1434–44
19.
Weller JM, Bloch M, Young S, Maze M, Oyesola S, Wyner J, Dob D, Haire K, Durbridge J, Walker T, Newble D: Evaluation of high fidelity patient simulator in assessment of performance of anaesthetists. Br J Anaesth 2003; 90:43–7
20.
Johnson KB, Syroid ND, Drews FA, Ogden LL, Strayer DL, Pace NL, Tyler DL, White JL, Westenskow DR: Part Task and variable priority training in first-year anesthesia resident education: A combined didactic and simulation-based approach to improve management of adverse airway and respiratory events. Anesthesiology 2008; 108:831–40
21.
Kuduvalli PM, Parker CJ, Leuwer M, Guha A: Retention and transferability of team resource management skills in anaesthetic emergencies: The long-term impact of a high-fidelity simulation-based course. Eur J Anaesthesiol 2009; 26:17–22
22.
Wayne DB, Siddall VJ, Butter J, Fudala MJ, Wade LD, Feinglass J, McGaghie WC: A longitudinal study of internal medicine residents' retention of advanced cardiac life support skills. Acad Med 2006; 81:S9–12

Table. Appendix 1.  Checklist for Bronchospasm Scenario


Table. Appendix 2.  Checklist for Endobronchial Intubation Scenario


Table. Appendix 3.  Checklist for Circuit Leak Scenario


Table. Appendix 4.  Checklist for Hypovolemia Scenario


Table. Appendix 5.  Checklist for Medication Error Scenario


Table. Appendix 6.  Checklist for Myocardial Ischemia Scenario
