Background

The availability of simulator technology at the University of Toronto (Toronto, Ontario, Canada) provided the opportunity to compare the efficacy of video-assisted and simulator-assisted learning.

Methods

After ethics approval from the University of Toronto, all final-year medical students were invited to participate in the current randomized trial comparing video-based to simulator-based education using three scenarios. After an introduction to the simulator environment, a 5-min performance-based pretest was administered in the simulator operating room requiring management of a critical event. A posttest was administered after students had participated in either a faculty-facilitated video or simulator teaching session. Standardized 12-point checklist performance protocols were used for assessment purposes. As well, students answered focused questions related to the educational sessions on a final examination. Student opinions regarding the value of the teaching sessions were obtained.

Results

One hundred forty-four medical students participated in the study (scenario 1, n = 43; scenario 2, n = 48; scenario 3, n = 53). There was a significant improvement in posttest scores over pretest scores in all scenarios. There was no statistically significant difference in scores between the simulator and video teaching methods. There were no differences in final examination marks when the two educational methods were compared. Student opinions indicated that the experiential simulator sessions were more enjoyable and valuable than the video teaching sessions.

Conclusions

Both simulator and video types of faculty-facilitated education offer a valuable learning experience. Future work is needed that addresses the long-term effects of experiential learning in the retention of knowledge and acquired skills.

TECHNOLOGICAL advances offer educators new ways to improve, enhance, and stimulate learning for medical students, residents, and faculty. However, before widespread implementation of new technology, its value as an educational method must be explored and its usefulness over existing methods identified. The use of an anesthesia simulator carries significant resource and financial implications that must be weighed against educational outcomes.

In the past, the ability to repeatedly practice skills and exercise judgment in a hands-on manner was severely limited by patient safety issues. Medical students, particularly, were often relegated to the role of “observer” should a critical event arise and require management. However, experiential learning including constructive feedback and error correction is crucial in the development of expertise in medical practice [1].

The simulator offers the ability to provide experiential learning in a risk-free, realistic environment with events that can be repeated and videotaped for valuable feedback. Intuitively, it would seem that this learning environment would be superior to video-assisted teaching. Therefore, the purpose of the current study was to compare outcomes on both performance-based assessments and written examination questions between students given simulator-based teaching and those given video-based teaching.

Materials and Methods

After ethics board approval from the University of Toronto (Toronto, Ontario, Canada), final-year medical students at the University of Toronto (n = 144) participated in the current study. Students were informed that performance assessments generated from the study would not be used in their final evaluation. Written, informed consent was obtained from each student.

At the University of Toronto, the anesthesia rotation is scheduled in three 2-week blocks during a 6-week rotation. Three scenarios were developed to optimize confidentiality of case content between blocks of students. The educational content of the scenarios was based on course curriculum objectives. For each scenario a simulator performance pretest and posttest were created. The pretest and posttest topics included the recognition and management of the following: scenario 1, myocardial ischemia; scenario 2, anaphylaxis; and scenario 3, hypoxemia.

After an orientation to the simulation center, a 5-min performance-based pretest using a full-patient, high-fidelity simulator was administered to all students. On completion of the pretest, students were randomly allocated into two groups, a simulator group and a video group, by means of computer-generated number assignments (fig. 1). Each group was given a 1.5-h educational session facilitated by faculty or an anesthesia resident. Three hours later, a posttest related to this educational session was given to all students. During these intervening 3 h, all students had a lunch break and then participated in a 1.5-h educational session unrelated to the pretest or posttest. Students who had had their morning education session in the simulator spent the additional 1.5-h session in a video-facilitated environment, whereas students spending the morning in a video session participated in a simulator environment in the afternoon (fig. 1). These afternoon educational sessions involved scenarios and learning objectives that did not overlap with those addressed in the morning sessions.

Fig. 1. Study design.

Scenarios

Three scenarios were scripted for use in the anesthesia simulator and video sessions. Each scenario had five learning objectives focusing on (1) preoperative assessment, (2) preparation of equipment, (3) induction of anesthesia, (4) critical event 1, and (5) critical event 2. A printed handout sheet of information containing the pertinent history, physical examination, and laboratory findings was developed for each case.

The scenarios were designed with predetermined sequences of appropriate events. These events involved situations dealing with some of the objectives of the anesthesia rotation as outlined in the Anesthesia Clerkship Manual. Students were expected to perform (in the simulator) or observe (from the video) clinical skills such as bag and mask ventilation, laryngoscopy, and tracheal intubation and to appropriately assess the position of the endotracheal tube. Most important, the students were expected to make medical judgments based on information provided to them and the outcome of their actions. Attending faculty or a final-year resident guided the students through the scenario.

Pretests and Posttests

The pretests and posttests for each scenario were identical in content and were based on student hands-on management of a critical event discussed in the educational session. Before beginning, students were informed that their performance would be graded according to their verbalization of ongoing events and their observed actions. They were also informed that faculty would not probe students for explanations or provide management suggestions.

Each test lasted approximately 5 min and required the student to address three pertinent items related to the critical event including a statement of the problem, a differential diagnosis, and a management plan. A fourth item, either drug intervention or cause identification, completed the checklist form (Appendix). The checklist enabled the observing faculty member to grade the student on four sections with a maximum of 3 points per section for a potential total score of 12 points.
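For concreteness, the checklist arithmetic can be sketched in a few lines of code. This is purely illustrative (the section names are taken from the description above; the grades shown are hypothetical), not a reproduction of the study's grading form:

```python
# Illustrative tally of the 12-point checklist described above.
# Section names follow the text; the 0-3 grades are hypothetical.
CHECKLIST_SECTIONS = (
    "statement of the problem",
    "differential diagnosis",
    "management plan",
    "drug intervention / cause identification",
)
MAX_POINTS_PER_SECTION = 3

def total_score(grades: dict) -> int:
    """Sum the four section grades (each 0-3) into a score out of 12."""
    assert set(grades) == set(CHECKLIST_SECTIONS)
    assert all(0 <= g <= MAX_POINTS_PER_SECTION for g in grades.values())
    return sum(grades.values())

# Example: one hypothetical student performance.
print(total_score({
    "statement of the problem": 3,
    "differential diagnosis": 2,
    "management plan": 3,
    "drug intervention / cause identification": 1,
}))  # prints 9 (of a possible 12)
```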

Simulator Session

Eighteen groups of students attended the session every other week during their 2-week anesthesia rotation. Approximately 10 students attended per day, with 5 in each session at any given time.

The simulator session consisted of a preprogrammed scenario supervised by a faculty member or senior anesthesia resident. On the day of the simulation session, students were given an orientation regarding the simulation center in general and, specifically, the limitations of the anesthesia mannequin. This was followed by an orientation to the operating room setting, equipment, and monitors. They were shown the location of necessary anesthesia and resuscitation equipment and drugs. Questions were answered, and the students were given time to familiarize themselves with the environment.

During the 1.5-h educational session, students worked through the scenario guided by faculty, who allowed the scenario to be paused to facilitate discussion and management strategies. Some students were actively involved in managing the event while others observed. However, each student had the opportunity to manage one aspect of the scenario. Faculty ensured that important learning issues were addressed by stimulating reflection on pertinent points. Students were allowed open discussion and could direct their management of critical events on advice from their fellow students. Faculty did not guide student management of the cases, nor did they demonstrate correct maneuvers. However, they ensured that any incorrect intervention was explored and discussed after the fact.

Video Session

The video presented a faculty member appropriately managing a simulator scenario. The videotape was designed to be paused at appropriate intervals to allow discussion and feedback as outlined in the simulator session. A faculty member or senior anesthesia resident facilitated the video session according to the workshop methodology. Each session lasted approximately 1.5 h.

Written Examination

At the end of each 6-week rotation, a final written examination was administered. A distinct examination was developed for each rotation, comprising 10 short-answer questions based on the learning objectives outlined in the course curriculum. A standardized marker's guide developed by the undergraduate education committee at the University of Toronto was used, with each question having a total potential score of 10. Final examinations were marked by two experienced faculty. One or two questions related to the simulator or video educational sessions were included among the 10 short-answer questions on each of the six examinations. These questions were not taken directly from the pretest or posttest but addressed general concepts related to the scenario topic.

Faculty Workshop

Faculty and residents involved in the study attended a workshop facilitated by a medical educator to familiarize themselves with the purpose of the study and the learning objectives to be covered in the educational sessions. Tutors were advised to use the Socratic teaching method, emphasizing questioning rather than lecturing, and to identify appropriate responses to anticipated questions from students. Faculty were advised that, for purposes of the pretest and posttest, students would be asked to manage a critical event in the simulation center. The role of faculty during the pretest and posttest was twofold: (1) to hand over the patient's care to the student as one would hand over care to a colleague and (2) to grade the student's performance using a standardized checklist. Faculty were advised to remind students to verbalize their thoughts related to the problem, diagnosis, and management and to remind them that faculty would not ask or provide answers to questions or guide students through the critical event.

Student Evaluation of Simulator and Video Sessions

All students were asked to complete a questionnaire related to their experience in the simulator and video sessions. Students were asked to rate both the video teaching session and the simulator teaching session in terms of their enjoyability and their value on a five-point Likert-type scale.

Statistics

Students were randomly assigned to either simulator or video groups by means of sealed, numbered envelopes containing computer-generated random assignments. Significance was defined as a P value less than 0.05 for all analyses described. Three separate analyses of performance-based outcomes were performed.
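As an illustration only (a hypothetical reconstruction, not the program actually used in the study; the seed and group labels are assumptions), a computer-generated 1:1 allocation of the kind described could look like this:

```python
# Sketch of a computer-generated 1:1 random allocation, as might be used
# to fill the numbered sealed envelopes described above. Hypothetical.
import random

def allocate(n_students: int, seed: int = 2002) -> list:
    """Return a shuffled list of group labels, one per numbered envelope."""
    labels = (["simulator", "video"] * ((n_students + 1) // 2))[:n_students]
    rng = random.Random(seed)  # fixed seed here only for reproducibility
    rng.shuffle(labels)
    return labels

envelopes = allocate(144)
print(envelopes[:5])  # contents of the first five sealed envelopes
```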

In the first analysis, a repeated-measures, mixed-model analysis of variance (repeated-measures ANOVA) was used to compare marks across all students performing any scenario, with the pretest and posttest performance scores as the repeated measure. This analysis assessed teaching modality as the between-subjects factor and the pretest and posttest scores as the within-subjects factor. The second analysis, again using repeated-measures ANOVA, included two between-subjects factors (scenario performed and teaching modality), with the pretest and posttest scores as the within-subjects effect. The final set of analyses, also using repeated-measures ANOVA, examined each individual scenario as an isolated group.
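For readers who wish to reproduce this style of analysis, the following sketch fits an equivalent linear mixed model (random intercept per student) with statsmodels; the DataFrame layout and column names are assumptions, and this is not the software used in the study:

```python
# Sketch: pretest/posttest scores analyzed with teaching modality as a
# between-subjects factor. Assumes a long-format DataFrame with
# hypothetical columns: student (id), modality ("simulator"/"video"),
# time ("pre"/"post"), and score (0-12 checklist score).
import pandas as pd
import statsmodels.formula.api as smf

def fit_repeated_measures(df: pd.DataFrame):
    # A random intercept per student captures the repeated measure;
    # the time:modality interaction term tests whether pre-to-post
    # improvement differs between the simulator and video groups.
    model = smf.mixedlm("score ~ time * modality", df, groups=df["student"])
    return model.fit()

# result = fit_repeated_measures(scores_long)
# print(result.summary())
```

Adding scenario as a second between-subjects factor would extend the formula to "score ~ time * modality * scenario".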

Statistical analyses to assess the effect of learning modality on written examination marks were performed using a univariate ANOVA, with the examination mark for the question corresponding to the scenario as the dependent variable. The two groups of students (simulator group and video group) were compared as the between-subjects factor. Individual analyses were performed for each examination question (myocardial ischemia, anaphylaxis, and hypoxemia). The Scheffé test was used for post hoc paired comparisons between groups.

Written marks were also examined for an effect of timing because students wrote the examination at three different intervals after the simulator and video teaching sessions (time lags: 2, 16, and 30 days). A two-way ANOVA was performed using time lag and type of training as the two between-subjects factors. Individual analyses were again performed for each question (myocardial ischemia, anaphylaxis, and hypoxemia).
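Both examination-mark analyses just described can be sketched with ordinary least-squares ANOVA, for example as below; the column names are hypothetical and this is not the authors' original code:

```python
# Sketch of the examination-mark ANOVAs described above. Assumes a
# DataFrame with hypothetical columns: mark (question score out of 10),
# modality ("simulator"/"video"), and lag_days (2, 16, or 30).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def univariate_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Effect of learning modality on the mark for one focused question."""
    fit = smf.ols("mark ~ C(modality)", data=df).fit()
    return sm.stats.anova_lm(fit, typ=2)

def two_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Time lag and training type as the two between-subjects factors."""
    fit = smf.ols("mark ~ C(lag_days) * C(modality)", data=df).fit()
    return sm.stats.anova_lm(fit, typ=2)

# print(univariate_anova(ischemia_marks))
# print(two_way_anova(ischemia_marks))
```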

Descriptive statistics for the students’ evaluations of the simulator and video experiences were generated. Ratings of enjoyment for the video and simulator sessions were compared using a paired t test, and a similar cross-method comparison of ratings of the sessions’ value was performed with a paired t test.
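Paired comparisons of this kind reduce to a single call in most statistics packages; a minimal sketch follows (the rating arrays are hypothetical, aligned so that index i refers to the same student in both):

```python
# Sketch of the paired t test comparing each student's rating of the
# simulator session with the same student's rating of the video session.
from scipy import stats

def compare_ratings(sim_ratings, video_ratings):
    """Paired t test on matched 1-5 Likert ratings from the same students."""
    t_stat, p_value = stats.ttest_rel(sim_ratings, video_ratings)
    return t_stat, p_value

# t, p = compare_ratings(enjoy_sim, enjoy_video)   # enjoyment ratings
# t, p = compare_ratings(value_sim, value_video)   # value ratings
# print(f"t = {t:.2f}, P = {p:.4f}")
```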

Results

One hundred forty-four students participated in the study. The first repeated-measures ANOVA showed no statistically significant interaction between time of assessment and learning modality; that is, there was no difference in the change in performance-based scores between the simulator group and the video group when all scenarios were considered together (F(1,142) = 1.099, P = 0.296). When scenario was included in the model as a second between-subjects factor, the difference between pretest and posttest scores was significantly affected by which scenario was learned and tested, as demonstrated by a significant interaction between time of assessment and scenario (F(2,136) = 34.07, P < 0.001). Also, as expected, students demonstrated a significant improvement from pretest to posttest scores regardless of the scenario on which they were tested (F(1,138) = 252.4, P < 0.001). When mixed-model ANOVA was conducted on each scenario individually, there was no evidence of an effect of training modality on improvement in test scores. The mean pretest and posttest scores for the simulator and video groups in each of the three scenarios are summarized in table 1, with the corresponding F and P values for each individual ANOVA.

Table 1. Pretest and Posttest Results for Simulator and Video Teaching in Three Scenarios (Mean and SD)

* Denotes significance of the difference from pretest to posttest scores according to the teaching modality used (simulator vs. video).


Marks from focused written examination questions addressing the material taught during the simulator and video sessions were collected. In some cases, students answered more than one question because of the examination content. Students’ final examination marks on these focused questions are tabulated in table 2; no statistically significant difference was noted. There were also no significant differences between the video and simulator groups on examination-question scores when the interval between the education session and the written examination was taken into account. The scores were tightly grouped around similar means.

Table 2. Mean and SD of Written Examination Marks on Focused Questions

Results reported include all written examination marks (combined results: 2, 16, or 30 days after simulator or video sessions).


Information gathered from student opinions about two statements (i.e., “I enjoyed this method of learning” and “This was a valuable learning experience”) is given in table 3. Student opinions indicated that the simulator sessions were more enjoyable and valuable than the video teaching sessions (P < 0.001), although mean values for both were high.

Table 3. Comparison of Student Opinion of Simulator and Video Learning Experiences

Scale 1–5: 1 = strongly disagree, 5 = strongly agree (mean ± SD).


Thirty-three of 177 final-year medical students did not attend the educational sessions. Some students did not attend because of a conflict with interviews for postgraduate training. Other students did not give reasons for nonattendance.

Discussion

One of the strengths of simulator-assisted learning is the application of knowledge in a hands-on approach. Specifically, the simulator offers a venue for problem solving in a real-life situation without patient risk or time constraints. Therefore, we chose to assess the outcomes of our educational methods by evaluating student performance during a hands-on simulated critical event. Results from a previous study indicated that a complex, multitask simulator scenario was somewhat challenging at the undergraduate level [2]. For that reason, the performance template for the current study involved a single patient management problem, giving the students the opportunity to focus their problem-solving abilities.

The improvement in posttest scores was an expected outcome of an educational session. The current study also demonstrated a significant difference between pretest and posttest scores depending on which scenario was learned and tested. Scenario 3 (hypoxemia) pretest scores were higher than those for either scenario 1 or scenario 2. This result may be explained by a difference in test content, or students may have had more experience with this problem before the study.

Student posttest performance scores were not statistically different between students learning in the simulator session and those learning in the video session. A review of the literature yielded few studies comparing experiential (simulator-based) learning with either videotaped learning or traditional teaching methods. We were able to locate only one study using a full-patient, high-fidelity anesthesia simulator in which outcomes were compared between groups who had and had not been given prior simulator educational sessions [3]. Chopra et al. [3], who evaluated the efficacy of simulator learning in an evaluation process 4 months after the educational session, demonstrated that anesthesia residents and faculty performed better in a simulated emergency case of malignant hyperthermia than did those who had not been given the training. The authors concluded that training on an anesthesia simulator does lead to improvement in the emergency management of anesthetic critical events [3]. Similar to our study findings, Knudsen and Sisley [4] noted no significant difference in posttest scores between residents who had learned ultrasound techniques on human models or real-life patients and those learning with an ultrasound simulator; the posttest was administered on the same day as the teaching session and was identical in content to the pretest [4]. Other authors have compared different methods of instruction, such as a simulator-based tool versus traditional hands-on teaching [5]. Taffinder et al. [5] demonstrated a significant improvement in laparoscopic skills among trainee surgeons who had completed a 1-day course using the MIST VR laparoscopic simulator (version 1.2, using WorldToolKit, version 6, Virtual Presence Ltd., London, United Kingdom, and Microsoft Direct 3D, version 3, graphics libraries) compared with a control group not given the course. This difference manifested as improved efficiency and a reduction in errors; the performance assessment occurred the day after the instructional session.

Two randomized controlled trials have examined the use of different teaching methods and performance assessments [6,7]. In one study, 191 medical and physician assistant students were randomly assigned to four intervention groups to learn musculoskeletal examinations: written material only, written material and videotape, written material and small-group sessions facilitated by fourth-year medical students, and all three methods combined [7]. Students taught in small groups demonstrated significantly superior examination skills compared with students taught with written material only. In another study, the effectiveness of two booster strategies designed to improve retention of skills and knowledge in neonatal resuscitation was compared [6]. Residents were randomly assigned to one of three groups to be given a video booster session, a hands-on (mannequin) booster session, or no booster session. All participants completed a follow-up test 6–8 months later, at which time a Neonatal Resuscitation Program (NRP) written test and a performance checklist were completed. No differences were noted between groups on either written test scores or checklist performance.

In our original design, we had intended to include a randomly assigned control group of students who would be given neither video nor simulator teaching. Because of ethical concerns raised by university faculty and by faculty within our own department, a control group was not included; the concern related to the ethics of withholding from some students teaching modalities that were available and currently used in our department. We did not have the faculty resources or funding to provide a simulator or video session after completion of the study, and, because of the brevity of the anesthesia rotation, students would not have been available to participate in an educational session at a later date. We acknowledge that the lack of a control group limited our study.

Our findings suggest that future simulator research should include testing of long-term retention of knowledge related to experiential learning using a high-fidelity patient simulator. In addition, potential differences in educational outcomes may be easier to elucidate if the teaching modalities being compared are less similar than the two tested here. As well, performance assessments may be more robust if participants are tested on multiple cases rather than on a single critical event.

Simulation technology is gaining widespread acceptance in the medical profession in part because of its ability to present multiple patient problems, the reproducibility of content, the safety of the environment, and the ease of simulating critical events [8]. The anesthesia simulator has been shown to be an enjoyable and valuable educational tool for undergraduate and postgraduate trainees [9–12]. However, practical issues related to acquisition and maintenance costs of anesthesia simulators, as well as the availability of faculty resources, cannot be ignored. Whether the cost and resource implications of educational ventures using an anesthesia simulator in undergraduate education are justified remains controversial. The current study demonstrated that simulator-assisted and video-assisted small-group teaching provided equivalent short-term outcomes on both same-day performance-based assessments and written examinations taken at a later date.

The authors thank the medical students of the University of Toronto (Toronto, Ontario, Canada) and the faculty of the Department of Anesthesia who supported the current study for their time and efforts.

Appendix: Pretest and Posttest

Scenario 1: Myocardial Ischemia

The patient is being given general anesthesia; blood pressure (BP) is 140/90 mmHg, and heart rate (HR) is 88 beats/min. The trachea is intubated, the patient is receiving 70% N2O, 30% O2, and 0.8% Forane, and the lungs are being ventilated at a rate of 10 breaths/min with a tidal volume of 650 ml. After faculty has handed the case to the student, the patient's BP falls to 80–90 mmHg systolic, HR increases to 110 beats/min, and the ST segments become depressed (myocardial ischemia).

Scenario 2: Anaphylaxis

The patient is being given general anesthesia, the trachea is intubated, and the lungs are being ventilated with 70% N2O, 30% O2, and 1% Forane. Blood pressure is 130/70 mmHg, HR is 78 beats/min, tidal volume is 600 ml, rate is 10 breaths/min, and oxygen saturation and carbon dioxide are normal. Once faculty has handed the case to the student and told the student that Ancef has been given, the scenario begins immediately: blood pressure falls to 80 mmHg systolic, tachycardia ensues at an HR of 140 beats/min, airway pressure increases, and bronchospasm occurs (anaphylactic sequence).

Scenario 3: Hypoxemia

The patient is being given general anesthesia, the trachea is intubated, and the lungs are being ventilated with 70% N2O, 30% O2, and 1% Forane; oxygen saturation and CO2 are normal. Tidal volume is 700 ml, and the rate is 10 breaths/min. Once the case is handed to the student, oxygen saturation begins to decrease to the 80–90% range. Blood pressure and HR remain normal; CO2 is low.

References

1. Ericsson KA, Krampe RT, Tesch-Römer C: The role of deliberate practice in the acquisition of expert performance. Psychol Rev 1993; 100: 363–406
2. Morgan PJ, Cleave-Hogg D, Guest CB, Herold J: Validity and reliability of undergraduate performance assessments in an anesthesia simulator. Can J Anesth 2001; 48: 225–33
3. Chopra V, Gesink B, De Jong J, Bovill J, Spierdijk J, Brand R: Does training on an anaesthesia simulator lead to improvement in performance? Br J Anaesth 1994; 73: 293–7
4. Knudsen MM, Sisley AC: Training residents using simulation technology: Experience with ultrasound for trauma. J Trauma 2000; 48: 659–65
5. Taffinder N, Sutton C, Fishwick R, McManus I, Darzi A: Validation of virtual reality to teach and assess psychomotor skills in laparoscopic surgery: Results from randomised controlled studies using the MIST VR laparoscopic simulator. Stud Health Technol Inform 1998; 50: 124–30
6. Kaczorowski J, Levitt C, Hammond M, Outerbridge E, Grad R, Rothman A, Graves L: Retention of neonatal resuscitation skills and knowledge: A randomized controlled trial. Fam Med 1998; 30: 705–11
7. Lawry GV, Schuldt SS, Kreiter CD, Densen P, Albanese MA: Teaching a screening musculoskeletal examination: A randomized, controlled trial of different instructional methods. Acad Med 1999; 74: 199–201
8. Issenberg S, McGaghie W, Hart I, Mayer J, Felner J, Petrusa E, Waugh R, Brown D, Safford R, Gessner I, Gordon D, Ewy G: Simulation technology for health care professional skills training and assessment. JAMA 1999; 282: 861–6
9. Morgan PJ, Cleave-Hogg D: A Canadian simulation experience: Faculty and student opinions of a performance evaluation study. Br J Anaesth 2000; 85: 779–81
10. Fish MP, Flanagan B: Incorporation of a realistic anesthesia simulator into an anesthesia clerkship, Simulators in Anesthesiology Education. Edited by Henson L, Lee A, Basford A. New York, Plenum Press, 1998, pp 115–9
11. Byrick RJ, Cleave-Hogg D, McKnight D: A crisis management program for residents in anesthesia. Acad Med 1998; 73: 592
12. Gaba D, DeAnda A: A comprehensive anesthesia simulation environment: Re-creating the operating room for research and training. Anesthesiology 1988; 69: 387–93