Abstract
Failure to debrief after critical events is common among anesthesia trainees and likely anesthesia teams. Communication breakdowns are associated with a high rate of failure to debrief.
Debriefing after an actual critical event is an established good practice in medicine, but a gap exists between principle and implementation. The authors’ objective was to understand barriers to debriefing, characterize quantifiable patterns and qualitative themes, and learn potential solutions through a mixed-methods study of actual critical events experienced by anesthesia personnel.
At a large academic medical center, anesthesiology residents and a small number of attending anesthesiologists were audited and/or interviewed about the occurrence and patterns of debriefing after critical events during their recent shift, including operating room crises and disruptive behavior. Patterns of the events, including event locations and event types, were quantified. The proportion of cases debriefed was compared according to whether the event contained a critical communication breakdown. Qualitative analysis of the interviews, using an abductive approach, was performed to add insight to the quantitative findings.
During a 1-yr period, 89 critical events were identified. The overall debriefing rate was 49% (44 of 89). Nearly half of events occurred outside the operating room. Events included crisis events (e.g., cardiac arrest, difficult airway requiring an urgent surgical airway), disruptive behavior, and critical communication breakdowns. Events containing critical communication breakdowns were strongly associated with not being debriefed (64.4% [29 of 45] not debriefed in events with a communication breakdown vs. 36.4% [16 of 44] not debriefed in cases without a communication breakdown; P = 0.008). Interview responses qualitatively demonstrated that lapses in communication were associated with enduring confusion that could inhibit or shape the content of discussions between involved providers.
Despite the value of proximal debriefing in reducing provider burnout and improving wellness and learning, failure to debrief after critical events can be common among anesthesia trainees and perhaps anesthesia teams. Modifiable interpersonal factors, such as communication breakdowns, were associated with the failure to debrief.
Critical events in a hospital setting (e.g., cardiac arrest, difficult airway requiring an urgent surgical airway) carry a high level of stress and acuity.1 They often involve time-sensitive decisions with patients’ lives at stake. In aggregate, these events can be frequent at large institutions, but their rarity at the level of the individual provider only adds to their complexity.2,3 These crises can place a significant personal burden on the healthcare provider, a major concern in an era of increasing attention to the factors that contribute to provider burnout and wellness.
Critical event debriefing is a valuable tool for mitigating the negative impact of crisis events on healthcare providers.4–6 Postevent debriefing also offers opportunities for education and learning, including those that address all of the core competencies of the Accreditation Council for Graduate Medical Education.7,8 In addition, it can be beneficial for quality assurance, ensuring that immediate issues are addressed before the next patient is cared for and determining the need for longer-term follow-up.9 Despite debriefing’s utility, a gap persists between evidence-based theory and practice.10,11 The reasons for this gap are poorly understood and have rarely been studied with the richness of mixed methods. To clarify these reasons, this study examined patterns of debriefing among anesthesiology residents and attendings providing complex care both within and beyond the operating rooms. Our intent was to determine, in the setting of actual patient care at a large academic medical center, (1) which critical events were taking place and (2) how often proximal debriefing took place after these events. We hypothesized that there are quantifiable patterns in the types of events that are debriefed. Qualitative semistructured interviews added insight to the quantitative findings. Given the ubiquity of anesthesiologists at critical events throughout a hospital or health system, we anticipated that many of the lessons learned from this research could be generalizable to other dyads and other specialties. Through our mixed-methods approach, we also explored another pattern of behavior that has long been embraced as essential in crisis resource management: communication.
Materials and Methods
Institutional Review Board approval was obtained from the University of Pennsylvania Office of Regulatory Affairs (Protocol #825918; Philadelphia), which included a waiver of written documentation of consent. The data-collection period spanned October 2016 to June 2017 and February 2018 to April 2018 (a period during which there were 72 anesthesiology residents [Clinical Anesthesia year 1 to Clinical Anesthesia year 3] and more than 80 attending anesthesiologists). At a large academic medical center, anesthesiology residents were audited and queried for the occurrence of critical events, including operating room crises3 and disruptive behavior that undermined a culture of safety.12 Anesthesia attendings were permitted to provide data if they expressed interest in doing so. Study participation was voluntary. Two methods of data collection were used: (1) a research assistant contacted residents/staff to learn of events that had taken place and was available to be contacted for future events; and (2) residents and staff who provided information on critical events and debriefs were invited to be further interviewed. A convenience sample of these volunteers was given a recorded, transcribed, and de-identified semistructured interview. In addition to Institutional Review Board approval, consultation was done with both the departmental Quality Improvement/Patient Safety team and institutional Risk Management (Supplemental Digital Content, https://links.lww.com/ALN/B880).
Quantitative Data Collection and Analysis
Research assistants (R.E.S., M.M.) served as data collectors and were introduced to the residents and members of the department by the Departmental Chair. The data collectors made themselves available in several ways, including being present in the postanesthesia care unit during the day and at handoffs between day and night shifts, and being available to be contacted at any time (Supplemental Digital Content, https://links.lww.com/ALN/B880). Intensive care unit staff and residents on dedicated pain service or intensive care unit rotations were not sampled. While the physical presence of the data collector was limited by resource constraints, audits included weekdays and weekends, days and evenings, and residents at all levels (Clinical Anesthesia year 1 to Clinical Anesthesia year 3). Further, no attempt was made to contact any particular resident or location over another, and data collectors made themselves available to be contacted by the residents at any time.
Residents who participated in the audit were shown a list of events (Supplemental Digital Content, https://links.lww.com/ALN/B880) and were asked if any had taken place during their shift/call. Additional baseline data collected included the initial patient location, the number of days the event occurred before the audit (for consistency, the end of the participant’s shift during which the event occurred was considered “day 0,” even if it was a 24-h shift), and the experience level of the participant (i.e., Clinical Anesthesia year or attending status). As the semistructured interviews were descriptive, for additional data privacy, the number of days the event occurred before the interview was not collected.
If a critical event was identified, study participants were asked whether the event was debriefed during or shortly after the event/case, or whether there were at least some bare-minimum components of a proximal debriefing session that included the study participant, such as a short, dedicated conversation about the event during the associated care or soon thereafter. To avoid being overly strict or prescriptive about what the study participant may have considered a “debriefing,” the event was considered “debriefed” if the study participant stated that at least these bare-minimum components took place or explicitly stated that there was a debriefing session during the case or shortly thereafter. The event was considered “not debriefed” if the above did not take place or if the study participant explicitly noted that “no debriefing took place/occurred.”
During data collection, it became apparent that study participants were reporting issues with communication among personnel, even though this was not part of our prespecified list of critical events (Supplemental Digital Content, https://links.lww.com/ALN/B880). At the completion of data collection, all events and transcripts were re-reviewed by an anesthesiologist with expertise in patient safety (A.F.A.) to determine whether the event entailed a critical communication breakdown. This assessment was adapted from categories of communication failures previously described by Lingard et al.13,14 and informed by prior literature on communication breakdowns.15,16 Case summaries of all the events, blinded to whether the event was debriefed (and to the assessment of communication breakdowns by A.F.A.), were then presented to a medical/linguistic anthropologist (J.T.C.) for an independent review of whether each case contained a critical communication breakdown. Any cases of disagreement were resolved by consensus among four of the authors (A.F.A., J.T.C., R.E.S., and M.M.). A quantitative comparison was then done to assess the proportion of cases that were debriefed by critical communication breakdown status. No a priori statistical power calculation was conducted regarding the relationship between communication and debriefing; the sample size was based on the available data and our previous experience with this design.16
Qualitative Data Collection and Analysis
Data collectors created a narrative of the event after discussing it with the study participant, which was then reviewed together with an attending anesthesiologist (A.F.A.). Study participants who noted a critical event were given the opportunity to have a semistructured interview about the event. Interviewees were asked to describe the event, what their reactions were, whether there was a debriefing/conversation during the case or shortly thereafter, what their thoughts were if there was no debriefing, and if they had anything else they wanted to add. Interviews were audio-recorded and transcribed by a professional transcription service. For three events, two different individuals involved in each event volunteered to be separately interviewed (i.e., six interviews for three events).
Qualitative data analysis was initiated midway through data collection to allow the team to assess thematic saturation and thereby determine when to cease interviewing. Theory development was ongoing throughout this process using an abductive approach, in which propositions were neither assumed a priori nor observed, but rather developed by identifying the most parsimonious explanations for unexpected findings from among a field of competing theories.17 NVivo 11 (QSR International, Australia) was used to manage coding, which was undertaken by three research assistants (R.E.S., M.M., R.C.B.) overseen by an experienced qualitative researcher (J.T.C.) working in collaboration with the lead author (A.F.A.; see Supplemental Digital Content, https://links.lww.com/ALN/B880, for additional information on data collectors/reviewers, as well as more details on the qualitative coding process).
Statistical Analysis
Quantitative data were analyzed using Microsoft Excel and SAS version 9.4 (SAS Institute, USA). The number of days the critical event occurred before the audit was reported as median and interquartile range, and the number of critical events per patient was reported as median and overall range. All P values were two-sided, and P values less than 0.05 were considered statistically significant. The assessment of the communication breakdown rate by debriefing status was done using a chi-square test. Interrater reliability for the variable coding whether the case contained a critical communication breakdown (i.e., the assessment of the coding of this variable by A.F.A. vs. J.T.C.) was measured by calculating the simple κ coefficient.
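For readers interested in how these summary statistics can be computed, the following is a minimal sketch and not the authors’ analysis code (the study used Excel and SAS 9.4). It reproduces the Pearson chi-square test on the 2 × 2 debriefed-by-communication-breakdown table reported in the Results and illustrates the simple (unweighted) κ calculation on hypothetical rater labels, since per-case rater assignments are not reported here; the scipy and scikit-learn functions are standard Python equivalents of the SAS procedures.

```python
# Illustrative sketch only: the study's analysis used SAS 9.4, not Python.
# The 2 x 2 counts are taken from the Results; the per-case rater labels
# used for the kappa example are hypothetical placeholders.
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Rows: critical communication breakdown (present / absent)
# Columns: not debriefed / debriefed
table = [[29, 16],   # breakdown present: 29 not debriefed, 16 debriefed
         [16, 28]]   # breakdown absent:  16 not debriefed, 28 debriefed

chi2, p, dof, _ = chi2_contingency(table, correction=False)  # Pearson chi-square
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.3f}")   # P is approximately 0.008

# Simple (unweighted) Cohen's kappa between two raters' breakdown assessments.
# These labels are invented for illustration; only the summary kappa (0.93) is reported.
rater_a = [1, 1, 0, 1, 0, 0, 1, 1]  # anesthesiologist/patient safety expert (hypothetical)
rater_b = [1, 1, 0, 1, 0, 1, 1, 1]  # medical/linguistic anthropologist (hypothetical)
print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```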
Results
Quantitative Findings
After exclusion of two events because of insufficient information from the study participant, 89 events were identified during the study period. A breakdown of the study population is shown in figure 1. All study participants opted to participate when approached (100% response rate). Of the 64 events identified by discussions without an associated interview transcript, the median number of days between the event and the discussion was 0 (i.e., the day of the event or immediately after the call-shift; interquartile range, 0 to 2).
Event locations spanned the hospital and included events occurring during the day and at night. The locations were categorized according to where the event started (table 1). Figure 2 shows a breakdown of the types of events identified. A total of 157 event types occurred across 89 patients (median per patient, 2; range, 1 to 5). For the purposes of calculating a debriefing rate, events were counted only once per patient-related critical event episode, even if parallel events occurred or more than one person was interviewed about the event. For example, if a patient had a period of significant or prolonged hypotension and/or hypoxemia, followed by cardiac arrest, in the setting of a critical communication breakdown, and this was reported by two different study participants, it was counted as one event. Using this conservative definition, only 49.4% of events (44 of 89) had a proximal debriefing as reported by study participants involved in the events (table 1).
Of the 89 events, more than 50% (45 of 89) contained a critical communication breakdown. There was excellent interrater reliability with respect to the assessment of a critical communication breakdown between the anesthesiologist/patient safety expert (A.F.A.) and the medical/linguistic anthropologist blinded to debriefing status (J.T.C.) (κ = 0.93). Illustrative vignettes of the types of communication breakdowns are shown in table 2. Of these 45 events, 80% (36 of 45) contained more than one type of breakdown in communication. Events containing at least one critical communication breakdown were strongly associated with not being debriefed (64.4% [29 of 45] not debriefed in events with a communication breakdown vs. 36.4% [16 of 44] not debriefed in cases without a communication breakdown; P = 0.008).
Qualitative Findings
We analyzed 25 events associated with 26 semistructured interviews (fig. 1). The interviews lent additional detail to the quantitative findings. Given the lengthy nature of interview excerpts, an expanded version of these qualitative results can be found in the Supplemental Digital Content (https://links.lww.com/ALN/B880). Residents described a range of lapses in effective communication. These lapses were often described as incurring communicative sequelae that persisted beyond the event.
Many residents related “stressful” or “confusing” circumstances born of contradictory directives. Conflicting clinical opinions and approaches were reported as coming from anesthesia faculty/residents as well as from other providers. For instance, one resident reported being given mutually exclusive directives from the surgery and anesthesia teams during a code, an occurrence attributed to an ambiguous decision-making hierarchy:
INTERVIEWER: [W]as anyone giving clear directives, or was there someone running the code?
ANESTHESIA RESIDENT (R): Unfortunately no. It was very disorganized […]. It wasn’t very clear who was in charge at that moment. And communication was very poor. On one hand the surgical team was saying don’t do compressions. And then we were saying to do compressions.
In other accounts, anesthesia attendings had “different opinions on what could’ve been done with the airway.” While residents acknowledged that tracheal intubation could be accomplished in multiple ways, they noted the frustration and confusion that can be inherent when the clinical scenario is complex. One resident detailed a particularly severe example:
R: [T]here was a lot of miscommunication between the teams. I think that between our team as anesthesia providers, we all didn’t listen to each other. I think the attending felt offended because a resident pretty much overruled him in front of everyone. And then, the resident felt like she was in a bad spot […]. She really wanted to do what’s best for the patient.
Here, the resident quoted was a bystander in this disagreement. The resident described the attending as having “tunnel vision,” “not receptive at all” to a different opinion (of note, the resident commented that input from other attendings was solicited over the course of the event, and there was not clear consensus until the situation became more urgent).
Residents also expressed frustration about instances in which seemingly basic communication issues, such as the ability to successfully contact providers during emergency situations, negatively impacted the team’s ability to solve problems. One anesthesia resident explained that during a protracted crisis event they “called people in [i.e., home-call cardiac attending anesthesiologist and home-call anesthesia resident rotating on cardiac], which was a mess because […] the [phone call attempt to the home-call] cardiac attending went to voicemail.” This resident subsequently called the wrong home-call anesthesia resident based on a misreading of the schedule (of note, phone calls, not pages, were the routine method of communication for the clinical scenario in question at the study institution). This scenario served as a prelude to further downstream miscommunication:
R: And there was miscommunication, as there often is. I tried to mention to her [the cardiac nurse] that I had, in fact, called the cardiac [anesthesia] attending, but had not gotten in touch with her and left a voicemail. And then, I overheard her [the cardiac nurse] on the phone saying that we talked to cardiac anesthesiology. So I said to her [the cardiac nurse], I was like, sorry. Maybe you misunderstood me. And I think – I just think that message kept getting not heard…[later in the passage]…and he [the cardiac surgical fellow] was furious that cardiac anesthesia was not there…he kept saying, if anything happens, it’s on you guys, which is not helpful.
Additionally, interviewees emphasized the simple challenge of hearing/understanding colleagues during a code and in turn generating clear responses. The inability to communicate due to the hectic nature of codes arose in several resident interviews. With “lots of people yelling” and no “sense that there was a code leader,” residents sometimes found it nearly impossible to effectively communicate with colleagues:
R: I think it was so chaotic and people were – orders were coming from everywhere. One person over in the corner was saying something. […] [Someone] over here was saying something. Another one over here was saying something. So it was…very difficult to even communicate with anybody to be honest with you.
Irrespective of the resident level of experience, residents’ narratives were similar when it came to the “chaotic” code environment.
Discussion
Critical events can occur frequently in large hospitals, and we observed that only about half were associated with a proximal debriefing involving the anesthesiology resident. This likely understates the extent of the problem, given our liberal definition of debriefing, which included what some would consider its bare-minimum components. As this was a study predominantly of anesthesia trainees and their attendings, the incidence of debriefing with the entire team may be even lower. Barriers extend well beyond production pressure and limited resources (e.g., time and space) to include challenging team dynamics.
Events containing a communication failure were significantly less likely to be debriefed. Over half of the crisis events contained at least one critical communication breakdown. This is also likely an underestimate of communication failures during critical events, as our study was limited to interviews (mostly from one individual involved) and brief narrative event descriptions. When the critical event centered on patient comorbidities/pathophysiology, the event was more likely to be debriefed. When the critical situation was (or contained) a critical communication breakdown, people were more likely to walk away without a proximal debriefing. The finding that events with communication failures are less likely to be debriefed suggests a “two-hit” hypothesis that could increase malpractice risk and compromise patient safety: a communication breakdown during the event, followed by the absence of a debriefing afterward. Closed claims studies by both the American College of Surgeons and the American Society of Anesthesiologists have shown communication problems (both intraoperative and outside the operating room) to be a significant source of complications in malpractice claims.18–20

It may be easier for clinicians to discuss pathophysiology and medical facts than to deal with human factors and team dynamics. This may, in part, reflect how recently domains such as communication and professionalism have appeared on national/standardized examinations relative to other domains.21,22 The fact that events centering on a communication breakdown are less likely to be debriefed highlights the value of preventing communication problems in the first place through medical education and patient-safety-based interventions such as checklists, handoffs, and simulation-based training.7,8,23–25 Crisis checklists, emergency manuals, and other cognitive aids have long been embraced as tools to improve crisis resource management and patient safety.3,26,27 Simulation-based training programs exist to improve debriefing skills, encourage higher-quality conversations, and excavate reasoning, all of which are directly relevant to the anesthesia provider.28 There are also organizations and departments that have created and/or implemented programs to improve various aspects of peer support.29–32

While there is value in distal (or “cold”)33 debriefing once team members have been able to process an event, it does not remove the potential value of proximal debriefing. “Hot” debriefing has been described in the critical care literature as “tak[ing] place soon, often immediately, after the resuscitation attempt.”33 As the majority of our audits were performed within 2 days, our data inherently characterize a short interval representing what is proximal. Individual institutions may benefit from local customization, consistent with cultural norms and available resources, regarding the distinction between these timeframes.
The results should be interpreted in the context of the study design and its limitations. This was a single-institution study; patterns observed may reflect the target institution’s culture and workflow. However, events occurred in a wide range of locations spanning well beyond the operating room. The ubiquitous nature of anesthesiology residents and the study of real events provided generalizability difficult to obtain from a multi-institutional study of just one clinical setting, or a simulation-based study debriefing hypothetical scenarios.20 There was also potential for selection bias, as events were not captured by random audits of call teams. A random-audit approach was quickly abandoned, as there was strong resident enthusiasm for the study, to the extent that residents were contacting data collectors in an unsolicited fashion. The combination of snowball sampling and availability of data collectors was effective at obtaining detailed information on nearly 90 crisis events over a relatively short time period. The fact that residents were this enthusiastic to speak about a sensitive topic is arguably a finding in and of itself, as it speaks to the timeliness/relevance of the topic, the desires of the current generation of residents, and departmental leadership buy-in.
In addition to peer-review protection/privileges (Supplemental Digital Content, https://links.lww.com/ALN/B880), we intentionally chose to protect subjects by limiting collection of information on the study participants themselves. There is precedent for this strategy regarding sensitive safety-related information. For the Australian Incident Monitoring Study, anesthesia providers were allowed to report, “on an anonymous and voluntary basis, any unintended incident which reduced, or could have reduced, the safety margin for a patient…anonymity and medicolegal safety are key factors in the success of [Australian Incident Monitoring Study].”34 While future studies could benefit from data-corroboration methods across team members, they would also have to address potential disadvantages: a smaller event sample size, less willingness of residents (or others) to volunteer events, and added risk-management concerns when events are linked across corroborators to patient identifiers and the medical record. Our findings reflect that critical communication breakdowns were described by study participants more frequently in events that were not debriefed. A study design with an a priori plan to ask residents about communication failures could serve as insightful future work. It is nevertheless compelling that critical communication breakdowns were so starkly on the minds of participants that they raised these issues (with a strong association with the failure to debrief) without being specifically asked. Since we did not audit/interview the corresponding attendings for most events, we cannot rule out the possibility that attendings may have offered or demonstrated a willingness to debrief that was not taken up by the resident. Nevertheless, it is unlikely that a thoughtful debriefing took place for a given event and the resident then denied it. While we observed an association between communication breakdowns and lack of debriefing, association does not necessarily mean causation. Nonetheless, the strength of the observed association, reinforced by an in-depth qualitative analysis, offers a case for this association to be explored further.

Last, there is the potential for recall bias, as not all events were reported to the data collector immediately after they happened. This was minimized by data collectors who made themselves widely available. The interquartile range for the number of days between the event and the audit spanned from the end of the shift during which the event happened to 2 days later. The fact that some events were discussed more distally was also a strength, as it gave participants a chance to reflect further on the critical events they had experienced.
This work contributes to a growing effort across specialties to improve debriefing. We intentionally did not limit our debriefing definition to “Critical Incident Stress Debriefing,” which was initially popularized by Mitchell for emergency workers.6 To mitigate potential iatrogenic emotional effects from debriefing,35 institutions may derive value from local customization of the minimum event types and debriefing components appropriate for the nature and resources of their institution and staff. Practical guides for emergency department debriefing recommend standardizing the minimum number of event types to debrief in alignment with departmental goals, local needs, and priorities.36 While we favored an inclusive list including any event for which an individual involved desired/requested a debriefing (Supplemental Digital Content, https://links.lww.com/ALN/B880), the local customization of at least a bare-minimum list can allow for standardization. The mere standardization of offering to proximally debrief, despite the known barriers, may mitigate provider burnout, a topic that has received recent national attention.37–39 There is already literature advocating for routine debriefing at the end of surgical cases,23 with the success of such initiatives dependent on whether they are actually implemented.40,41 Interventions to facilitate and improve debriefing between surgical attendings and surgical residents42 are increasingly popular. Curricula containing debriefing after critical events have been noted in the nursing literature, and programs/studies exist for multidisciplinary teams in both the operating room and other settings.20,43,44 While debriefing with more team members may hypothetically have more barriers, there is evidence from real-time use that it can also improve efficiency and patient outcomes.45,46
The impact of a critical event falls on a continuum: at one end are provider burnout, compromised patient safety, and adverse events; at the other are provider resilience and the ability of providers to serve as a buffer that protects patients from an imperfect system.47,48 Proximal debriefing after critical events, even if done briefly, may be an essential conduit to improve resilience and learning (at the individual, team, and systems levels). Even among the earliest studies using similar methods to study critical events in anesthesiology, communication was observed to be a major theme.49,50 It has been 40 yr since the December 1978 landmark critical incident article by Cooper et al. stated that “…factors frequently associated with incidents were inadequate communication among personnel, haste or lack of precaution, and distraction.”49 Our study shows that failure to debrief after critical events can be common, particularly in association with inadequate communication among personnel. Given the broad potential impact of critical events on patients, providers, and healthcare systems, continued research on feasible, generalizable, and sustainable interventions for proximal support after these events is imperative.
Acknowledgments
The authors would like to thank the following individuals, all affiliated with the Department of Anesthesiology and Critical Care of the University of Pennsylvania Health System (Philadelphia, Pennsylvania) during different parts of the project, for their general support with various aspects of the work: Sydney Brown, M.D., Ph.D.; Joseph Mintz, M.D.; Joseph D. Pecha, B.S.; Levi J. Bowers, B.S.; Elizabeth A. Valentine, M.D.; Jesse M. Raiten, M.D.; Josh Cotton, M.S., Ph.D.; Tom Chaby, M.A.; Maryann Henry, C.R.N.A., M.S.; and Carlene Mclaughlin, C.R.N.A., M.S.N., Ph.D.
Research Support
Support was provided solely by institutional and/or departmental sources, including grants from the University of Pennsylvania (Penn) Center of Excellence for Diversity in Health Education Research (to Dr. Arriaga), Bach Fund (to Dr. Arriaga), and McCabe Fund (to Dr. Arriaga). The views expressed in this article are those of the authors and do not necessarily represent the official views of supporting entities.
Competing Interests
The authors declare no competing interests.