Search results for "Jeffrey B. Cooper": 1–20 of 25
Articles
Journal:
Anesthesiology
Anesthesiology. September 2018; 129(3):402–405
Published: September 2018
Abstract
Teamwork is now recognized as important for safe, high-quality perioperative care. The relationship in each surgeon–anesthesiologist dyad is perhaps the most critical element of overall team performance. A well-functioning relationship is conducive to safe, effective care. A dysfunctional relationship can promote unsafe conditions and contribute to an adverse outcome. Yet there is little research about this relationship: what works well or poorly, and what can be done to optimize it. This article explores functional and dysfunctional aspects of the relationship, identifies some negative stereotypes each profession has of the other, and calls for research to better characterize and understand how to improve working relationships. Suggestions are given for what an ideal relationship might be and for actions that surgeons and anesthesiologists can take to improve how they work together. The goal is safer care for patients and more joy and meaning in work for surgeons and anesthesiologists.
Articles
Richard H. Blum, M.D., Sharon L. Muret-Wagstaff, Ph.D., John R. Boulet, Ph.D., Jeffrey B. Cooper, Ph.D., Emil R. Petrusa, Ph.D.
Journal:
Anesthesiology
Anesthesiology. April 2018; 128(4):821–831
Published: April 2018
Abstract
Background: Obtaining reliable and valid information on resident performance is critical to patient safety and training program improvement. The goals were to characterize important anesthesia resident performance gaps that are not typically evaluated and to further validate scores from a multiscenario simulation-based assessment.
Methods: Seven high-fidelity scenarios reflecting core anesthesiology skills were administered to 51 first-year residents (CA-1s) and 16 third-year residents (CA-3s) from three residency programs. Twenty trained attending anesthesiologists rated resident performances using a seven-point behaviorally anchored rating scale for five domains: (1) formulate a clear plan, (2) modify the plan under changing conditions, (3) communicate effectively, (4) identify performance improvement opportunities, and (5) recognize limits. A second rater assessed 10% of encounters. Scores and variances for each domain, each scenario, and the total were compared. Low domain ratings (1, 2) were examined in detail.
Results: Interrater agreement was 0.76; reliability of the seven-scenario assessment was r = 0.70. CA-3s had a significantly higher average total score (4.9 ± 1.1 vs. 4.6 ± 1.1, P = 0.01, effect size = 0.33). CA-3s significantly outscored CA-1s for five of seven scenarios and for domains 1, 2, and 3. CA-1s had a significantly higher proportion of worrisome ratings than CA-3s (chi-square = 24.1, P < 0.01, effect size = 1.50). Ninety-eight percent of residents rated the simulations more educational than an average day in the operating room.
Conclusions: Sensitivity of the assessment to CA-1 versus CA-3 performance differences for most scenarios and domains supports validity. No differences by experience level were detected for the two domains associated with reflective practice. Smaller score variances for CA-3s likely reflect a training effect; however, worrisome performance scores for both CA-1s and CA-3s suggest room for improvement.
Articles
Matthew B. Weinger, M.D., M.S., Arna Banerjee, M.B.B.S., Amanda R. Burden, M.D., William R. McIvor, M.D., John Boulet, Ph.D., Jeffrey B. Cooper, Ph.D., Randolph Steadman, M.D., M.S., Matthew S. Shotwell, Ph.D., Jason M. Slagle, Ph.D., Samuel DeMaria, Jr., M.D., Laurence Torsher, M.D., Elizabeth Sinz, M.D., M.Ed., Adam I. Levine, M.D., John Rask, M.D., Fred Davis, M.D., Christine Park, M.D., David M. Gaba, M.D.
Journal:
Anesthesiology
Anesthesiology. September 2017; 127(3):475–489
Published: September 2017
Abstract
Background: We sought to determine whether mannequin-based simulation can reliably characterize how board-certified anesthesiologists manage simulated medical emergencies. Our primary focus was to identify gaps in performance and to establish the psychometric properties of the assessment methods.
Methods: A total of 263 consenting board-certified anesthesiologists participating in existing simulation-based maintenance of certification courses at one of eight simulation centers were video recorded performing simulated emergency scenarios. Each participated in two 20-min, standardized, high-fidelity simulated medical crisis scenarios, once as primary anesthesiologist and once as first responder. Via a Delphi technique, an independent panel of expert anesthesiologists identified critical performance elements for each scenario. Trained, blinded anesthesiologists rated the video recordings using standardized rating tools. Measures included the percentage of critical performance elements observed and holistic (one to nine ordinal scale) ratings of participants' technical and nontechnical performance. Raters also judged whether the performance was at a level expected of a board-certified anesthesiologist.
Results: Rater reliability for most measures was good. In 284 simulated emergencies, participants were rated as successfully completing 81% (interquartile range, 75 to 90%) of the critical performance elements. The median rating of both technical and nontechnical holistic performance was five, distributed across the nine-point scale. Approximately one quarter of participants received low holistic ratings (i.e., three or less). Higher-rated performances were associated with younger age but not with previous simulation experience or other individual characteristics. Calling for help was associated with better individual and team performance.
Conclusions: Standardized simulation-based assessment identified performance gaps, informing opportunities for improvement. If a substantial proportion of experienced anesthesiologists struggle with managing medical emergencies, continuing medical education activities should be reevaluated.
Articles
David L. Hepner, M.D., M.P.H., Alexander F. Arriaga, M.D., M.P.H., Sc.D., Jeffrey B. Cooper, Ph.D., Sara N. Goldhaber-Fiebert, M.D., David M. Gaba, M.D., William R. Berry, M.D., M.P.H., M.P.A., Daniel J. Boorman, B.S., Angela M. Bader, M.D., M.P.H.
Journal:
Anesthesiology
Anesthesiology. August 2017; 127(2):384–392
Published: August 2017
Abstract
Crisis checklists and emergency manuals are cognitive aids that help team performance and adherence to evidence-based practices during operating room crises. Resources to enable local implementation and training (key for effective use) are linked at http://www.emergencymanuals.org. Supplemental Digital Content is available in the text.
Articles
Randolph H. Steadman, M.D., M.S., Amanda R. Burden, M.D., Yue Ming Huang, Ed.D., M.H.S., David M. Gaba, M.D., Jeffrey B. Cooper, Ph.D.
Journal:
Anesthesiology
Anesthesiology. May 2015; 122(5):1154–1169
Published: May 2015
Abstract
Background: This study describes anesthesiologists' practice improvements undertaken during the first 3 yr of simulation activities for the Maintenance of Certification in Anesthesiology Program.
Methods: A stratified sample of 3 yr (2010–2012) of participants' practice improvement plans was coded, categorized, and analyzed.
Results: Using the sampling scheme, 634 of 1,275 participants in Maintenance of Certification in Anesthesiology Program simulation courses were evaluated from the following practice settings: 41% (262) academic, 54% (339) community, and 5% (33) military/other. A total of 1,982 plans were analyzed for completion, target audience, and topic. On follow-up, 79% (1,558) were fully completed, 16% (310) were partially completed, and 6% (114) were not completed within the 90-day reporting period. Plans targeted the reporting individual (89% of plans) and others (78% of plans): anesthesia providers (50%), non-anesthesia physicians (16%), and non-anesthesia non-physician providers (26%). From the plans, 2,453 improvements were categorized as work environment or systems changes (33% of improvements), teamwork skills (30%), personal knowledge (29%), handoffs (4%), procedural skills (3%), or patient communication (1%). The median word count was 63 (interquartile range, 30 to 126) for each participant's combined plans and 147 (interquartile range, 52 to 257) for improvement follow-up reports.
Conclusions: After making a commitment to change, 94% of anesthesiologists participating in a Maintenance of Certification in Anesthesiology Program simulation course successfully implemented some or all of their planned practice improvements. This compares favorably with rates in other studies. Simulation experiences stimulate active learning and motivate personal and collaborative practice improvement changes. Further evaluation will assess the impact of the improvements and further refine the program.
Summary: In a review of 634 Maintenance of Certification in Anesthesiology Program simulation course participants, 94% successfully implemented some or all of their planned practice improvements, which focused mostly on work environment or systems changes, teamwork skills, and personal knowledge.
Articles
Richard H. Blum, M.D., John R. Boulet, Ph.D., Jeffrey B. Cooper, Ph.D., Sharon L. Muret-Wagstaff, Ph.D.
Journal:
Anesthesiology
Anesthesiology. January 2014; 120(1):129–141
Published: January 2014
Abstract
Background: Valid methods are needed to identify anesthesia resident performance gaps early in training. However, many assessment tools in medicine have not been properly validated. The authors designed and tested a behaviorally anchored scale, used as part of a multiscenario simulation-based assessment system, to identify high- and low-performing residents with regard to the domains of greatest concern to expert anesthesiology faculty.
Methods: An expert faculty panel used a Delphi process to derive five key behavioral domains of interest: (1) synthesizes information to formulate a clear anesthetic plan; (2) implements a plan based on changing conditions; (3) demonstrates effective interpersonal and communication skills with patients and staff; (4) identifies ways to improve performance; and (5) recognizes own limits. Seven simulation scenarios spanning pre- to postoperative encounters were used to assess the performances of 22 first-year residents and 8 fellows from two institutions. Two of 10 trained faculty raters, blinded to trainee program and training level, scored each performance independently using a behaviorally anchored rating scale. Residents, fellows, facilitators, and raters completed surveys.
Results: Evidence supporting the reliability and validity of the assessment scores was obtained, including a high generalizability coefficient (ρ² = 0.81) and expected performance differences between first-year resident and fellow participants. A majority of trainees, facilitators, and raters judged the assessment to be useful, realistic, and representative of critical skills required for safe practice.
Conclusion: The study provides initial evidence to support the validity of a simulation-based performance assessment system for identifying critical gaps in safe anesthesia resident performance early in training.
Articles
Simulation Training and Assessment: A More Efficient Method to Develop Expertise than Apprenticeship
Journal:
Anesthesiology
Anesthesiology. January 2010; 112(1):8–9
Published: January 2010
Articles
Jakob T. Moller, M.D., Tom Pedersen, M.D., Lars S. Rasmussen, M.D., Per F. Jensen, M.D., Bente D. Pedersen, M.D., Odd Ravlo, M.D., Niels H. Rasmussen, M.D., Kurt Espersen, M.D., Nils W. Johannessen, M.D., Jeffrey B. Cooper, Ph.D., Joachim S. Gravenstein, M.D., Bent Chraemmer-Jørgensen, M.D., Finn Wiberg-Jørgensen, M.D., Mogens Djernes, M.D., Lars Heslet, Ph.D., Sophus H. Johansen, M.D.
Journal:
Anesthesiology
Anesthesiology. March 1993; 78(3):436–444
Published: March 1993
Articles
Jakob T. Moller, M.D., Nils W. Johannessen, M.D., Kurt Espersen, M.D., Odd Ravlo, M.D., Bente D. Pedersen, M.D., Per F. Jensen, M.D., Niels H. Rasmussen, M.D., Lars S. Rasmussen, M.D., Tom Pedersen, M.D., Jeffrey B. Cooper, Ph.D., Joachim S. Gravenstein, M.D., Bent Chraemmer-Jørgensen, M.D., Mogens Djernes, M.D., Finn Wiberg-Jørgensen, M.D., Lars Heslet, M.D., Ph.D., Sophus H. Johansen, M.D.
Journal:
Anesthesiology
Anesthesiology. March 1993; 78(3):445–453
Published: March 1993
Articles
Jeffrey B. Cooper, Ph.D., David J. Cullen, M.D., Roberta Nemeskal, R.N., David C. Hoaglin, Ph.D., Clifford C. Gevirtz, M.D., Marie Csete, M.D., Claudia Venable, M.D.
Journal:
Anesthesiology
Anesthesiology. November 1987; 67(5):686–694
Published: November 1987