“... how in fact could we pursue this novel approach [using electronic anesthesia records to assess resident competency]?”
MAKING accurate judgments of clinical performance has taxed the brains and ingenuity of educational researchers, clinicians, and accreditation bodies for decades. The advent of electronic anesthesia records opens a new avenue for the assessment of resident competency, which has immediate appeal. The anesthesia record is, more or less, indisputable; the data already exist in an electronic format; and reports can be generated automatically. There is extensive sampling across many cases—in fact, the sample could be all cases the resident has ever done. With no case sampling error, no error introduced by examiners’ whims and biases, and no requirement for examiner training, electronic records could truly represent a major advance in the assessment of resident workplace competence. There is even potential for automated feedback, enabling residents to compare their performance with professional norms. This ticks a lot of boxes for a useful assessment.1
In their article investigating the associations between resident management of intraoperative blood pressure and existing measures of competence, Sessler et al.2 exploit new research opportunities created by electronic anesthesia records. With evidence on the association between intraoperative hypotension and postoperative outcomes arising from the analysis of big data sets, they propose a novel approach that appeals intuitively as an obvious measure of resident competence. Sessler et al.2 analyzed the systolic blood pressure (SBP) recordings from residents’ anesthesia records and calculated how long each resident’s patients spent with an SBP of less than 70 mmHg. They then looked for correlations between blood pressure management and resident rankings from the local Clinical Competency Committee (CCC) and the In-Training Examination (ITE). However, they failed to show any association between what appear to be long periods of hypotension and resident scores from the CCC or ITE. They conclude that although they have not shown an association, this may still be a useful avenue to pursue for assessing anesthesiologists’ competence. So how in fact could we pursue this novel approach to assessment? It appears to me that there are a number of questions to consider as a starting point.
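The metric at the heart of this approach is simple to state: sum, for each case, the time spent below a blood pressure threshold, aggregate per resident, and test for rank correlation with an existing competency ranking. A minimal sketch of that computation, using invented data, thresholds, and function names (the actual analysis by Sessler et al. is more sophisticated and includes adjustments not shown here):

```python
# Hypothetical sketch only: per-resident minutes with SBP < 70 mmHg,
# rank-correlated (Spearman) with a competency ranking.
# All data and names are illustrative, not from the study.

def minutes_below_threshold(readings, threshold=70, interval_min=5):
    """readings: SBP values sampled every `interval_min` minutes."""
    return sum(interval_min for sbp in readings if sbp < threshold)

def spearman_rho(xs, ys):
    """Spearman rank correlation, assuming no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n + 1) / 2  # mean of ranks 1..n
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # equals var of ry (no ties)
    return cov / var

# Illustrative data: total hypotension minutes and CCC rank per resident
hypo_minutes = [12, 30, 5, 22, 9]
ccc_rank = [2, 5, 1, 4, 3]
rho = spearman_rho(hypo_minutes, ccc_rank)  # → 0.9 for these toy data
```

With no ties, both rank vectors have the same variance, so Spearman’s rho reduces to the Pearson correlation of the ranks, which is what the sketch computes. A null result, as in the study, would correspond to a rho near zero.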
First, what is a good blood pressure? One hopes that these residents were not regularly causing harm to their patients by tolerating these periods of hypotension, so what evidence is there that their management was substandard? The American Society of Anesthesiologists’ clinical practice guidelines recommend measuring the blood pressure at least every 5 min but provide no recommendation on what blood pressure to aim for. Monk et al.3 noted that “Despite the widely assumed importance of blood pressure management on postoperative outcomes, there are no accepted definitions for intraoperative blood pressure levels requiring intervention.” In their retrospective cohort study of 18,756 patient records, they found increased 30-day postoperative mortality in patients whose SBP was less than 67 to 70 mmHg for longer than 5 to 8 min. Walsh et al.,4 in a review of 33,330 electronic records, demonstrated an association between increasing length of time with a mean arterial pressure of less than 55 mmHg and subsequent acute myocardial infarction or acute kidney injury. These studies provide good evidence of an association between hypotension and postoperative mortality and morbidity, but both are observational studies, there is no evidence of causality, and the benefits of intervening to increase the blood pressure are not established. In fact, Hirsch et al.5 found that fluctuations in blood pressure, but not hypotension (mean arterial pressure less than 50 mmHg), were associated with postoperative delirium in a study of 594 elderly patients. Desirable targets for blood pressure in young, healthy, anesthetized patients, presumably those patients more likely to be managed by junior residents without close supervision, may differ from those in older, sicker populations.
So, before introducing specific patient management parameters for assessment, residents could reasonably expect clear guidance on best clinical practice and the criteria against which they are to be assessed.
Second, assuming we all agree that the SBP should be kept above 70 mmHg, how would we go about validating this as a measure of resident competence? Establishing the validity of a new assessment tool involves collecting evidence from a range of sources. Comparing a novel assessment with established assessments, as done by Sessler et al., is a common approach.6 The problem, however, is finding a definitive standard for comparison. CCCs are tasked with assessing residents against the six Accreditation Council for Graduate Medical Education competencies: patient care, medical knowledge, professionalism, interpersonal communication, practice-based learning/personal improvement, and system-based practice/system improvement.7 Although the CCCs may use valid criteria as the basis of their judgments, the reliability of such an assessment, by its nature, is likely to be relatively low. The ITE is a 250-item multiple-choice examination testing medical knowledge and would, with this number of items, most likely be reliable, but its validity as a measure of clinical competence in the workplace is untested. So a lack of correlation between blood pressure management and the CCC or ITE may not in fact help us decide on the value of blood pressure management as a measure of resident competence.
Another approach to establishing the validity of a new assessment is to determine whether those whom you expect to perform well on the assessment do in fact outperform those whom you expect to perform poorly; that is, experts should get higher scores than novices. To establish whether this is indeed the case for blood pressure management, we could look at the anesthesia records of board-certified anesthesiologists. Adjusting for patient case mix, how do blood pressure parameters from the electronic patient records of experts compare with those from residents? A post hoc analysis in the study by Sessler et al. tantalizingly showed that the duration of intraoperative hypotension decreased with increasing months of anesthesia training. This could be a reasonable way of collecting validity evidence for blood pressure management as a measure of competence.
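One crude way to operationalize such a known-groups check would be to test whether hypotension duration trends downward with experience, for example by fitting an ordinary least-squares slope of per-case hypotension minutes on months of training. A toy sketch with invented numbers (any real analysis would, as noted above, also require case-mix adjustment):

```python
# Hypothetical sketch only: does mean hypotension duration per case
# decline with months of training? OLS slope as a crude trend measure.
# Data are invented for illustration, not from the study.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

months_training = [3, 9, 15, 21, 27, 33]
hypo_minutes_per_case = [4.0, 3.5, 3.1, 2.6, 2.2, 2.0]
slope = ols_slope(months_training, hypo_minutes_per_case)
# A negative slope would be consistent with the post hoc finding that
# hypotension duration decreases with training.
```

Extending the same comparison to board-certified anesthesiologists would simply add a further group at the experienced end of the axis.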
Finally, where does blood pressure management fit in the bigger picture of a resident assessment program? CCCs collect evidence across the Accreditation Council for Graduate Medical Education competencies and from many sources. Likewise, the 250 items in the ITE measure knowledge across a wide range of domains. Intraoperative blood pressure management could indeed be a milestone that anesthesiology residents should achieve, but, compared with CCC and ITE assessments, it tests a relatively narrow aspect of patient care. Focusing on readily available data such as blood pressure management, while tempting because of ease of access and measurement, could in fact have perverse consequences as residents direct their attention to reaching prescribed targets at the expense of other, less easily measured aspects of patient care.
Analysis of the large amount of data in electronic anesthesia records points the way for future improvements in patient care and perhaps also offers an exciting new approach to workplace assessment. It is entirely plausible that differences in practitioner skill could translate into differences in actions documented in the anesthesia record, which in turn affect the physiology of the patient. Although Sessler et al. focused on blood pressure management, more sophisticated analyses of electronic patient records could indeed provide valuable information about resident (or attending) competence.
The author is not supported by, nor maintains any financial interest in, any commercial activity that may be associated with the topic of this article.