Through peer review, we separated the contributions of system error and human (anesthesiologist) error to adverse perioperative outcomes. In addition, we monitored the quality of our perioperative care by statistically defining a predictable rate of adverse outcome that is dependent on the system in which practice occurs and responsive to any special causes of variation.
Traditional methods of identifying human errors using peer review were expanded to allow identification of system errors in cases involving one or more of the anesthesia clinical indicators recommended in 1992 by the Joint Commission on Accreditation of Healthcare Organizations. Outcome data also were subjected to statistical process control analysis, an industrial method that uses control charts to monitor product quality and variation.
Of 13,389 anesthetics, 110 involved one or more clinical indicators of the Joint Commission on Accreditation of Healthcare Organizations. Peer review revealed that 6 of the 110 cases involved two separate errors. Of these 116 errors, 9 (7.8%) were human errors and 107 (92.2%) were system errors. Attribute control charts demonstrated that all indicators except one (fulminant pulmonary edema) were in statistical control.
The major determinant of our patient care quality is the system through which services are delivered and not the individual anesthesia care provider. Outcome of anesthesia services and perioperative care is in statistical control and therefore stable. A stable system has a measurable, communicable capability that allows description and prediction of the quality of care we provide on a monthly basis.
Key words: Health care: outcome assessment; process assessment. Quality assurance: peer review.
To ensure that the quality of medical care is improving, the ability to measure quality is necessary. To date, most attempts to measure quality in the medical industry have made use of outcome data. For example, the Health Care Financing Administration publishes case mix-adjusted mortality rates for thousands of American hospitals each year.* This type of data includes information on variation in death rates from hospital to hospital, and from year to year in the same hospital. Medical quality assurance (QA) methods usually assume that there is a “special cause” for this variation that is specific to some group of health care providers, a particular provider, or a unique local condition. These QA methods tend to ignore the “common causes” of variation that are attributable to faults in the system, where “system” refers to all stable aspects of the health care environment. According to Deming, most sources of variation in the quality of a product or service, and therefore most opportunities for improvement, may be related to common causes of variation [1].
As in other medical disciplines, QA committees in anesthesiology attempt to identify areas for improvement through peer review. Typically, peer review involves examination of the decision-making process of a practitioner involved with an adverse outcome. If human error is discovered, the practitioner is reprimanded or reeducated. Failure to identify human error usually results in the case being dismissed as an unavoidable outcome.** If Deming is correct [1], however, and most possibilities for improvement are related to common causes of variation, then our peer review process should look at faults in the system as critically as peers examine each other. In addition, industrial quality management tools, such as statistical process control charts, could be used to monitor product quality and variation (both common and special causes).***
To define the contribution of system faults to adverse anesthesia outcomes, the Department of Anesthesiology, State University of New York at Stony Brook, Stony Brook, New York, expanded the traditional methods of identifying human errors to allow the identification of system errors using peer review. At the same time, we subjected our outcome data to statistical process control analysis to monitor the quality of our anesthesia care by statistically defining a predictable rate of adverse outcome that is dependent on the system in which practice occurs and responsive to any special causes of variation. Together, these two methods delineate the major determinant of the quality of our perioperative patient care.
Materials and Methods
An accepted model of anesthesiology peer review [2] was modified to include system errors in the peer analysis process. These methods were applied to all cases at University Hospital during the calendar year 1992 that involved one or more of the anesthesia clinical indicators recommended at that time by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO).**** Cases were examined through our peer review process for types of error. Attribute control charts were applied to the indicator data (outcome data) to identify both common causes and special causes of variation.
Data Collection
All cases exhibiting one or more of the original JCAHO anesthesia clinical indicators (Table 1) at University Hospital during 1992 (January 1-December 31) were referred to the Department of Anesthesiology. Sources for initial referral were the anesthesiologist (resident or attending), other clinical personnel (such as nurses or operating room technicians), the medical care review team (several trained chart reviewers employed by the hospital), or any combination of the three. Anesthesiologists reported occurrences of clinical indicators on a continuous basis by filing a written report with the department at the time of the occurrence. The anesthesiologist's report included a narrative of the events and an analysis of the errors involved. Other clinical personnel submitted traditional “incident reports” directly to the department or indirectly through the medical care review team. The medical care review team screened incident reports and examined the medical records of inpatients within 24 h of admission or surgery and at least every 4 days thereafter. Cases meeting indicator criteria discovered by the medical care review team were reported to the department on a monthly basis and therefore served as an extradepartmental fail-safe measure for detection of indicator occurrence in inpatients. Similarly, clinical indicators occurring postoperatively in ambulatory surgical patients were detected by clinical personnel through a follow-up telephone call on the 1st postprocedure day, response to a written survey, or on readmission to the hospital. The number of cases referred to the department, the initial source(s) of each referral, and the clinical indicator(s) involved were recorded each month. A single case could produce two or more clinical indicators and be referred from multiple sources. Referrals received after a particular case had been discussed by the department QA committee were discarded unless new information was provided.
Each case was reviewed by the preliminary QA committee, consisting of two anesthesiologists from the Department of Anesthesiology, to see that the inclusion criteria were met. Contact was made with the anesthesiologist involved or the medical record was reviewed so that an abstract could be prepared for presentation to the department QA committee. The department QA committee included all attending faculty and residents (approximately 25 staff anesthesiologists and 36 resident anesthesiologists), who met on a monthly basis to participate in peer review of the cases reported to date and to reach a consensus regarding the error analysis. Figure 1 provides an overview of the quality management plan and flow of data within our institution. This data collection system was in place for several years before our study and remained unchanged throughout the study period.
Peer Review
The principle underlying our peer review process conducted by the department QA committee is that all adverse outcomes, or clinical indicators, are the result of error, either “human error” or “system error.” Nominal definitions for subcategorizing these two types of errors were created to add structure and increase the objectivity of the peer review process. Human errors included failing to perform a technique properly, misusing equipment, disregarding available data, failing to seek appropriate data, and responding incorrectly to data because of a lack of knowledge. System errors included accidental occurrences resulting from performing a technique correctly, equipment failure despite proper use, missed communication while following established protocol, inability to correct a disease process with our current standards of care, inability to detect a disease process with our current screening and monitoring standards, and inability to meet the demand for resources of equipment or personnel. The supervisory capacity of an attending anesthesiologist working with more than one resident or nurse anesthetist was viewed as a unique resource whose limitations were recorded separately from other resources. The types of errors are summarized in Table 2 and Table 3 with common examples of each.
At least one error was attributed to each case involving one or more indicators. If two or more different errors occurred, each error was counted separately to determine the distribution of all errors occurring in 1 yr. When members of the department QA committee failed to reach a consensus regarding the type of error involved with an adverse outcome, the question was resolved by majority opinion.
Statistical Process Control
The frequency of each clinical indicator was plotted monthly on a process control chart. The control chart used was an “attribute p chart,” which reflects the number of defective characteristics (indicators) as a proportion of a variable sample size [3]. The monthly sample size for each indicator, except post-dural-puncture headache and unplanned hospital admission of an ambulatory surgical patient, was the total number of anesthetics performed at University Hospital. For post-dural-puncture headaches, the sample size was the total number of neuraxial anesthetics performed, and for unplanned hospital admissions among ambulatory surgical patients, the sample size was the total number of ambulatory cases. “Upper control limits” (3 SD from the average proportion defective) and “upper warning limits” (2 SD from the average proportion defective) were established based on a binomial distribution.***** Systems were considered “out of control” if a point fell outside of the control limits or a run or trend was detected. A “run” is a succession of seven points that are above or below the average; a “trend” is a succession of seven points that is rising or falling. In a system without special causes for variation, a run or trend has approximately the same probability of occurring as a point outside a control limit, 0.005 [3].
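As an illustration of how such limits can be derived, the sketch below computes monthly upper warning and control limits for a single indicator whose sample size varies from month to month. It is a minimal Python example under the rules stated above, not the charting software cited in the footnote; the function name and the monthly figures (which sum to the year's 13,389 anesthetics) are hypothetical.

```python
# Minimal sketch of attribute p-chart limits with variable sample size.
# Assumes the rules stated in the text: center line = overall proportion
# defective; upper warning limit = +2 SD; upper control limit = +3 SD,
# with the SD taken from the binomial distribution for each month's n.
import math

def p_chart_limits(defectives, sample_sizes):
    """Return the center line and per-month (upper warning, upper control) limits."""
    p_bar = sum(defectives) / sum(sample_sizes)  # average proportion defective
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)  # binomial SD for this month's n
        limits.append((p_bar + 2 * sigma, p_bar + 3 * sigma))
    return p_bar, limits

# Hypothetical monthly data: indicator occurrences and total anesthetics per month.
occurrences = [1, 0, 2, 1, 0, 1, 0, 1, 2, 0, 1, 1]
anesthetics = [1100, 1050, 1180, 1120, 1090, 1150, 1075, 1160, 1140, 1080, 1125, 1119]

p_bar, limits = p_chart_limits(occurrences, anesthetics)
for month, (x, n, (uwl, ucl)) in enumerate(zip(occurrences, anesthetics, limits), start=1):
    p = x / n
    flag = " <-- outside upper control limit" if p > ucl else ""
    print(f"month {month:2d}: p = {p:.5f}, UWL = {uwl:.5f}, UCL = {ucl:.5f}{flag}")
```

In our study these calculations were performed by the charting package referenced in the footnote; the point of the sketch is only to make the arithmetic behind the plotted limits explicit.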
Results
The department performed 13,389 anesthetics from January 1 to December 31, 1992. The QA committee received 114 referrals about 110 cases, involving 119 clinical indicators (Figure 2). The source of referrals is shown for each trimester of 1992 in Figure 3. From January 1 to April 30, 65% of all occurrences were self-reported; from May 1 to August 31, 74%; and from September 1 to December 31, 88%.
Peer review revealed that 6 of the 110 cases involved two separate errors, making the total number of errors 116. Of these, 9 (7.8%) were judged to be human errors and 107 (92.2%) were considered system errors. The distribution of errors is shown in Figure 4. The frequency of occurrence of each clinical indicator per month was plotted on a statistical process control chart (attribute p chart). No runs or trends were detected during the sample period. Only one occurrence (pulmonary edema occurring within 1 postprocedure day) was plotted outside of the upper control limits. Examples of the attribute control charts for 6 of the 13 clinical indicators are shown in Figure 5.
Discussion
In this study we considered the 1992 JCAHO anesthesia clinical indicators as occurrence markers for cases to be identified for peer review. These indicators were issued in 1988 and have since undergone two phases of testing: alpha and beta. Alpha testing was designed to evaluate indicators for “face validity” and feasibility of data collection in a limited number of health care organizations. After successfully completing the alpha phase, all of these indicators were subjected to beta testing. The beta testing phase was designed to evaluate similar characteristics in a broader range of health care organizations [4]. At the start of our study, the 13 anesthesia clinical indicators chosen were in the beta testing phase. Since the completion of the beta phase in 1993, the 13 anesthesia clinical indicators have been reduced by the JCAHO to five perioperative performance indicators in an effort to make them applicable to a broader range of institutions [4] and to emphasize that these adverse outcomes are not specific to errors in anesthesia care.****** Because the original clinical indicators continue to have face validity in their ability to reflect major concerns regarding patient care, and because we encountered no difficulties with our data collection methods (consistent with the experience of institutions participating in the alpha phase), we have continued to apply our methods to all 13 indicators.
Because peer review targeted at human error is perceived as punitive, lack of self-reporting represents a problem for case identification. This has resulted in uncertainty about the rate of occurrences and raised questions about the veracity of peer review [5,6]. In response, hospital management and public oversight organizations have resorted to special mechanisms such as independent chart reviewers and other regulatory measures to improve data collection for peer review [7,8]. By looking at the system as critically as we look at each other, the anesthesiologists in our department begin to share with management the responsibility for delivering quality health care, thus making quality control through peer review less threatening. Evidence for this is the increase in the percentage of cases in which the initial referral source included the health care provider, from 65% to 74% to 88% for each successive trimester of 1992 (Figure 3). Thus members of the department considerably increased the amount of self-reporting. Also of note, 89% of the occurrences involving human error were self-reported by the physician. According to Deming, the basis for transformation to successful quality management in America must include a plan to “create constancy of purpose toward improvement of product and services” and to “drive out fear” of inspection, which results in defensive attitudes and distorted data [9].
The reliability of peer assessments of quality of care has undergone critical examination [5,6]. We incorporated several proposals into our peer review process that appear to have potential for improving reliability. Use of multiple reviewers who meet to discuss the case has been shown to markedly increase consensus among group members [10].******* During the course of this study, the faculty of our department remained relatively constant, so the membership of our peer review group remained stable. Structured assessment procedures have also been recommended to decrease differences in reviewers' understanding of their task and thus to increase the objectivity of implicit peer review [11,12]. By using nominal definitions for categorizing peer review opinions regarding adverse outcomes, errors were relatively easy to identify and group. Furthermore, during the application of this form of error analysis, the categories became more sharply defined than they were at their initial introduction, by means of a casuistic (case-by-case) process. Studies also suggest that use of outcome data increases the reliability of peer assessments [13–15].******** Currently, almost all QA methods use some form of peer judgment to assess quality. Given the widespread acceptance of peer review, we believe that modifying the process to improve its reliability and expand its scope is a better alternative than replacing it.
Our peer review process examined both system errors and human errors. Many of the errors identified as system errors were those that ordinarily would have been considered unavoidable and discarded. Including these occurrences in our peer review and defining them as system errors provides additional information on the causative factors contributing to adverse outcomes and allows quality to be improved through their elimination. In fact, system errors identified by our peer review process account for over 90% of our errors. Another way to consider this is that, without looking at system errors, the vast majority of causes for adverse outcomes as determined through peer review would have been excluded. Hence the major possibility for improvement in the quality of patient care would be excluded. Human error, in contrast, contributed only a small portion to adverse outcome (less than 10%), but in the past dictated the major focus of QA measures. In other words, if all human error had been removed, it would have had only a small effect on the overall quality of care (indicator occurrence) when compared with the effect of removing all system errors. Our experience is consistent with Deming's contention that, in considering possibilities for quality improvement, “94% belong to the system (responsibility of management) 6% special” [1]. Our finding that 92% of the errors belong to the system (Figure 4) suggests that our previous quality improvement efforts and resources have been misdirected.
State and federal government agencies have gone to great expense to establish databases of adverse outcomes and the health care providers held accountable for those outcomes [16–18].********* Practitioners have also contributed to these efforts by drawing conclusions from closed claims analyses [19–24] and perpetuating peer review practices biased toward human errors through exclusion of other, more common types of error [2].********* If system errors (traditionally considered unavoidable) are not excluded from the database, death is the most frequent adverse outcome of all the clinical indicators reviewed (Figure 2). Conversely, dental injuries, which we found to be among the least frequent occurrences, were the most common adverse outcome in previous closed claims data analyses [25]. We are not suggesting that human errors should be overlooked; only that, at present, consideration of their effect on quality is vastly overestimated and misleading, if our experience in a university-based, resident teaching program can be generalized.
The use of statistical process control charts adapts a well-known industrial tool for monitoring product quality. When used in industry, control charts provide a dynamic, rate-based look at the mean occurrence of a monitored product or service feature, with statistically determined limits of expected variation. “Attribute” control charts are used when the feature reflects qualitative characteristics (e.g., defective vs. not defective). A “p” chart was chosen because the number of defectives (indicators) was plotted as a proportion of a sample size that varied from month to month. Control charts allow statistical criteria to be applied to distinguish common cause variation from special cause variation. Common cause is a source of random variation inherent in the process itself or in the tool used to measure the process. Special cause, on the other hand, is a source of variation that is unpredictable, intermittent, and attributable to someone or some special event. The type of action required to reduce special causes of variation is different from that required to reduce variation inherent in the system, and confusing the two sources can result in increased variability [1]. The worker, or the anesthesiologist in our case, may be able to reduce special cause variation but cannot improve a stable system by individual action. Improving a stable system is the responsibility of management (health care leaders) and requires changing the processes by which we render care. Control charts ensure that the appropriate action is taken only when there is clear evidence that it is required, and they lessen the possibility of precipitating trouble by reacting to normal sampling variation. When all special causes of variation have been eliminated and only common cause variation remains, the system is said to be stable or in statistical control. A system that is in control has a statistically definable “process capability.” In other words, the system's performance is predictable and has a measurable, communicable capability [1]. This is not meant to imply that statistical control is the end goal of our efforts. A system can certainly be stable and still be of poor quality (i.e., have an increased mean occurrence rate of adverse outcomes with minimal variation). Our attribute p charts show all systems in statistical control with the exception of the processes resulting in pulmonary edema within 1 postprocedure day.
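To make the out-of-control criteria concrete, the following sketch applies the three rules stated in Materials and Methods (a point beyond its 3-SD upper control limit, a run of seven points on one side of the center line, or a trend of seven successively rising or falling points) to a series of plotted monthly proportions. It is an illustrative Python fragment under those stated rules, with a hypothetical function name, not a reproduction of the software we used.

```python
def control_violations(proportions, center_line, upper_limits, rule_length=7):
    """List the ways a plotted series violates the stated control criteria."""
    reasons = []
    # Rule 1: any point beyond its upper control limit.
    for i, (p, ucl) in enumerate(zip(proportions, upper_limits), start=1):
        if p > ucl:
            reasons.append(f"point {i} falls above its upper control limit")
    # Rules 2 and 3: examine every window of rule_length successive points.
    for i in range(len(proportions) - rule_length + 1):
        window = proportions[i:i + rule_length]
        # Run: all points in the window above, or all below, the center line.
        if all(p > center_line for p in window) or all(p < center_line for p in window):
            reasons.append(f"run of {rule_length} points starting at point {i + 1}")
        # Trend: all successive differences positive (rising) or negative (falling).
        steps = [b - a for a, b in zip(window, window[1:])]
        if all(s > 0 for s in steps) or all(s < 0 for s in steps):
            reasons.append(f"trend of {rule_length} points starting at point {i + 1}")
    return reasons
```

A chart is taken to be in statistical control over the period examined when this list of violations is empty.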
Our control charts' demonstration that most processes leading to adverse outcomes (indicators) are stable appears consistent with the findings from our expanded peer review model. Nearly all system errors in our model could be considered examples of common cause variation. Human errors, typically identified by traditional peer review mechanisms, are more likely to result in special cause variation if left unchecked. Eliminating special cause variation has been the primary function of traditional QA and peer review in the health care industry for many years and may be responsible for our stable systems. Further improvement in the quality of a stable system requires process changes, and continued use of statistical control methods is necessary to monitor the effect of these changes on the quality of care provided.
In searching for a special cause of variation as indicated by statistical process control analysis of the processes that resulted in pulmonary edema in a single patient during March, we found that a 72-yr-old woman was brought to the operating room in cardiogenic shock caused by an acute myocardial infarction resulting in a ventricular septal defect. Despite heroic resuscitative efforts and surgical intervention, the patient expired on the 3rd postoperative day. Although judged to be a system error (inability to correct a disease process with our current standards of care) by our peer review process, it is still possible to consider this a special cause of variation, analogous to the situation in which an industrial worker feels required to proceed with production despite the belief that the materials to be used are defective. In this case, for example, extraordinary care that proved to be futile was extended to a patient. Much more outcome data would have to be reviewed before changing our practice of making extraordinary efforts to save a life; however, the growing emphasis in the medical industry on cost containment certainly raises questions that need to be addressed.
In summary, our data show that the major determinant of our patient care quality is the system through which services are delivered and not the individual anesthesia care provider. We have also demonstrated that the outcome of perioperative care in our system is in statistical control and therefore stable. A stable system has a measurable, communicable capability allowing us to describe, in an agreed-on fashion, the quality of the patient care we provide on a monthly basis. No capability can be ascribed to a process that is unstable [1], demonstrating that statistical control is likely a necessary preliminary step to quality improvement tactics such as benchmarking [26] or instituting practice guidelines [27], which require measurement of quality and consistency to identify the best health care practices. Statistical control also means that costs are predictable, including all costs inherent to the system, those paid to “external customers” such as insurance payments and malpractice claims, and those paid to our “internal customers” such as those incurred from unplanned hospital or intensive care unit admissions. Statistical control is not the end goal. Once statistical control is demonstrated, however, health care leaders and physicians from all specialties can begin to institute efforts to improve the quality of delivered health care by measures aimed at improving a stable and defined health care system.
*Medicare Hospital Mortality Information. GPO 1987 0–196860. Health Care Financing Administration, 1988.
**Vitez T: Judging clinical competence. American Society of Anesthesiologists, 1989.
***Brassard M: The Memory Jogger. Methuen, MA, GOAL/QPC, 1988.
****Accreditation Manual for Hospitals. Oakbrook Terrace: Joint Commission on Accreditation of Healthcare Organizations, 1992.
*****Process Control Chart Tool Kit. Boise, ID, Sof-Ware Tools.
******Gabel R: Evolution of Joint Commission Anesthesia Clinical Indicators. American Society of Anesthesiologists Newsletter 58:24–28, 1994.
*******Ludke RL, Wakefield DS, Booth BM, Kern DC: Pilot study of nonacute utilization of VAMC inpatient service. Final report, SDR 87-003. Washington, DC, United States Department of Veterans Affairs, 1990.
********Brook RH: Quality of care assessment: A comparison of five methods of peer review. HRA 74–3100. Washington, DC, United States Department of Health, Education, and Welfare, 1973.
*********Gellhorn A, Cherkasky M: Report of the New York State Advisory Committee on physician recredentialing: Phase one—general principles, proposed process, recommendations. Department of Health, New York State, 1988.