The Cardiac Anesthesia Risk Evaluation (CARE) score is a simple risk classification for cardiac surgical patients. It is based on clinical judgment and three clinical variables: comorbid conditions categorized as controlled or uncontrolled, surgical complexity, and urgency of the procedure. This study compared the CARE score with the Parsonnet, Tuman, and Tu multifactorial risk indexes for prediction of mortality and morbidity after cardiac surgery.
In this prospective study, 3,548 cardiac surgical patients from one institution were risk stratified by two investigators using the CARE score and the three tested multifactorial risk indexes. All patients were also given a CARE score by their attending cardiac anesthesiologist. The first 2,000 patients served as a reference group to determine discrimination of each classification with receiver operating characteristic curves. The following 1,548 patients were used to evaluate calibration using the Pearson chi-square goodness-of-fit test.
The areas under the receiver operating characteristic curves for mortality and morbidity were 0.801 and 0.721, respectively, with the CARE score rating by the investigators; 0.786 and 0.710, respectively, with the CARE score rating by the attending anesthesiologists (n = 8); 0.808 and 0.726, respectively, with the Parsonnet index; 0.782 and 0.697, respectively, with the Tuman index; and 0.770 and 0.724, respectively, with the Tu index. All risk models had acceptable calibration in predicting mortality and morbidity, except for the Parsonnet classification, which failed calibration for morbidity (P = 0.026).
The CARE score performs as well as multifactorial risk indexes for outcome prediction in cardiac surgery. Cardiac anesthesiologists can integrate this score in their practice and predict patient outcome with acceptable accuracy.
THE growing interest in risk-adjusted analysis of outcome in cardiac surgery has led to the development and validation of several predictive models for postoperative mortality, morbidity, and prolonged hospital stay.1–15 Most models are multifactorial risk indexes developed using multiple regression analysis. Despite their potential usefulness for quality assurance and perioperative care planning, multifactorial risk indexes remain poorly integrated into clinical practice. This is probably because of their complexity in use, their inaccuracy in predicting outcome for individual patients, and their dependence on clinical variables that are not always available.10–12,16 In contrast to multifactorial risk indexes, functional classifications like the New York Heart Association classification or the American Society of Anesthesiologists (ASA) physical status classification are routinely used by anesthesiologists. However, those classifications are not designed to predict outcome after cardiac surgery. Consequently, their predictive ability in this setting is limited and inconsistent.17
Previous studies in cardiac surgery have demonstrated that a large amount of prognostic information can be obtained from a few clinical variables18,19 or clinical judgment alone.17,20,21 On that basis, we developed the Cardiac Anesthesia Risk Evaluation (CARE) score, which is a simple risk classification with an ordinal scale (table 1). The CARE score combines clinical judgment and the recognition of three risk factors previously identified by multifactorial risk indexes: comorbid conditions categorized as controlled or uncontrolled, the surgical complexity, and the urgency of the procedure.
In this study, we hypothesized that the CARE score would be a valid predictor of outcome after cardiac surgery and that clinicians would easily integrate this risk model into their practice. Accordingly, the study had three specific objectives: first, to determine the predictive performance of the CARE score in predicting mortality and major postoperative complications; second, to compare the predictive performance of the CARE score with that of three existing multifactorial risk indexes1–3 for cardiac surgical patients; and finally, to determine the interrater variability and predictive performance of the CARE score when used by experienced cardiac anesthesiologists.
This was a prospective observational study approved by the Human Research Ethics Committee of the University of Ottawa Heart Institute, Ottawa, Ontario, Canada. Written consent was not obtained from individual patients because the study was based on data collected for routine care. A total of 3,548 consecutive patients who underwent a cardiac surgical procedure at the University of Ottawa Heart Institute were included. The first 2,000 patients, who had surgery between November 12, 1996 and March 18, 1998, served as a reference group to develop logistic regression models for each risk model tested in the study. The next 1,548 patients, operated on between March 19, 1998 and April 2, 1999, were used for validation of the risk models. Patients undergoing heart transplantation or implantation of a ventricular assist device as a primary surgery were excluded because of the infrequency of those procedures. Patients who underwent more than one cardiac surgical procedure during the same hospitalization were counted as single cases; however, subsequent cardiac or noncardiac procedures were counted as postoperative complications unless they were planned before the primary cardiac intervention.
Development of the Cardiac Anesthesia Risk Evaluation Score
To facilitate its use by anesthesiologists, the CARE score was designed to resemble the ASA physical status classification, a model also familiar to surgeons. The rationale behind the definition of each risk category of the CARE score is based on general and accepted knowledge in cardiac surgery (table 1). For example, the definitions of the first category (CARE 1) and last category (CARE 5) are based on the fact that clinicians can correctly identify very low- and very high-risk cardiac surgical patients.17,21
In contrast to the assessment of the two risk extremes in cardiac surgery, the subjective estimation of patients at intermediate risk is often inaccurate and inconsistent.21 The use of a few objective clinical variables results in better risk prediction for those patients.18,19,22,23 Therefore, two general but objective groups of risk factors were selected to define the intermediate risk levels in the CARE score (CARE 2–4): the complexity of the surgery and the presence of comorbid conditions, which may be categorized as controlled or uncontrolled. The ranking or relative importance of those covariates in the CARE score is consistent with the findings of most existing multifactorial risk indexes.1–8 Patients with controlled medical problems (e.g., diabetes mellitus, hypertension) are at greater risk than patients without any disease, but at lower risk than patients with uncontrolled diseases (e.g., heart failure with pulmonary edema, renal insufficiency); hence the rationale for the CARE 2 and 3 categories. Uncontrolled comorbid factors and complex or difficult procedures have comparable scores in most multiple logistic regression models, so the same prognostic weight was given to both groups of factors in the CARE score. This explains why either group can be used to define the CARE 3 category. The CARE 4 category accounts for the fact that uncontrolled medical conditions and complex procedures have an additive effect on risk.1–8
Finally, special consideration is given to emergency in the CARE score, so that emergency cases can be easily differentiated from others. This is because emergencies or catastrophic states are the most important predictors of outcome in cardiac surgery.1–8 Therefore, the CARE score has eight possible risk categories: scores 1–5 for elective or urgent procedures, and scores 3E, 4E, and 5E for emergency conditions requiring immediate surgery. Unstable cardiac conditions requiring surgery within 24 h, but not immediately, are considered uncontrolled medical problems. By definition, emergency never applies to CARE 1 or 2.
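Read as pseudocode, the category logic described above can be sketched as a small function. This is only an illustration: the boolean flags stand in for the operational definitions of table 1, and actual scoring relies on clinical judgment.

```python
def care_category(controlled: bool, uncontrolled: bool,
                  complex_surgery: bool, last_hope: bool,
                  emergency: bool) -> str:
    """Map illustrative clinical flags to one of the eight CARE categories.

    controlled/uncontrolled: comorbid disease status; complex_surgery:
    complex or difficult procedure; last_hope: surgery as a last hope
    (CARE 5); emergency: surgery required immediately.
    """
    if last_hope:
        base = 5
    elif uncontrolled and complex_surgery:   # additive effect -> CARE 4
        base = 4
    elif uncontrolled or complex_surgery:    # either alone -> CARE 3
        base = 3
    elif controlled:
        base = 2
    else:
        base = 1
    # By definition, emergency never applies to CARE 1 or 2.
    if emergency and base >= 3:
        return f"{base}E"
    return str(base)

# Uncontrolled disease plus complex surgery, done as an emergency:
category = care_category(controlled=False, uncontrolled=True,
                         complex_surgery=True, last_hope=False,
                         emergency=True)  # -> "4E"
```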
Preoperative and intraoperative data were collected prospectively by the attending anesthesiologists, who completed a database form at the time of surgery. The database form contains 130 preoperative variables pertaining to the severity of the patient’s disease and comorbid factors before the operation and 80 variables documenting intraoperative procedures and events. Research assistants who also looked after the Cardiac Surgical Unit database verified the accuracy and completeness of the collected data on a daily basis. Postoperative outcome data were retrieved from the medical charts after patients’ discharge and from the Cardiac Surgical Unit database, which contains 92 variables related to postoperative evolution. The quality of the collected data was assessed by an independent observer, who extracted 50% of the database information from 175 randomly selected patients and compared them with those found in the charts. An agreement rate of 98% was found between the database information and the data obtained from the charts.
The primary outcomes in this study were in-hospital mortality, regardless of length of stay (LOS), and morbidity, defined as one or more of the following: (1) cardiovascular—low cardiac output, hypotension, or both, treated with intraaortic balloon pump, with two or more intravenous inotropes or vasopressors for more than 24 h, or with both; malignant arrhythmia (asystole, ventricular tachycardia, or fibrillation) requiring cardiopulmonary resuscitation, antiarrhythmia therapy, or automatic cardiodefibrillator implantation; (2) respiratory—mechanical ventilation for more than 48 h, tracheostomy, or reintubation; (3) neurologic—focal brain injury with permanent functional deficit, or irreversible encephalopathy; (4) renal—acute renal failure requiring dialysis; (5) infectious—septic shock with positive blood cultures, or deep sternal or leg wound infection requiring intravenous antibiotics, surgical debridement, or both; (6) other—any surgery or invasive procedure necessary to treat a postoperative adverse event associated with the initial cardiac surgery.
In the absence of morbidity data, prolonged postoperative LOS in hospital has been used as a surrogate for morbidity in other studies.3,10 It was therefore analyzed as a secondary outcome in this study. Prolonged postoperative LOS in the hospital was defined as a stay of 14 days or more, which corresponds to the 90th percentile for postoperative LOS in the entire study population. This cutoff point was proposed in a previous study because it likely reflects a LOS resulting from complications rather than differences in discharge practice.3
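The 90th-percentile rule behind the 14-day cutoff can be sketched as follows; the stay lengths here are synthetic stand-ins for the study data, used only to show the calculation.

```python
import math
import random

# Synthetic postoperative stays (days) for illustration only.
random.seed(0)
los_days = [random.gammavariate(2.0, 3.5) for _ in range(3548)]

def nearest_rank_percentile(data, p):
    """Smallest value such that at least p% of the data lie at or below it."""
    ordered = sorted(data)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[k]

cutoff = nearest_rank_percentile(los_days, 90)
# "Prolonged" = stay at or above the 90th-percentile cutoff (~10% of cases).
fraction_prolonged = sum(d >= cutoff for d in los_days) / len(los_days)
```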
Risk Classification of All Patients
Using the validated preoperative database information, two investigators (JYD and SG) gave a CARE score to each patient. Throughout the study, the investigators used strict definitions of uncontrolled medical problems and complex surgical procedures, as presented in the footnotes of table 1. Multifactorial risk scores were also determined for each patient according to the risk indexes developed for general cardiac surgical populations by Parsonnet et al.,1 Tuman et al.,2 and Tu et al.3 (table 2). Those classifications contain variables available in most of our patients, and, like the CARE score, they apply to all cardiac surgical patients, not only to those undergoing coronary artery surgery. Patients were risk stratified according to the original criteria and definitions described in each of those risk classifications.1–3 To attenuate the reported inconsistency associated with two subjective risk factors (catastrophic states and rare circumstances) in the original Parsonnet classification,16 60 conditions potentially computable under those factors were listed and given a risk value at the beginning of the study. This list of conditions and scores was used consistently by the investigators to compute the Parsonnet risk score, similar to what was done by Gabrielle et al.9
Use of the Cardiac Anesthesia Risk Evaluation Score by Clinicians
Eight experienced cardiac anesthesiologists participated in the study. They were asked to provide a CARE score for all their patients, before surgery. The investigators’ ratings served as a reference for comparison with both the anesthesiologists’ ratings and the multifactorial risk indexes.
The association between the patients’ characteristics and mortality or morbidity was determined by univariate analysis, using a chi-square test or a Fisher exact test when appropriate. For each risk index considered in this study, including the CARE score, separate predictive models for mortality, morbidity, and prolonged postoperative LOS were developed using logistic regression analysis. The CARE score categories 1, 2, 3, 3E, 4, 4E, 5, and 5E were coded 1 through 8, respectively, and those numeric scores were used as independent variables in the logistic regression models, because the models cannot handle nominal scores such as 3E, 4E, and 5E. For the multifactorial risk indexes, logistic regression models were similarly developed using the original risk categories (not the total score or integer) proposed by the developers of those indexes. The predictive performance of each model was assessed by determining its discrimination and calibration for mortality, morbidity, and prolonged LOS.
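The modeling step above, a logistic regression of a binary outcome on the ordinal 1–8 code, can be sketched in a self-contained way. This is a minimal illustration with synthetic data, not the study's actual fit; real analyses would use standard statistical software.

```python
import math
import random

def fit_logistic(x, y, iterations=25):
    """Univariate logistic regression (intercept b0, slope b1) fitted by
    Newton-Raphson on the log-likelihood, as standard software does."""
    b0 = b1 = 0.0
    for _ in range(iterations):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1.0 - p)
            g0 += yi - p              # gradient of the log-likelihood
            g1 += (yi - p) * xi
            h00 += w                  # Fisher information matrix entries
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Synthetic illustration: ordinal codes 1-8, event risk rising with the code.
random.seed(1)
x, y = [], []
for code in range(1, 9):
    p_true = 1.0 / (1.0 + math.exp(-(-4.0 + 0.6 * code)))
    for _ in range(200):
        x.append(code)
        y.append(1 if random.random() < p_true else 0)

b0, b1 = fit_logistic(x, y)  # slope b1 should be positive and near 0.6
```

The fitted coefficients then give a predicted probability for each category, which is what the calibration analysis compares against observed rates.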
Discrimination, or predictive accuracy, was assessed for all predictive models by building receiver operating characteristic (ROC) curves for mortality, morbidity, and prolonged postoperative LOS.24 The ROC curve is a graphic technique plotting the true-positive rate (sensitivity) against the false-positive rate (1 − specificity) of a diagnostic test at different cutoff points (in this study, the various categories of each risk classification). The top of the y-axis represents a perfect test with a 100% true-positive rate and a 0% false-positive rate. The area under the ROC curve equals the probability of correctly identifying the patient with a complication when applying the risk classification to a pair of randomly selected patients (always one with a complication and one without) on successive trials. Thus, the area under the ROC curve is commonly used to measure and compare the predictive accuracy of risk classifications. An area under the ROC curve of 1.0 indicates perfect accuracy, whereas an area of 0.5 (the line of no discrimination) means that the classification is no better than chance. Areas of 0.5 to 0.7 suggest low accuracy, and values greater than 0.7 support the usefulness of the risk classification as a risk predictor.24 In this study, the areas under the various ROC curves and their standard errors (SE) were measured and compared with the two-tailed nonparametric ROC analysis of DeLong et al.,25 using the statistical program AccuROC for Windows 95 (Accumetric Corporation, Montreal, Canada), with correction for multiple comparisons.
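The probabilistic interpretation of the area under the ROC curve given above can be computed directly for an ordinal score, as a sketch (DeLong's method additionally supplies standard errors for comparing correlated curves, which is not shown here):

```python
def roc_auc(scores_with_event, scores_without_event):
    """Area under the ROC curve via its probabilistic interpretation:
    the chance that a randomly drawn patient with the outcome receives a
    higher score than one without (ties count 1/2). Equivalent to the
    Mann-Whitney U statistic scaled to [0, 1]."""
    wins = 0.0
    for s_event in scores_with_event:
        for s_no_event in scores_without_event:
            if s_event > s_no_event:
                wins += 1.0
            elif s_event == s_no_event:
                wins += 0.5
    return wins / (len(scores_with_event) * len(scores_without_event))

# Hypothetical ordinal risk codes (e.g., numerically coded CARE categories)
# for patients with and without a complication:
auc = roc_auc([3, 4, 5, 5], [1, 2, 2, 3])  # -> 0.96875
```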
All risk indexes in the study, including the CARE score, were tested as categorical variables. Data were tabulated in contingency tables, and calibration, which represents the precision of the probabilities generated by a prediction model, was assessed using the Pearson chi-square goodness-of-fit test, the most commonly used statistic for contingency tables.26 It compares the predicted outcomes (mortality, morbidity, or prolonged LOS) estimated from the logistic regression models with the observed outcomes for each risk category of the prediction model.26,27 A small chi-square value (i.e., a P value > 0.05) indicates acceptable calibration.
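The calibration statistic described above is a simple sum over the risk categories; the counts below are hypothetical, chosen only to illustrate the comparison of observed with model-expected events:

```python
def pearson_chi_square(observed, expected):
    """Pearson goodness-of-fit statistic: sum of (O - E)^2 / E over the
    risk categories of the prediction model."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical observed vs model-expected event counts for 8 categories.
observed = [2, 5, 14, 8, 30, 10, 18, 6]
expected = [2.5, 4.8, 13.1, 9.0, 28.6, 11.2, 19.4, 5.9]

chi2 = pearson_chi_square(observed, expected)
# The 0.95 quantile of a chi-square distribution with 8 df is about 15.51,
# so a statistic below that (P > 0.05) indicates acceptable calibration.
acceptable_calibration = chi2 < 15.51
```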
The interrater variability in using the CARE score was determined by measuring the concordance rate and the κ measure of agreement between the attending anesthesiologists’ and the investigators’ assessments. This analysis was first performed with the data from the entire population. It was then repeated with the data from the reference and validation groups separately, to detect any possible change in the use of the CARE score over time (a learning effect). The discrimination and calibration analyses were also performed for the CARE score ratings by the attending anesthesiologists. To confirm the usefulness and uniqueness of the CARE score, its discrimination in predicting outcome was compared with that of other variables commonly used by clinicians: ASA physical status, New York Heart Association classification for heart failure, left ventricular ejection fraction, age, serum creatinine, operative priority, and type of surgery.
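The two agreement measures named above, raw concordance and Cohen's κ, can be sketched as follows; the six-patient ratings are hypothetical and serve only to show the calculation:

```python
from collections import Counter

def concordance_and_kappa(rater_a, rater_b):
    """Raw concordance rate and Cohen's kappa (agreement beyond chance)
    between two raters assigning the same set of categories."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of the raters' marginal category frequencies.
    p_exp = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    kappa = (p_obs - p_exp) / (1.0 - p_exp)
    return p_obs, kappa

# Hypothetical CARE ratings by two raters for six patients:
investigator = ["1", "2", "3", "3", "4", "5"]
anesthesiologist = ["1", "2", "3", "4", "4", "5"]
p_obs, kappa = concordance_and_kappa(investigator, anesthesiologist)
```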
For all analyses and comparisons, a P value less than 0.05 was used to determine statistical significance.
The patients’ characteristics in the reference group (n = 2,000) and their association with mortality and morbidity are presented in table 3. The mortality and morbidity rates in the reference group were 3.4% and 20.7%, respectively. Comparable patients’ characteristics, mortality (3.4%), and morbidity (22.2%) rates were found in the validation group (n = 1,548). The mortality and morbidity rates were comparable among the eight cardiac surgeons who participated in the study. The mean postoperative LOS was 8.8 ± 11.0 days in the reference population and 9.0 ± 10.3 days in the validation group, with a median of 6 days in both groups. The incidence of prolonged postoperative LOS was 10.2% and 12.3% in the reference and validation groups, respectively. The mortality rate in this study compares very well with values recently obtained through large multicenter databases.28,29 The morbidity rate and the incidence of prolonged postoperative LOS are more difficult to compare with those of other studies because of variations in outcome definitions.
The Cardiac Anesthesia Risk Evaluation Score as a Predictive Risk Model
Table 4 shows the probabilities of mortality, morbidity, and prolonged postoperative LOS associated with each category of the CARE score, as determined from logistic regression analysis in the reference population.
Predictive Performance of the Risk Classifications
In the reference group, all risk classifications had comparable areas under the ROC curves for the prediction of mortality and morbidity (figs. 1 and 2). With all risk classifications, the discrimination for mortality was significantly better than for morbidity. Because age is a risk factor not explicitly taken into account by the CARE score, the discrimination of the CARE score in predicting mortality and morbidity was further tested in various age subgroups. The areas under the ROC curve for the prediction of mortality and morbidity were 0.791 ± 0.067 and 0.740 ± 0.024 in the patients younger than 65 yr of age, 0.763 ± 0.045 and 0.721 ± 0.026 in the patients 65–74 yr of age, and 0.795 ± 0.049 and 0.715 ± 0.031 in the patients 75 yr of age or older, respectively. For prolonged postoperative LOS, the area under the ROC curve was 0.715 ± 0.018 with the CARE score, 0.774 ± 0.016 with the Parsonnet classification (P = 0.05 vs. all other classifications), 0.730 ± 0.019 with the Tuman classification, and 0.730 ± 0.018 with the Tu classification.
In the validation group, the areas under the ROC curve for the prediction of mortality and morbidity were 0.807 ± 0.031 and 0.721 ± 0.016 with the CARE score, 0.804 ± 0.026 and 0.698 ± 0.017 with the Parsonnet classification, 0.823 ± 0.030 and 0.699 ± 0.017 with the Tuman classification, and 0.801 ± 0.032 and 0.688 ± 0.017 with the Tu classification, respectively. A significant difference was found between the CARE score and the Tu classification in predicting morbidity (P = 0.029). For the prediction of prolonged postoperative LOS, the area under the ROC curve was 0.728 ± 0.020 with the CARE score, 0.769 ± 0.017 with the Parsonnet classification (P < 0.05 vs. all other classifications), 0.720 ± 0.020 with the Tuman classification, and 0.741 ± 0.018 with the Tu classification.
The calibration analysis for mortality in the validation group showed an acceptable fit between the observed and expected values for all risk classifications (tables 5–8). For morbidity, an acceptable level of agreement between the observed and expected values was found for the CARE score, the Tuman, and the Tu classifications, but not for the Parsonnet classification (tables 5–8). For prolonged postoperative LOS, all classifications failed the calibration analysis: P = 0.014 with the CARE score, P = 0.036 with the Parsonnet classification, P = 0.012 with the Tuman classification, and P = 0.026 with the Tu classification.
Use of the Cardiac Anesthesia Risk Evaluation Score by Clinicians
An overall concordance rate of 85.1% in CARE score ratings was found between the investigators and the eight participating cardiac anesthesiologists, with a κ value of 0.790 (SE = 0.008; P < 0.001). In the reference group, a concordance rate of 86.3% and a κ value of 0.806 (SE = 0.011; P < 0.001) were found between the two ratings. A comparable concordance rate of 83.6% and a κ value of 0.770 (SE = 0.013; P < 0.001) were found in the validation group, suggesting no significant change over time in the anesthesiologists’ use of the CARE score.
The CARE score used by the attending anesthesiologists had areas under the ROC curve of 0.782 ± 0.028 for mortality, 0.710 ± 0.016 for morbidity, and 0.715 ± 0.018 for prolonged postoperative LOS in the reference group. In the validation group, the areas under the ROC curves were 0.789 ± 0.031 for mortality, 0.721 ± 0.016 for morbidity, and 0.710 ± 0.019 for prolonged postoperative LOS. Those values were not significantly different from those obtained with the CARE score used by the two investigators. The Pearson goodness-of-fit test for the CARE score used by clinicians showed an acceptable fit between the observed and expected rates of mortality (chi-square = 3.056; df = 8; P = 0.931) and morbidity (chi-square = 14.174; df = 8; P = 0.077), but not for prolonged postoperative LOS (P = 0.045).
When compared with other clinical variables used by clinicians in the reference population, the CARE score predicted mortality and morbidity significantly better than any of those markers alone (table 9), confirming its uniqueness as a clinical tool for risk assessment and classification of cardiac surgical patients.
The results of this study show that the CARE score is an accurate predictor of mortality and morbidity after cardiac surgery. Its discrimination and calibration for the prediction of mortality and morbidity compare very well with those of more complex multifactorial risk indexes. The study also suggests that experienced cardiac anesthesiologists can integrate the CARE score into their clinical practice in a consistent manner that will provide accurate predictions of outcome after cardiac surgery.
Many mathematical modeling techniques are available to quantify the risk associated with cardiac surgery. Of those techniques, multiple regression analysis is probably the most commonly used.1–8 With this approach, independent risk factors are identified and entered into a complex equation that expresses the probability of adverse outcomes. For practical reasons, the continuous data derived from this equation are converted into a multifactorial risk index, in which the predictive value of each risk factor is reduced to an integer, and the sum of the integers determines the risk category to which an individual patient belongs. In general, the discrimination provided by those risk indexes, as determined by the area under the ROC curve, is in the range of 0.65–0.85 for mortality, morbidity, or prolonged postoperative LOS in hospital.3–5,7,9–13,15 Thus, most existing multifactorial risk indexes meet the acceptability criteria for risk-adjusted analysis of outcomes in cardiac surgery.
Despite their apparent simplicity, multifactorial risk indexes are difficult to memorize. For example, the Tu classification,3 the simplest model tested in this study, counts six risk factors and 17 point strata from which nine risk categories can be derived. This is an obvious handicap for daily application in the practice of cardiac anesthesia and surgery. The CARE score is a proposed alternative to this cumbersome approach. It is a simple risk ranking system designed for routine use by cardiac anesthesiologists and surgeons, and it allows some clinical judgment within a framework of general concepts related to risk in cardiac surgery. This approach may be appealing to clinicians, especially if it proves to be as good a predictor as the more complicated multifactorial risk indexes.
In this study, the CARE score was submitted to the usual evaluation criteria for predictive risk models in two consecutive cohorts of patients, a reference and a validation group. Two investigators gave a CARE score to each patient and determined their risk category according to the multifactorial indexes developed by Parsonnet et al.,1 Tuman et al.,2 and Tu et al.3 The CARE score predicted postoperative mortality and morbidity with as much accuracy as the multifactorial risk indexes. In fact, only the Parsonnet classification (the most complex model tested in this study) and the CARE score predicted mortality with areas under the ROC curve of 0.80 or more in both cohorts of patients. Furthermore, the CARE score was the only model that provided areas under the ROC curve greater than 0.70 for the prediction of morbidity in both groups of patients. It also provided an acceptable level of agreement between the observed and expected rates of mortality and morbidity in the validation set of patients. For those two outcomes, the fit was not perfect for all the CARE score categories, but none of the multifactorial indexes had a perfect fit for all of its categories either. In the case of the CARE score, poor fits were observed in the 3E and 4E categories, where mortality was underpredicted, and in the 3E category, where morbidity was also underpredicted. The small number of patients and expected outcomes in those categories may explain those results.30
A finding common to all the risk indexes tested in this study was their lower accuracy in predicting morbidity or prolonged postoperative LOS compared with mortality. This difference has also been observed in previous studies.3,5 Those results suggest that some complications defining morbidity and some causes of prolonged postoperative LOS are not well accounted for by the risk factors used by the CARE score or the tested multifactorial risk indexes. Another finding common to the CARE score and the tested multifactorial risk indexes was their poor calibration for the prediction of prolonged postoperative LOS. The reason for this lack of calibration is unclear. The mean and median postoperative LOS were comparable between the reference and validation groups, suggesting that a significant change in practice pattern between the two groups is unlikely to be the cause.
One major objective of this study was to determine the performance of the CARE score when used by cardiac anesthesiologists in their daily practice. As previously observed with the ASA physical status classification,31,32 differences in CARE score ratings were expected between the attending anesthesiologists and the investigators. However, very high agreement between the two ratings was observed, with an overall concordance rate of 85% for the 3,548 studied patients. Consequently, clinicians predicted mortality, morbidity, and prolonged postoperative LOS with almost as much accuracy as the investigators. Because only two ratings were obtained for each patient in this study, the full range of variation in CARE score rating remains undetermined. However, the overall results suggest that the participating anesthesiologists can use the CARE score consistently enough to provide appropriate risk-adjusted analysis of outcome in their institution.
The subjective risk assessment of cardiac surgical patients, using the equivalent of a visual analog score, is another simple alternative to multivariate risk analyses. This approach was recently tested and compared with a multivariate risk model in 1,198 patients from seven centers. All patients were given a subjective risk score of 1–5 by their attending surgeon. This subjective risk model provided an area under the ROC curve of 0.70 for the prediction of mortality, significantly less than the 0.76 obtained with the multivariate risk model in the same population. The subjective risk assessment was accurate in identifying the very low- and very high-risk patients, but inaccurate for those with intermediate risk. In the present study, the CARE score predicted mortality as well as any of the tested multivariate risk models, and it was useful in predicting outcome for patients with intermediate risk levels. Thus, the CARE score may have certain advantages over a purely subjective risk model, although only a comparison of the two methods of risk assessment in the same population could determine the true difference between them.
Following public reporting of cardiac surgical outcomes over the last decade,33–35 patient risk stratification has been suggested to avoid unfair comparisons among individual cardiac surgeons and institutions.1,36,37 In this context, a risk classification like the CARE score may be discredited because it allows some rater subjectivity, which may facilitate risk overrating to obtain better risk-adjusted outcome results. This phenomenon, previously called gaming,38 is also possible with multifactorial risk indexes when certain risk factors are unavailable before surgery (e.g., left ventricular ejection fraction, pulmonary artery pressure) or when their definition is influenced by practice patterns (e.g., urgency, emergency, use of intraaortic balloon pump) or the technique of measurement (e.g., pulmonary artery pressure, ejection fraction).16,38 Furthermore, the use of multifactorial risk indexes usually involves data collection, computer entry, and individual risk calculation by clerks or research assistants. The potential for error in each step of this process may add distortions to risk calculation. Because no predictive model is perfect, there will always be a risk of error when analyzing and comparing risk-adjusted outcomes to determine quality of care.39 Recognizing this limitation, all characteristics of the various models must be considered before selecting one that suits an institution’s or department’s needs for risk calculation. This study does demonstrate, however, that a model does not have to be complex and free of clinicians’ subjective input to provide reliable risk prediction.
One major limitation of this study is that it was performed in a single institution. External validation of the CARE score in other centers will be necessary to confirm its predictive accuracy. Another limitation is that it was tested among a small group of experienced cardiac anesthesiologists only. The predictive performance of the CARE score may differ when used by residents, cardiac surgeons, or intensivists. However, a review of table 1 and the operational criteria in its footnotes suggests that a CARE score can be assigned to most patients with minimal clinical judgment and experience. The research assistant participating in this study (SG) had no major difficulty using the score. This suggests that the accuracy of the CARE score would not be altered significantly by the experience of its user. Further studies will be required to confirm this hypothesis.
Currently, multifactorial risk indexes are used mainly by professional and government bodies that produce annual reports on risk-adjusted outcome 1 or 2 yr after the data collection. Many clinicians feel distant from that process, possibly because of its complexity and financial and time requirements. The CARE score is a model proposed to facilitate data collection and interpretation by busy clinicians involved in cardiac surgery. It can easily be added to patient discharge summaries and used by medical record departments to produce frequent risk-adjusted mortality reports. If shown to be an accurate risk predictor in other institutions, the CARE score may be appealing to many clinicians, mainly because it keeps the whole process of risk evaluation at a clinical level.
The authors thank Mrs. Geraldine Wells, Research Division, Department of Anesthesia, University of Ottawa Heart Institute, Ottawa, Ontario, Canada, for her help in preparing this manuscript, and Ioulia Doumkina, M.D., for reviewing patient charts and assessing the quality of database information.