“[What are] the implications of advocating different risk algorithms on the decision for further diagnostic evaluation and potential interventional strategies[?]”
THE preoperative evaluation of the patient has long been a key function of anesthesiologists. The focus of the evaluation has been on identifying baseline comorbidities that will lead to modification of perioperative care. With respect to specific medical conditions, the evaluation of the patient with cardiac disease was among the earliest areas of focus. Risk stratification indices have been published for more than 40 yr, beginning with the original paper by Goldman et al.1 in 1977. In this issue of the journal, Glance et al.2 discuss the implications of advocating different risk algorithms on the decision for further diagnostic evaluation and potential interventional strategies.
In the area of preoperative evaluation, most guidelines have used a Bayesian approach. Bayes’s theorem states that the probability of an event is based on previous knowledge of conditions that might be related to the event.3 Essentially, comorbidities identified by history are used to calculate a baseline probability of a perioperative event that can then be incorporated into the decision to perform testing. The American College of Cardiology/American Heart Association incorporated several risk indices as potential starting points (prior probabilities) in the 2014 Perioperative Guidelines,4 and determining the agreement among the three different indices is the aim of the article by Glance et al. They compared the Revised Cardiac Risk Index, the American College of Surgeons–National Surgical Quality Improvement Project Risk Calculator, and the Gupta Myocardial Infarction or Cardiac Arrest Risk Index and found that the three prediction models disagreed 29% of the time on which patients were low risk.
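In this Bayesian framing, the risk index supplies the pretest (prior) probability, which subsequent findings such as exercise capacity then revise. As a minimal illustrative sketch (the likelihood ratio LR and the numbers below are hypothetical, not taken from the article), the update can be written in odds form:

$$
\text{pretest odds} = \frac{p_{\text{prior}}}{1 - p_{\text{prior}}}, \qquad
\text{posttest odds} = \text{pretest odds} \times LR, \qquad
p_{\text{post}} = \frac{\text{posttest odds}}{1 + \text{posttest odds}}
$$

For example, a pretest probability of 5% (odds ≈ 0.053) combined with a finding carrying LR = 3 gives posttest odds ≈ 0.158, or a posttest probability of roughly 14%, illustrating how the choice of prior from a given risk index shapes downstream testing decisions.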
When developing a risk index, there is the issue of balancing the complexity and completeness of the included clinical conditions as well as determining the outcome of interest used to build the model. For example, including more clinical or laboratory variables in the construction of the model could increase the sensitivity and specificity for a given outcome but may also increase the complexity of data collection and calculation. The original Cardiac Risk Index as well as the Revised Cardiac Risk Index were created before the routine use of electronic medical records, so manual data entry and calculation were necessary. Smartphone apps have made these calculations easier to perform, but the number of variables that must be hand-entered still affects the efficiency of the preoperative encounter. Within the electronic medical record, algorithms can be included that automatically calculate the probabilities from stored data, and even larger numbers of variables can be used to calculate the risk score.
A second issue is the ability of any model to predict one defined outcome in contrast to a number of outcomes. For example, fewer variables may be needed to predict a perioperative cardiac event than the number of nonoverlapping variables required when multiple outcomes are assessed, such as cardiac and respiratory complications as well as length of stay, as in the American College of Surgeons Risk Calculator.5
Glance et al. demonstrate the implications of these trade-offs in their excellent manuscript, using real patient data from the American College of Surgeons National Surgical Quality Improvement Project. In their study, they sampled the National Surgical Quality Improvement Project database to evaluate those patients who might be considered for further testing based upon the guidelines. Most importantly, they use the Bayesian approach to ask who should not undergo further testing, because additional questions (e.g., exercise capacity) must be answered after calculating a baseline risk to determine the potential need for further diagnostic testing. Their finding that the Revised Cardiac Risk Index has poor agreement with the other two risk indices, both developed using the National Surgical Quality Improvement Project data set, is not surprising given the simplicity of the Revised Cardiac Risk Index algorithm as well as the small patient population upon which it is based. Additionally, because the Revised Cardiac Risk Index was developed earlier than the other two indices, some of its variables may have become less important with changes in perioperative management.
The article by Glance et al. leaves us with the question of whether we should stop using the Revised Cardiac Risk Index in our decision process for further diagnostic testing and risk identification. Clearly, the American College of Surgeons Risk Calculator would be an ideal index for assessing multiple outcomes, but the burden of manually entering the data is significant for routine use. If the coefficients are made public for incorporation into the electronic medical record, then the balance of value and burden may change. The Myocardial Infarction or Cardiac Arrest Risk Calculator offers the advantage of easier calculation but can only be used to estimate risk for the specific events of myocardial infarction or cardiac arrest.
So, where does this leave us with respect to preoperative evaluation and the decision to perform further testing or provide more accurate informed consent? A key question concerns the outcome Glance et al. used for their paper: the prior probability of an event. They could not use the database to determine how that prior probability could be used, within the Bayesian framework of exercise tolerance and further testing decisions, to improve patient outcomes. Additionally, the Myocardial Infarction or Cardiac Arrest and American College of Surgeons Risk Calculators have not been validated for their intended use in the guidelines, and the committee chose to advocate the use of a risk index but did not believe that there was sufficient evidence to advocate one specific index. In a similar fashion, the best assessment of exercise capacity had not been validated, although the recent publication of the Measurement of Exercise Tolerance before Surgery trial demonstrated the value of an objective assessment of exercise capacity using the Duke Activity Status Index compared with subjective physician assessment.6 Of note, that study did not demonstrate incremental value from the additional burden of cardiopulmonary exercise testing for predicting cardiac risk. Therefore, we now have additional information on the best methods to assess the prior probabilities in the American College of Cardiology/American Heart Association preoperative testing algorithm.
In summary, Glance et al. have made an important contribution to the perioperative literature by defining the potential unintended consequences of using different risk indices to establish a prior probability of perioperative adverse cardiac events. Although their study supports the concept that a best-in-class risk model is useful, incorporating these indices into care paradigms such as guideline algorithms must weigh practical utility against burden and how that balance affects the final outcome of interest. As more evidence becomes available to demonstrate that a specific risk calculator used in clinical decision-making results in a better outcome, future guidelines should clearly identify it in the recommendations. Until that time, however, this author believes that the guideline committee, which I had the privilege to chair, made an appropriate recommendation.
Dr. Fleisher was the Chair of the 2014 American College of Cardiology/American Heart Association Guideline on Perioperative Cardiovascular Evaluation and Management of Patients Undergoing Noncardiac Surgery: A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines.