The authors hypothesized that a multiparameter intraoperative decision support system with real-time visualizations may improve processes of care and outcomes.
Electronic health record data were retrospectively compared over a 6-yr period across three groups: experimental cases, in which the decision support system was used for 75% or more of the case duration at the sole discretion of the providers; parallel controls (system used for 74% or less of the case); and historical controls from before system implementation. Inclusion criteria were adults under general anesthesia with advanced medical disease, case duration of 60 min or longer, and hospital length of stay of two days or more. The primary process measures were intraoperative hypotension (minutes of mean arterial pressure below 55 mmHg), ventilator tidal volume greater than 10 ml/kg, and crystalloid administration rate (ml · kg–1 · h–1). The secondary outcome measures were myocardial injury, acute kidney injury, mortality, length of hospital stay, and encounter charges.
A total of 26,769 patients were evaluated: 7,954 experimental cases, 10,933 parallel controls, and 7,882 historical controls. Comparing experimental cases to parallel controls with propensity score adjustment, the data demonstrated the following medians, interquartile ranges, and effect sizes: hypotension 1 (0 to 5) versus 1 (0 to 5) min, P < 0.001, beta = –0.19; crystalloid administration 5.88 ml · kg–1 · h–1 (4.18 to 8.18) versus 6.17 (4.32 to 8.79), P < 0.001, beta = –0.03; tidal volume greater than 10 ml/kg 28% versus 37%, P < 0.001, adjusted odds ratio 0.65 (0.53 to 0.80); encounter charges $65,770 ($41,237 to $123,869) versus $69,373 ($42,101 to $132,817), P < 0.001, beta = –0.003. The secondary clinical outcome measures were not significantly affected.
The use of an intraoperative decision support system was associated with improved process measures, but not postoperative clinical outcomes.
The extent to which intraoperative decision support systems guide care and improve outcomes remains unclear.
The authors compared a novel decision support system to a historical control group and to a matched (nonrandomized) contemporaneous control group.
Most improvements were time-dependent. Decision support was associated with improved process-of-care measures compared to contemporaneous control patients, but not with improved clinical outcomes.
Decision support systems should be formally evaluated because the extent to which they will enhance patient care is not obvious.
WIDE variation in surgical outcomes persists across hospitals and countries despite decades of research and quality improvement efforts.1,2 Strategies such as team training and checklists, first demonstrated to decrease adverse events in aerospace and aviation, have now been incorporated into health care.3,4 In addition, the electronic health record (EHR), novel diagnostic technology, and minimally invasive surgical options should theoretically improve patient outcomes. However, the advent of new data streams has created a new challenge: “alarm fatigue,” which may result in harm due to the failure to recognize actionable patient deterioration.5
Concurrently, observational studies of large databases and prospective trials have found associations between intraoperative physiologic management and postoperative outcomes.6–8 Specifically, reduced episodes of intraoperative hypotension, improved ventilation management, tighter glucose control, and more restrictive fluid management have all been associated with improved postoperative outcomes.6–17 Management of these interventions in the hyperacute intraoperative setting requires a single anesthesiology provider to integrate second-to-second changes in more than 40 distinct physiologic parameters in the context of specific patient comorbidities and the dynamic surgical insult itself. However, there are mixed data regarding the value of alert systems in demonstrating a measurable change in clinical decision-making or patient outcomes.4,14,18,19 Unsuccessful trials have relied on basic alphanumeric paging and EHR-based screen pop-ups, and have been limited to blood pressure or brain function monitoring data.4,18 There are promising results for improved management of hyperglycemia.14,19
It has been suggested that anesthesia adopt tactics established in the aviation industry, such as multifunction displays and decision support systems, which integrate across the disparate data sources, devices, and contexts to highlight and recommend specific interventions.4,19–21 An intraoperative multiparameter decision support system with novel real-time visualizations for anesthesia care has received US Food and Drug Administration (FDA) clearance.22 With real-time data extracted from physiologic monitors, EHRs, and peer-reviewed literature transformed into a readily understandable schematic view of organ systems, the system integrates multiple streams of data and knowledge into a single user interface (fig. 1).22 We sought to determine whether implementation of an integrated decision support display was associated with: (1) improved adherence to care processes; and (2) reduced postoperative complications and mortality.
Materials and Methods
Institutional Review Board approval was obtained for this retrospective analysis (University of Michigan, Ann Arbor, Michigan); patient consent was waived because no direct patient identifiers were collected or used during the conduct of this study. The study protocol was reviewed, approved, and registered by the Department of Anesthesiology’s Anesthesia Clinical Research Committee before data extraction or analysis. The patient inclusion, exclusion, primary outcome, and proposed statistical analysis plan were prespecified and included in this protocol. The study is reported in consultation with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations.23
AlertWatch OR (AlertWatch; Ann Arbor, Michigan), an FDA-cleared, multifunction decision support system, displays real-time data extracted from physiologic monitors and the EHR in a readily identifiable schematic view of organ systems (fig. 1). The system extracts, analyzes, and presents more than 250 pieces of information, providing a “live” organ system view with a beating heart, expanding aortic arch, and ventilating lungs.18,22 The display is color coded to indicate normal range (green), borderline abnormal range (yellow), and abnormal range (red) for the data related to each organ system or lab value. Organs or systems with underlying comorbidities are outlined in orange (fig. 1). The display also has 48 digital text alerts and two audible alerts (both related to blood pressure). The text alerts, which appear in the upper right alert section of the display, come in three hierarchies: black text for general information, red text for important information, and scrolling red text for information that should be addressed immediately (fig. 1). Multiple calculations are conducted in the background for clinical purposes.18,22 When the mean arterial pressure drops below 55 mmHg, the system provides an audible alert (a triple beep with a decreasing tone), the aortic arch turns red, and a scrolling red text alert noting the actual mean pressure is displayed in the upper right alert section of the display (fig. 1). The aortic arch turns yellow as a warning when the mean arterial pressure decreases to 60 mmHg or less. Similar alerts are included for intraoperative tidal volume in ml/kg ideal body weight and fluid balance (incorporating fasting duration, insensible losses, fluid administration, and blood loss; fig. 2).
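The blood pressure thresholds described above can be sketched as a simple mapping from mean arterial pressure to display state. This is a purely illustrative sketch of the logic as described in the text, not the vendor's implementation; the function name and return convention are our own.

```python
def map_alert(mean_arterial_pressure_mmhg: float) -> tuple[str, bool]:
    """Illustrative mapping of mean arterial pressure to the aortic arch
    display color and an audible-alert flag, following the thresholds
    described in the text: yellow warning at 60 mmHg or less, red plus
    an audible alert (triple beep, decreasing tone) below 55 mmHg."""
    if mean_arterial_pressure_mmhg < 55.0:
        return ("red", True)      # scrolling red text alert + audible alert
    if mean_arterial_pressure_mmhg <= 60.0:
        return ("yellow", False)  # aortic arch turns yellow as a warning
    return ("green", False)       # normal range
```

The yellow band gives the clinician an early warning before the critical 55-mmHg threshold is crossed, which the Discussion suggests may encourage earlier treatment of impending hypotension.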
Patient Population and Study Groups
The analysis was divided into two periods: a historical control period, defined as the 22-month period before initiation of the decision support system (July 1, 2010 to April 30, 2012), and a parallel control period during which the decision support system was available (July 1, 2012 to June 30, 2016). The two months between May 1, 2012 and June 30, 2012 were an implementation transition period while supplemental screens were installed on the anesthesia workstations to display the decision support system. Patients aged 18 yr or older with advanced comorbid medical disease (defined as American Society of Anesthesiologists Physical Status III or IV documentation) undergoing non–liver transplant surgical procedures requiring general anesthesia with a duration greater than 1 h and a postoperative hospital length of stay of two days or more were included. Because cardiac, thoracic, and vascular procedures are performed at a distinct facility, these patients were not included. Patients with a preoperative mean arterial pressure less than 55 mmHg were excluded. A few intrathoracic procedures performed as part of multispecialty surgery were performed at the study location and included in the analytic dataset.
The treatment group was defined as the decision support system being viewed for 75% or more of the duration of the case. The use of this Web-based display was at the sole discretion of the anesthesia providers. In the primary intention-to-treat analysis, a case in which the decision support system was used less than 75% of case duration was considered a parallel control. Historical controls were cases before implementation of the system. These groups were retrospectively assigned during the analysis phase of the study.
The coprimary outcomes were three process measures: hypotension (the number of minutes the patient experienced a mean arterial pressure below 55 mmHg),7 inappropriate ventilation (intraoperative mechanical ventilation with a median tidal volume greater than 10 ml/kg ideal body weight for patients with an ideal body weight of 50 kg or less), and fluid resuscitation rate (crystalloid infused in ml · kg–1 · h–1 in patients who had an estimated blood loss of less than 500 ml and did not receive a blood transfusion). We chose to compare fluid resuscitation in patients with less than 500 ml blood loss because of the high degree of variability in accuracy of estimated blood loss, especially in cases with large blood loss.24,25 The inappropriate ventilation patient population was chosen because previous literature showed that female patients, or those with short stature, remain at risk for excessively large intraoperative tidal volumes, and our default ventilator tidal volume setting (500 ml) is appropriate for patients greater than 50 kg.26
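The three process measures above reduce to simple per-case computations. The sketch below is our own illustration of those definitions (function names and the one-sample-per-minute assumption for blood pressure are ours, not the study's data pipeline).

```python
def hypotension_minutes(map_series_mmhg: list[float],
                        threshold: float = 55.0) -> int:
    """Minutes of intraoperative hypotension: count of minutes with mean
    arterial pressure below the threshold, assuming one MAP sample per
    minute (an illustrative simplification of the study's data)."""
    return sum(1 for m in map_series_mmhg if m < threshold)

def excessive_tidal_volume(median_vt_ml: float, ibw_kg: float) -> bool:
    """Inappropriate ventilation flag: median tidal volume greater than
    10 ml/kg ideal body weight."""
    return median_vt_ml / ibw_kg > 10.0

def crystalloid_rate(total_ml: float, weight_kg: float,
                     duration_h: float) -> float:
    """Fluid resuscitation rate in ml per kg per hour."""
    return total_ml / (weight_kg * duration_h)
```

For example, a 500-ml default tidal volume yields roughly 11.1 ml/kg for a 45-kg ideal body weight patient (flagged) but 8.3 ml/kg for a 60-kg patient (not flagged), which is why short-stature patients are the at-risk group the authors highlight.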
The secondary clinical outcome measures were myocardial injury after noncardiac surgery (defined as a postoperative troponin greater than or equal to 0.3 µg/l [upper limit of normal = 0.3 µg/l]) and postoperative acute kidney injury (AKI), defined as Kidney Disease: Improving Global Outcomes criteria stage 1 or 2, within 7 days after surgery.27 Patients with a preoperative creatinine greater than 3 mg/dl, or patients missing creatinine, were excluded from the AKI analysis. In addition, hospital length of stay in days, hospital charges, and 30-day all-cause in-hospital mortality were compared. Hospital charges were not specified in the original protocol because the data were not available at the time of proposal review; they were included as a secondary outcome.
Statistical Analysis

Three distinct analyses were performed: a combined analysis of AlertWatch versus historical and parallel controls together; AlertWatch versus parallel controls; and AlertWatch versus historical controls. Covariates considered for risk and selection bias adjustment are delineated in table 1. For analysis of hypotension, inappropriate ventilation, fluid resuscitation, myocardial injury, and AKI, we used male sex, age in decades of life (reference age group: 18 to 30 yr), World Health Organization body mass index categories (reference group: normal body mass index), log-transformed surgical duration in minutes, inexperienced provider (defined as clinical anesthesia first-year resident), in-room provider type (certified registered nurse anesthetist vs. resident), surgical urgency (emergency vs. nonemergency), and individual clinical comorbidities as covariates. Each clinical comorbidity was identified by extracting clinical and administrative diagnoses from our EHR and enterprise research data warehouse. They were grouped into specific comorbidities (e.g., diabetes with complications vs. diabetes without complications) using previously published and validated Elixhauser comorbidity definitions for use with International Classification of Diseases, Ninth and Tenth Revisions.28 Surgical duration was log-transformed due to its nonnormal distribution. In addition, we incorporated procedure-specific risk using a categorical variable of the anesthesiology base Current Procedural Terminology (CPT) code. Each anesthesia base CPT code (275 distinct codes) was collapsed into one of 18 distinct groups reflecting similar procedural invasiveness and body area. For example, a colostomy takedown with re-anastomosis and a pancreatectomy would be represented in different categorical risk groups in our analysis (Supplemental Digital Content 1, http://links.lww.com/ALN/B588, a table listing all CPT categories used).
For 30-day mortality and length of stay, the clinical comorbidities and CPT category covariates were replaced by the Risk Stratification Index,29 which has been validated for comparison across patients and hospitals for these outcomes. The Risk Stratification Index incorporates procedure- and patient-specific risk using discharge procedure and diagnoses codes, and is derived from a national dataset of more than 35 million patients.29
For the parallel control analysis, we used a two-stage modeling process. First, the aforementioned covariates were used to derive a propensity score that predicted the use of AlertWatch. We assumed that the decision to use AlertWatch was not distributed randomly across cases, and some underlying variation in patient or procedural risk was likely present. This propensity score reflects the probability that a patient would be provided care using AlertWatch and improves the ability to address provider selection bias in choosing to use AlertWatch for more or less complex patients and procedures. Next, this propensity score was used as a covariate in a multivariable model combined with AlertWatch use to model the dependent outcome variables. Use of a propensity score via covariate adjustment helps to address underlying selection bias when evaluating the impact of nonrandomly assigned treatments, such as the use of AlertWatch.30
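The two-stage approach can be illustrated with a minimal pure-Python sketch: stage 1 fits a logistic model of treatment assignment on the covariates to obtain each case's propensity score, and stage 2 carries that score into the outcome model as an additional covariate. The study's actual models were fit with standard statistical software; `fit_logistic`, `propensity_scores`, the gradient-descent fitting, and the toy data below are entirely our own illustration.

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Minimal logistic regression by batch gradient descent (pure Python),
    used only to make the two-stage idea concrete."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * p, 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi  # predicted minus observed
            for j in range(p):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def propensity_scores(X, treated):
    """Stage 1: fitted probability of treatment (here, AlertWatch use)
    given the covariates."""
    w, b = fit_logistic(X, treated)
    return [1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))
            for xi in X]

def with_propensity_covariate(X, scores):
    """Stage 2 design matrix: original covariates plus the propensity
    score, ready for the multivariable outcome model."""
    return [list(xi) + [s] for xi, s in zip(X, scores)]
```

In a toy dataset where cases with a larger covariate value are more often treated, the fitted scores are higher for the treated group, which is exactly the imbalance the stage-2 adjustment is meant to absorb.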
For the historical control and overall combined analyses, a propensity score is not applicable because all historical control patients have zero probability of AlertWatch use; the decision support tool was not available for use before its implementation. In lieu of a propensity score, the covariates described above and delineated in table 1 were used individually in multivariable modeling. Across all analyses, a generalized linear model using Poisson distribution was employed for continuous outcomes and a logistic regression model was used for dichotomous outcomes. C-statistics and Akaike information criterion were used to evaluate the discriminating capacity of logistic and linear models, respectively. Bonferroni correction for multiple testing of three coprimary outcomes was performed to establish the statistical significance threshold: P = 0.05 / 3 = 0.0167.
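The Bonferroni correction used for the three coprimary outcomes is straightforward to express; the sketch below simply restates the study's stated threshold (the helper names are ours).

```python
def bonferroni_threshold(alpha: float, n_tests: int) -> float:
    """Per-comparison significance threshold under Bonferroni correction:
    family-wise alpha divided by the number of tests."""
    return alpha / n_tests

def significant(p_value: float, alpha: float = 0.05, n_tests: int = 3) -> bool:
    """Is a coprimary outcome significant at the corrected threshold?
    With three coprimary outcomes, the threshold is 0.05 / 3 = 0.0167."""
    return p_value < bonferroni_threshold(alpha, n_tests)
```

Under this correction, a nominal P = 0.03 would not be declared significant for a coprimary outcome, while P < 0.001 (as reported for all three process measures) clearly would.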
In a post hoc sensitivity analysis, calendar year was included as a covariate in the parallel control analysis to assess the possible impact of “learning” over time, as AlertWatch-based clinical guidelines impacted the care of patients even when AlertWatch was not being used.18 In addition, post hoc sensitivity analyses of three distinct provider subgroups were performed: certified registered nurse anesthetists, residents, and clinical anesthesia first-year residents only. Finally, a post hoc sensitivity analysis was performed to evaluate whether the primary definition of “AlertWatch usage” (75% or more of case duration) was sensitive to alternative thresholds (greater than or equal to 60%, 70%, 80%, or 90%) or to defining “AlertWatch parallel control” as 0% of case duration usage.
Results

A total of 26,769 patients were included in the final dataset: 7,954 AlertWatch cases, 10,933 parallel controls, and 7,882 historical controls (fig. 3). Overall, the studied populations demonstrated a high comorbidity burden (table 1) and relatively high rates of postoperative troponin (12% or greater for all groups) and creatinine measurement (94% or greater for all groups), consistent with their high risk (table 2).
Robust multivariable adjustment demonstrated that in both the overall combined analysis (across all three groups) and the parallel control analysis (using propensity score covariate adjustment), all three coprimary process measures showed statistically significant improvement (tables 2 and 3). C-statistics for the combined analysis multivariable logistic regressions varied from 0.70 (stage 1 AKI) to 0.85 (mortality), indicating satisfactory model discriminating capacity consistent with previously published literature such as the Risk Stratification Index.29 Hypotension demonstrated a statistically significantly lower risk-adjusted duration for AlertWatch cases than for parallel and historical controls (beta coefficient –0.29; P < 0.001; 95% CI, –0.30 to –0.27; table 2), although the median duration was clinically similar across all groups (1 to 2 min). AlertWatch patients demonstrated a statistically and clinically significant lower percentage of cases in which the median tidal volume was greater than 10 ml/kg ideal body weight (28% AlertWatch vs. 37% parallel control and 57% historical control [adjusted odds ratio 0.39; P < 0.001; 95% CI, 0.32 to 0.47]) and a lower median crystalloid administration rate (5.88 ml · kg–1 · h–1 AlertWatch vs. 6.17 parallel control and 7.40 historical control [beta coefficient –0.09; P < 0.001; 95% CI, –0.10 to –0.07]; table 2).
In the overall combined analysis (table 2), AlertWatch use was found to be an independent predictor and protective against postoperative myocardial injury (adjusted odds ratio 0.68; P < 0.001; 95% CI, 0.55 to 0.84) and associated with shorter hospital length of stay (beta coefficient –0.05; P < 0.001; 95% CI, –0.06 to –0.04), but did not demonstrate statistically significant improvement in stage 1 or 2 AKI or mortality (table 2). When compared to historical controls (table 3), AlertWatch use was found to be an independent predictor and protective against the secondary outcomes of postoperative myocardial injury (adjusted odds ratio 0.54; 95% CI, 0.42 to 0.69), stage 1 AKI (adjusted odds ratio 0.86; 95% CI, 0.78 to 0.94), stage 2 AKI (adjusted odds ratio 0.77; 95% CI, 0.63 to 0.95), mortality (adjusted odds ratio 0.77; 95% CI, 0.61 to 0.98), and hospital length of stay (beta –0.12; 95% CI, –0.13 to –0.11; table 3). However, when compared to parallel controls using propensity score covariate adjustment, AlertWatch no longer demonstrated a significant impact on postoperative myocardial injury, AKI, mortality, or hospital length of stay (table 3). The median encounter charges for parallel controls were $69,373 (25th to 75th percentile, $42,101 to $132,817), while the median encounter charges for AlertWatch cases were $65,770 ($41,237 to $123,869), a statistically significant decrease (beta coefficient –0.003; P < 0.001; 95% CI, –0.003 to –0.003).
A post hoc sensitivity analysis incorporating calendar year of procedure into the propensity score adjusted parallel control analysis demonstrated an attenuation of all process and outcome measure effects, with only minutes of hypotension retaining significance. Additional post hoc sensitivity analyses comparing AlertWatch cases to parallel controls (propensity score adjusted), evaluating each in-room provider type as a subgroup, demonstrated results similar to the primary analysis. For certified registered nurse anesthetists or residents overall, process-of-care measures were reliably improved compared to historical and parallel controls, with statistical significance and similar effect sizes. However, for clinical anesthesia first-year residents, only hypotension met the Bonferroni-corrected statistical significance threshold of P < 0.0167 (Supplemental Digital Content 2 [http://links.lww.com/ALN/B589], 3 [http://links.lww.com/ALN/B590], and 4 [http://links.lww.com/ALN/B591], tables listing analysis specific to these provider subgroups). A post hoc sensitivity analysis evaluating different AlertWatch usage definitions—greater than or equal to 60%, 70%, 80%, or 90% of case duration—demonstrated the same statistically significant process and outcome measure associations with AlertWatch usage as the primary analysis definition (greater than or equal to 75% of case duration). A final post hoc sensitivity analysis defining “parallel control” as cases in which AlertWatch was used for 0% of case duration demonstrated that use of AlertWatch was associated not only with improvements in hypotension, lower fluid administration, improved tidal volume management, and lower encounter charges, but also with a shorter median length of stay (Supplemental Digital Content 5, http://links.lww.com/ALN/B592, a table describing the results of this sensitivity analysis).
Discussion

Our implementation of a novel decision support system, including real-time visualizations, was associated with a risk-adjusted improvement in process-of-care measures among high-risk patients undergoing major inpatient surgery. We did not observe an effect on postoperative clinical outcomes or length of hospital stay, although a slight decrease in encounter charges was noted. The impact upon the studied processes of care (hypotension, inappropriate ventilation, fluid resuscitation) was robust across multiple analyses: comparison to combined historical and parallel controls and to propensity score adjusted parallel controls. Previous efforts at modeling healthcare improvement on other high-risk, data-driven industries, such as aviation, have typically disappointed. The current analysis provides encouraging data regarding the potential for intraoperative anesthesiology decision support tools integrating EHR and real-time physiologic data in the hyperacute operating room environment.
The aviation industry has been a leader in applying technology to improve quality and safety.31 Starting in the 1970s, a new method of integrating increasingly complex flight data into a display was developed, referred to as the “glass cockpit.”32 The purpose of this display was to present information to the pilot in a more usable format, enabling rapid interpretation while flying the aircraft, especially during acute situations. The adoption of the “glass cockpit” was associated with a significant improvement in safety. The intraoperative anesthesiologist’s “cockpit” and the development of safety monitoring technology have paralleled those of the aviation industry, beginning with advanced monitoring standards and progressing to emergency protocols, team training, high-fidelity simulators, and checklists.3,33,34
The analyses of the data do not demonstrate a reliable association between use of the decision support tool and postoperative outcomes, although a small decrease in encounter charges was noted in the primary analysis, as well as both encounter charges and length of stay in a sensitivity analysis. Encounter charges in the parallel control analysis (propensity score adjusted) do demonstrate a statistically significant decrease in costs associated with AlertWatch use ($65,770 [25th to 75th percentile, $41,237 to $123,869] AlertWatch use vs. $69,373 [$42,101 to $132,817] parallel controls; P < 0.001; beta coefficient –0.003). However, no other clinical or resource utilization outcomes studied were found to be statistically different between AlertWatch and parallel controls. The overall combined analysis and historical control analysis did note differences in some postoperative outcomes (tables 2 and 3), but the validity of this observation is questionable given the inherent limitations of before-and-after analyses and the absence of significance in the parallel control analysis. The impact of underlying practice change over time was also highlighted by the fact that the effect size of any process-of-care improvement was attenuated when compared to parallel controls, rather than to historical controls (table 3).
It is also unclear why improvements in process-of-care measures did not yield measurable improvements in the studied postoperative outcomes of AKI, myocardial injury, or mortality. Although intraoperative hypotension, inappropriate lung ventilation, and excessive fluid administration are all associated with postoperative complications, the multitude of care processes (preoperative, intraoperative, and postoperative) not impacted by an intraoperative decision support system may have overwhelmed any possible beneficial value to impacting these three processes of care. It is also possible that postoperative outcomes that were not evaluated in this study, such as pulmonary complications or surgical site infections, were impacted, but not observed as part of this analysis. Previous literature has demonstrated the value of decision support systems to improve not only glycemic management processes of care, but also rates of surgical site infections.14,19
Other efforts have failed to demonstrate the value of automated alerts to prevent hypotension.4,18 Panjasawatwong et al. investigated the use of a visual and paging alert for hypotension in a prospective randomized trial.4 This study of more than 3,000 patients had disappointing results; the alert did not statistically significantly improve the management of blood pressure or reduce hospital length of stay.4 McCormick et al. recently evaluated an alert for low blood pressure and low bispectral index. Although they were able to observe a small decrease in the duration of the so-called “double low,” this effect waned during the conduct of the study, and they did not observe a difference in the primary outcome, 90-day mortality.18 Unlike the systems evaluated in those studies, the decision support system evaluated in the current study includes alerts designed to provide warning before critical thresholds are crossed. For example, the display provides a primary visual alert when blood pressure drops into the “yellow range” (mean arterial pressure less than 60 mmHg) by having the aortic arch turn yellow, thereby warning the clinician that the patient is approaching the critical blood pressure threshold of 55 mmHg. This may have caused providers to treat hypotension more aggressively. Although the absolute, unadjusted minutes of mean arterial pressure less than 55 mmHg were clinically similar across groups, multivariable risk-adjusted analysis showed a statistically significant impact (table 2). The clinical significance of this finding, and any associated impact at other blood pressure levels, warrants further study.
A protective lung ventilation bundle that includes the use of intraoperative tidal volumes less than 10 ml/kg ideal body weight has been demonstrated to decrease postoperative pulmonary complications in both the operative and critical care literature, although some controversy remains.11,35,36 The decision support system determines whether the tidal volume is greater than 10 ml/kg ideal body weight. If so, a text alert recommends a change to a specific tidal volume range corresponding to 6 to 8 ml/kg ideal body weight.11 The control groups had a greater percentage of cases with tidal volumes outside the recommended range, especially the historical controls. Any tactic that may contribute to a decreased incidence of postoperative pulmonary complications would have a significant impact on costs and length of stay.37
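The alert-and-recommend logic just described can be sketched directly from the two stated thresholds; the function names are illustrative, and the system's actual recommendation text may differ in form.

```python
def vt_alert_needed(vt_ml: float, ibw_kg: float) -> bool:
    """True when the set tidal volume exceeds 10 ml/kg ideal body weight,
    the threshold at which the system raises its text alert."""
    return vt_ml / ibw_kg > 10.0

def recommended_vt_range_ml(ibw_kg: float) -> tuple[float, float]:
    """Lung-protective tidal volume range of 6 to 8 ml/kg ideal body
    weight, matching the range recommended by the system's alert."""
    return (6.0 * ibw_kg, 8.0 * ibw_kg)
```

For a 50-kg ideal body weight patient, the recommended range is 300 to 400 ml, well below the 500-ml default ventilator setting noted in the Methods.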
Restrictive fluid administration during the intraoperative period is a nearly universal element of modern enhanced recovery after surgery protocols.13 Although controversy exists regarding the value of “goal-directed” fluid therapy and the definition of “restrictive” versus “liberal” thresholds, epidemiologic, meta-analysis, and randomized controlled trial data continue to demonstrate that our historical fluid administration practices were likely excessive and that lower fluid balance may be associated with improved postoperative bowel function, wound healing, and pulmonary function across surgical specialties.15–17,38 It is interesting to note that the AlertWatch group demonstrated not only less crystalloid in ml · kg–1 · h–1 in both the historical and parallel control comparisons, but also lower variability of fluid resuscitation. We feel this implies tighter crystalloid control, achieved by targeting the amount of crystalloid based on the constant presentation of the input and output calculation and/or the invasive objective measures of systolic pressure variation and central venous pressure (table 2; figs. 1 and 2).
There are several significant limitations to this study. First, it is an observational study; therefore, by definition, it can demonstrate only associations, not cause and effect. Second, the two control groups, a historical control and a parallel control, each have limitations: the historical control cannot account for the improvement of care over time, while the parallel control group is subject to selection bias. It is possible that individual practitioners’ experience levels may have affected the results; we mitigated this bias by including experience level in the risk adjustment. In addition, due to within-provider cluster size limitations, we were unable to employ a multilevel fixed and random effects model to control for individual provider effects. Importantly, our results and conclusions are limited to a specific subset of patients: those with advanced medical disease undergoing major surgical procedures requiring general anesthesia and inpatient care. Next, we were unable to collect reliable pulmonary complication data given the nuances of a pneumonia, atelectasis, or pulmonary edema diagnosis. In addition, although the AlertWatch exposure group was defined as use of the decision support application for 75% or more of case duration, it is entirely possible that the clinician was ignoring the application alerts, or had the display covered by another clinical application, rendering the visual alerts hidden; this would bias toward the null hypothesis of no impact on process-of-care or postoperative outcome measures. Another limitation of this observational study is that troponin and creatinine data are available only for patients from whom those values were collected, so some patients with organ injury may have been missed.
We feel this limitation would cause the data to underestimate the true incidence of organ injury rate in both the treatment and control groups in a similar fashion, and therefore would not affect the conclusions of the study. Next, the post hoc sensitivity analysis incorporating the year of procedure demonstrates that provider learning and underlying practice improvement may eventually supersede the incremental impact of decision support system use.
Overall, the results of this study demonstrate that a decision support system employed during anesthesia care is reliably associated with an improvement in multiple process measures. Given the increasing availability of patient care data, promulgation of practice guidelines, and the need to implement those guidelines acutely at the bedside in patients with dynamic clinical status, the current trend of more devices with auditory, static high/low threshold alarms may not be the only answer.
All funding was from the Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, Michigan.
Dr. Tremper is the founder of, and has an equity interest in, AlertWatch (Ann Arbor, Michigan), the company that developed the decision support system being evaluated.