Sesh Mudumbai, MD, MS, Committee on Electronic Media and Information Technology, and Assistant Professor, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine; Medical Director for VISN 21 Clinical Informatics Systems and Perioperative Analytics, VA; and Staff Anesthesiologist, VA Palo Alto Health Care System.

Mark Banoub, MD, MBA, CPE, FASA, FAAPL, Committee on Electronic Media and Information Technology, and Senior Staff Anesthesiologist, Henry Ford Medical Group, Michigan.

Ori Gottlieb, MD, FASA, Committee on Electronic Media and Information Technology, and Medical Director of Informatics and Associate Professor of Anesthesiology, Pritzker School of Medicine, University of Chicago. He is President, Illinois Society of Anesthesiologists.

Samir Kendale, MD, FASA, Committee on Electronic Media and Information Technology, and Assistant Professor of Anesthesiology, NYU Langone Health, New York, New York.

Scenario: A 74-year-old African American woman presents for a preoperative evaluation in preparation for a Whipple procedure. Her past medical history includes pancreatic cancer and mild dementia. You fire up your newly installed decision support system (DSS), the latest in a series of tools developed by Fancy Artificial Intelligence (AI) Corporation. It boasts human-like conversation capabilities that combine genomic and electronic health record data gathered from 100,000 patients with state-of-the-art algorithms that deliver recommendations within seconds. The program recommends against surgery. Over the next several months, you hear that a group of hospitals is planning to remove the AI program from their systems. Numerous patients are also threatening to sue because of seemingly arbitrary recommendations to deny surgery. Faced with a potential lawsuit, the company discloses that its training data were biased, apologizes, undergoes “algorithm sensitivity training,” and removes the biased recommendations.

Statements about how AI will change the world, either making us all superhuman or replacing us, seem to appear daily. The first wave of desired health care applications includes revolutionizing how we diagnose patients and manage medications. AI is already used to evaluate mammograms for breast cancer and to analyze pathology specimens. As with any tool intended for good, ethically questionable uses of AI have surfaced, including judicial sentencing, hiring discrimination (Amazon), and surveillance with facial recognition (asamonitor.pub/38Wz8Kq).

A starting point for evaluating these competing concerns is a good working definition. Almost 70 years ago, the field's founding document, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” defined AI as “the science of making machines do things that would require intelligence if done by people” (asamonitor.pub/3gX3Eqf). Substantial progress in making intelligent machines has occurred since then, particularly with the advent of high-end computing resources and data storage. A key part of AI is machine learning: the development of computer programs that can access data and use it to learn for themselves.
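
To make the definition concrete, below is a minimal, purely illustrative sketch of machine learning in Python: instead of being given explicit rules, the program infers a decision boundary from labeled examples. The features, labels, and model choice are all fabricated for illustration and carry no clinical validity.

```python
# A minimal sketch of supervised machine learning: the program is not given
# explicit rules; it infers a relationship from labeled examples.
# The "patients" here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fabricated features: [age, systolic blood pressure]; label: 1 = complication.
X = rng.normal(loc=[65, 130], scale=[10, 15], size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 10, 500) > 135).astype(int)

model = LogisticRegression().fit(X, y)          # "learn" from the data
print(model.predict_proba([[74, 150]])[0, 1])   # predicted risk for a new case
```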

Current state of AI in health care and the O.R.

Most organizations are still in the early implementation phases of using AI and machine learning to improve operational performance and patient outcomes:

  • AI has been used successfully to assign propensity-to-pay risk scores and to improve revenue cycle management (asamonitor.pub/3ftFseU). Computer-assisted coding uses AI to mine clinical notes, in addition to structured data in the electronic medical record, to capture the correct diagnosis and CPT procedure codes, improving billing capture and regulatory compliance.

  • Radiology and ophthalmology are two specialties noted for early AI adoption. Deep neural networks enable computer analysis of medical images, yielding expert-level performance in diagnosing chest X-rays, detecting tumors in mammograms, and screening for diabetic retinopathy (Nat Med 2019;25:44-56).

  • Machine learning techniques have been developed and tested for predicting hypotension during anesthesia from high-fidelity arterial line waveforms (Anesthesiology 2018;129:663-74); a simplified sketch of this kind of model follows this list.
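
The sketch below trains a classifier on hypothetical waveform-derived features (mean arterial pressure, pulse pressure, dP/dt max, heart rate) to predict impending hypotension. It is a hedged illustration of the general approach only: the data are simulated, and the published model relied on far richer arterial waveform features.

```python
# A hedged sketch of the hypotension-prediction idea: train a classifier on
# features derived from arterial-line waveforms to predict hypotension minutes
# ahead. Feature values and labels are fabricated for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000

# Hypothetical features summarized over a 30-second waveform window.
X = np.column_stack([
    rng.normal(75, 12, n),    # mean arterial pressure (mmHg)
    rng.normal(45, 10, n),    # pulse pressure (mmHg)
    rng.normal(900, 200, n),  # dP/dt max (mmHg/s)
    rng.normal(70, 12, n),    # heart rate (bpm)
])
# Fabricated label: hypotension (MAP < 65) within the next 5 minutes.
logit = -0.12 * (X[:, 0] - 75) - 0.004 * (X[:, 2] - 900)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```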

Future AI development

Current development efforts provide a guide to what a health care future might look like with AI. Tools for routine perioperative and ICU patient care are part of a more distant future, but O.R. throughput management, documentation and quality reporting could arrive sooner.

Clinical applications will likely fall into several broad categories, including:

  • Planning: ASA Physical Status Classification and preoperative risk assessment; predictive algorithms for early sepsis detection and clinical deterioration warnings.

  • Intraoperative event prediction and management: Hypotension or bradycardia; depth of anesthesia and EEG processing; closed-loop control of anesthesia delivery; closed-loop vasopressor administration; assisting in the performance of ultrasound-based procedures; response to opioid therapy.

  • Longer-term prediction: Mortality or morbidity (i.e., acute kidney injury); emergency room admissions; hospital readmissions.

  • Efficiency and operations: Optimizing O.R. workflow by providing data-driven recommendations about which patients to prioritize; hospital staff scheduling (e.g., properly forecasting inpatient or ED surges); a toy forecasting sketch follows this list.
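
As a hedged illustration of the forecasting piece, the sketch below predicts the next day's ED census from the average of prior same-weekday observations. The census series is fabricated, and a production system would use a far richer model and real data.

```python
# A toy sketch of demand forecasting for staff scheduling: predict tomorrow's
# ED census from a weekly seasonal average. The census series is fabricated.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(120)
# Fabricated daily ED census with a weekly cycle (busier early in the week).
census = 180 + 20 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 8, 120)

def seasonal_forecast(history, period=7):
    """Forecast the next day as the mean of prior same-weekday observations."""
    next_day = len(history)
    same_weekday = history[next_day % period::period]
    return same_weekday.mean()

forecast = seasonal_forecast(census)
print(f"Forecast census for day {len(census)}: {forecast:.0f} patients")
```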

Addressing ethical and medicolegal concerns

There is mounting concern among key stakeholders, including physicians, developers, and regulators, that while AI could deliver enormous benefits to health care, substantial ethical and medicolegal concerns exist (JAMA November 2019; JAMA October 2019). As with our first fictional scenario, biased data can lead to biased models and biased AI recommendations. Bias, fairness, equity, and inclusivity must all be evaluated from the start to limit the chance of unintentional harm and to prevent disillusionment with the technology. Several corporations and health care organizations, including the American College of Radiology and the American Medical Association, have already developed guidelines for the ethical development of AI technology.

Medicolegal challenges are significant as well. Who might be liable for AI-assisted decision-making if an adverse event occurs? Ultimately, who is responsible for the decision or action suggested by AI software? Our Whipple procedure example above is one of countless possible scenarios. Though physicians embrace the Hippocratic Oath (to “do no harm or injustice”), how might medicolegal constraints apply to AI software that assists in a life-or-death decision, if at all?

Of course, there will initially be a significant period of direct human input and supervision, as there is currently for self-driving cars, but it may lie on a continuum determined by complexity, criticality, or both. For example, a predictive model might flag a patient as high risk for postoperative complications, prompting a human to further stratify the patient for preoperative optimization. This relatively low-risk intervention could eventually be fully automated by AI software without human input. Conceivably, an optimization plan could also be offered for human consideration, as in the sketch below.
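
The sketch below illustrates one way this graded workflow might look in code: a model's risk estimate is routed to an automated pathway, an automated plan offered for human consideration, or mandatory clinician review. All thresholds and dispositions are hypothetical assumptions, not established practice.

```python
# A sketch of graded autonomy: route a model's risk estimate either to an
# automated pathway or to human review. Thresholds and actions are illustrative.
AUTO_THRESHOLD = 0.20    # below this, the low-risk pathway proceeds automatically
REVIEW_THRESHOLD = 0.50  # above this, a clinician must review before surgery

def route_preop_patient(predicted_risk: float) -> str:
    """Return the disposition for a predicted postoperative-complication risk."""
    if predicted_risk >= REVIEW_THRESHOLD:
        return "hold for clinician review and preoperative optimization"
    if predicted_risk >= AUTO_THRESHOLD:
        return "offer an automated optimization plan for human consideration"
    return "proceed on standard pathway"

for risk in (0.05, 0.35, 0.80):
    print(f"risk={risk:.2f}: {route_preop_patient(risk)}")
```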

Take another scenario with both ethical and medicolegal implications. For intraoperative event management, an algorithm might initially only warn of impending hypotension, relying on the clinician to intervene if appropriate. Over time, the AI may suggest an intervention, which the clinician could either follow or disregard. Ultimately, a fully developed AI would detect an event, intervene independently of human input, and notify a human, placing the onus on the human to interrupt the process if the intervention is improper. The sketch below schematizes this escalation ladder.
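
Here is a hedged sketch of that escalation ladder: the same detection logic behaves differently depending on the autonomy level an institution permits. The levels, alerting threshold, and interventions are hypothetical.

```python
# A sketch of the escalation ladder described above: identical detection logic,
# different behavior per allowed autonomy level. All details are hypothetical.
from enum import Enum

class AutonomyLevel(Enum):
    ALERT_ONLY = 1       # warn of impending hypotension; clinician decides
    SUGGEST = 2          # propose an intervention the clinician may disregard
    ACT_AND_NOTIFY = 3   # intervene autonomously, then notify the clinician

def handle_predicted_hypotension(probability: float, level: AutonomyLevel) -> str:
    if probability < 0.8:  # illustrative alerting threshold
        return "no action"
    if level is AutonomyLevel.ALERT_ONLY:
        return "alert: hypotension predicted within 5 minutes"
    if level is AutonomyLevel.SUGGEST:
        return "suggest: consider a vasopressor bolus or fluid challenge"
    return "acted: vasopressor started; clinician notified and may override"

print(handle_predicted_hypotension(0.9, AutonomyLevel.SUGGEST))
```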

One can see how critical the intermediate steps of AI development are in these cases. These steps also raise the possibility for unintentional harm if ethical and medicolegal consequences are not anticipated from the start of development.

Next steps

Many believe health care AI may cause fatal errors and will not meet currently hyped expectations. AI development processes must include the application of fairness and transparency standards. Data sourcing must adhere to data privacy, confidentiality, and data protection requirements. Legal experts believe that sector-specific revisions of the law should be adopted in some areas, particularly regarding non-discrimination and product liability for AI technologies (Philos Trans A Math Phys Eng Sci 2018;376:20170360).

Before we rely fully on AI, its algorithms and their impact on patients require rigorous, incremental evaluation by physician-scientists and clinicians. Many will attempt to lead guideline definition, and ASA will be among those expected to develop broad guidelines for members to follow as AI is incorporated into practice. Alternatively, regulatory and legal entities may establish guidelines for us. Regardless, physician engagement will be crucial for effective policy and guidance creation in this rapidly expanding field.