Automated medical technology is becoming an integral part of routine anesthetic practice. Automated technologies can improve patient safety, but may create new workflows with potentially surprising adverse consequences and cognitive errors that must be addressed before these technologies are adopted into clinical practice. Industries such as aviation and nuclear power have developed techniques to mitigate the unintended consequences of automation, including automation bias, skill loss, and system failures. In order to maximize the benefits of automated technology, clinicians should receive training in human–system interaction including topics such as vigilance, management of system failures, and maintaining manual skills. Medical device manufacturers now evaluate usability of equipment using the principles of human performance and should be encouraged to develop comprehensive training materials that describe possible system failures. Additional research in human–system interaction can improve the ways in which automated medical devices communicate with clinicians. These steps will ensure that medical practitioners can effectively use these new devices while being ready to assume manual control when necessary and prepare us for a future that includes automated health care.
Advanced medical technology is routinely used in the practice of medicine and automated medical devices are beginning to appear in the clinical environment. At first glance, automated systems are a powerful tool that physicians can use to prevent human error and improve patient care. Computers do not become fatigued or distracted and they recall information almost instantaneously. For example, closed-loop drug delivery systems may be safer and more accurate than physicians who administer drugs by hand.1,2 Electronic health record systems provide automated sepsis alerts and clinical recommendations based upon laboratory values and vital sign measurements. Recent studies describe robot-assisted intubation,3 robotic transesophageal echocardiography,4 and experimental oral surgery that may be performed without human intervention in the future.5 These advances portend a future in which patient care will be largely automated, with humans supervising autonomous medical devices.
The adoption of automated systems in the clinical environment poses challenges that must be addressed in order to maintain and improve patient safety. Medical devices have become increasingly complex and can fail in unexpected ways. Systems such as physiologic monitors, anesthesia machines, and intensive care unit (ICU) ventilators are controlled by software that is designed by biomedical engineers but managed by the clinician. The introduction of automated medical devices may cause previously unanticipated changes in workflow as the clinician is now expected to monitor for failures instead of performing the task him- or herself. Skill at performing a task manually will atrophy as practitioners become reliant upon an automated system; as a result, clinicians who rely heavily on automation may be poorly prepared to assume manual control when a system fails.6 In this article, we will discuss how automated medical technology affects the practice of anesthesia. We will also describe how other industries have dealt with the unintended consequences of imperfect automation, including under- or overreliance, degradation of manual skills, loss of trust in the automation, and management of system failures.7 We will then offer practical advice on how to use this understanding to improve patient safety.
Unintended Consequences of Automation: An Example from Commercial Aviation
The recent Boeing 737 Max 8 (The Boeing Company, USA) mishaps are perhaps the most recent example of how the malfunction of a new automated system combined with inadequate training can lead to a tragedy. The Max 8 had a tendency to pitch up when power was applied because its larger engines were mounted farther forward than in previous models. A software workaround (Maneuvering Characteristics Augmentation System) was created that would automatically lower the nose if the airplane appeared to be getting dangerously slow. This critical information was provided to the software by a single sensor that was prone to failure. Pilots who transitioned to this airplane were not informed about the aerodynamic problem with the airplane or the software that was designed to address it. Still, the airplane was certified and entered commercial service. During two flights (Lion Air Flight 610 and Ethiopian Airlines Flight 302), failure of the sensor caused the system to abruptly pitch the airplane down shortly after takeoff. The pilots recognized the problem with the flight path, but were not able to trace the failure to the Maneuvering Characteristics Augmentation System automation in time. Both airplanes crashed, causing the deaths of 346 people.
The high level of safety in commercial aviation depends partly on pilots being trained to recognize and respond to problems with automation. In these instances, however, the unique circumstances of the sensor failure, combined with pilots’ unfamiliarity with the system, led to disaster. During its investigation, the U.S. National Transportation Safety Board discovered significant flaws in the design process, the evaluation of the technology, and pilot training. At present, the entire fleet remains grounded.8 This is just one example of how errors in design, implementation, and training can result in the catastrophic failure of automated systems.
Advanced Medical Technology and Automation in Medicine
The Anesthesia Patient Safety Foundation has defined advanced medical technology as “medical devices and software systems that are complex, provide critical patient data, or that directly implement pharmacologic or life-support processes whereby inadvertent misuse or use error could present a known probability of patient harm.”9 Advanced medical technology that includes automation can make clinical care more efficient and improve patient safety because machines can accomplish many tasks more efficiently than humans. Machines never become bored or tired, nor are they biased or delayed by emotional responses to a critical event. Machines may be more specific and sensitive than humans in detecting subtle changes in a patient’s status.10
Perhaps the most obvious example of how medical technology has evolved in the practice of anesthesia is the anesthesia machine itself. Anesthesia machines began as a simple means of delivering a mixture of compressed gases and volatile anesthetics to the patient. Typically, a simple bellows ventilator might have been driven by compressed oxygen, regulated by valves that were electrically or pneumatically triggered. The correct direction of the flow of gases in the circle breathing system was ensured by mechanical one-way valves, visible under clear plastic caps. The simplicity of the machine and the fact that all its working parts could be seen were considered to be important components of its safety.11
As the anesthesia machine has become more complex, however, many of its formerly visible components have been hidden or replicated with software. The Draeger Perseus A500 (Draeger, Inc., USA) is an example of the latest generation of anesthesia workstations. It uses an electronically controlled turbine to ventilate the patient. The system is controlled by the clinician with a touch screen interface that has several layers of menus. Because the turbine is nearly silent, artificial breath sounds are generated and played through a loudspeaker to provide auditory feedback to the clinician. Most of the components, including the one-way valves, are hidden from the user. Although this design confers many advantages, including advanced modes of ventilation, some aspects of the system’s operation may be difficult for the user to understand. This may make it more difficult for the clinician to recognize and troubleshoot malfunctions in the machine.
Automation is defined as a machine that either carries out or augments a function that was previously performed by a human.12 Automated systems can be significantly faster and more efficient than humans at many tasks. Automation has already become part of daily medical practice and will only become more prevalent in the future. Ventilators, medication administration systems (i.e., infusion pumps), and diagnostic equipment employ varying levels of automation.13,14 One recent report describes a deep learning algorithm that is better able to predict the response to propofol and remifentanil infusions than established pharmacokinetic models.15 Closed-loop control of anesthetic agents, fluids, and ventilation has been found to produce better neurocognitive outcomes than manual control of anesthesia.16 Another recently published study demonstrated that computers can triage chest radiographs in real time.17 These developments are only the first examples of how advances in technology will expand the potential for intelligent medical devices. As sensors and algorithms become more sophisticated, it is possible that machines may one day be able to evaluate and treat patients and perform procedures autonomously while under human supervision.
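The closed-loop systems mentioned above continuously adjust a therapy against a measured signal. As a minimal sketch of the idea, the following proportional-integral controller adjusts a hypothetical infusion rate toward a target processed-EEG index. All gains, limits, and the target are illustrative assumptions, not validated clinical parameters.

```python
# Hypothetical proportional-integral (PI) step for closed-loop drug delivery.
# Gains (kp, ki) and rate limits are illustrative assumptions only.
def pi_controller_step(measured_index, target_index, integral, dt,
                       kp=0.05, ki=0.005, min_rate=0.0, max_rate=200.0):
    """Return (infusion_rate, updated_integral) for one control step."""
    error = measured_index - target_index   # index above target: patient too light
    integral += error * dt                  # accumulate error over time
    rate = kp * error + ki * integral
    # Clamp to the actuator's range; a real device would also need
    # anti-windup logic and many additional safeguards.
    rate = max(min_rate, min(max_rate, rate))
    return rate, integral
```

The clamp illustrates one reason such systems can surprise the user: once the commanded rate saturates at a limit, the controller's output no longer tracks the error in the way the clinician might expect.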
Introduction of automated medical devices will not eliminate the need for human operators, but the nature of their work may be changed in ways that are difficult to anticipate.12 Automation does not necessarily make a process more reliable; it may replace operator error with design error.18 Clinicians must therefore trust automated systems to perform correctly while maintaining vigilance for rare but potentially catastrophic failures. Anesthesia professionals who use these systems can improve patient safety by understanding how automated systems work and the unique challenges imposed by working with machines that use sophisticated algorithms to perform patient care. For example, operators in industries such as transportation and nuclear power routinely train for system failures (when a device stops working or hands operation back to the user) or automation surprises (in which a device takes an action that is unexpected by the user). Few devices that employ automated medical technology currently approach the high levels of automation seen in other industries, but any medical device can fail in unexpected ways, requiring the clinician to quickly intervene to prevent a poor outcome. Training for clinicians should include techniques similar to those used in the aviation and nuclear power industries and can be guided by the already established science of human systems integration.
Automation and Human Performance
The introduction of automated medical technology into clinical practice creates the potential for errors that can be caused by the device, the clinician, or the human–machine interface. In addition to system malfunctions, failures of medical devices can be caused by several factors: the user may have programmed the system incorrectly; the process may have been designed incorrectly; or a component within the device may fail. Whenever a person interacts with a highly automated system, the original workflow is replaced by new tasks that include supervising the device and troubleshooting associated problems. These new skillsets must be acquired through training.19 For example, nearly all passenger airplanes can be flown by the autopilot from shortly after takeoff through approach and landing, causing the role of airline pilots to evolve into that of a system supervisor.20–24 In addition to learning hand-flying skills, pilots must now be trained to monitor the automated systems and to take over control if necessary.25 A similar evolution in the skills of healthcare providers may be necessary with the widespread adoption of automated medical technology.
One example of how human error, malfunction of an automated system, and skill atrophy can interact to cause a disaster is the Air France Flight 447 crash. While flying through an area of thunderstorms, one of the airplane’s sensors became covered with ice, causing a malfunction of the airplane’s airspeed indicator. As a result, the autopilot unexpectedly handed control back to the flight crew. The flight management system reverted to a mode called “alternate law” in which many of the protections built into the flight management software are disabled. The flight crew was unprepared to hand-fly the airplane and no longer understood which functions were automated and which required manual control. The crew had not been trained to manage a partial automation failure combined with an airspeed well below the normal bounds of operation. Their hand-flying skills were rusty from disuse. Confusing messages on the electronic centralized aircraft monitor further impaired the flight crew’s ability to regain control of the aircraft, as did confusion as to who was actually flying the airplane.26 As a result, a flyable airplane crashed into the Atlantic Ocean, killing everyone aboard.
Well-designed systems can improve patient care when used by clinicians who have been properly trained. For example, computerized provider order entry and clinical decision support systems commonly offer automated medication recommendations. These systems can help to decrease medication prescribing errors and reduce mortality in the ICU.27 Commercially available closed-loop ventilators automatically adjust parameters in order to ventilate patients with lower pressure, volume, and Fio2 than conventional ventilators.13 Unintended consequences and new errors can, however, arise from the implementation of automated systems. These may be related to design flaws, new workflows, inadequate training, or problems with the human–system interface. In one example, a recent study on computerized provider order entry prescribing errors in elderly patients found that the majority (96%) were due to human–machine interactions.28
Interaction with an automated system is affected by a clinician’s experience and confidence in his or her skills. Clinicians who have less experience and confidence in their task may overly rely on automation, whereas those with more experience may choose to bypass the automated technology altogether. Less experienced physicians are more likely to change their decisions based on automated prompts when using computerized decision support systems.29
Varying levels of automation complicate the human–machine interface. Few automated systems offer a specific choice in which either the human or the machine exclusively performs a given task; many systems use an intermediate level of automation. For example, a self-driving car could recommend a change in the planned route, but the human driver would be required to acknowledge that road conditions were appropriate before the car would change direction. In medicine, an electronic health record can offer a drug recommendation, but it then requires that the clinician choose whether to accept it. Removing an operator completely from an automated task decreases performance recovery when an automated system fails.30 An intermediate level of automation maintains operator involvement, may improve situation awareness, and can reduce the risk of performance impairment. Incorporating this intermediate level of automation in medical equipment may therefore improve safety. The user interface should ideally be designed to help the operator to maintain situation awareness in order to detect failures early and facilitate troubleshooting.
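The recommend-then-confirm pattern described above can be sketched in a few lines. The drug, threshold, and dosing rule below are hypothetical illustrations, not clinical logic; the point is only that the system proposes and the clinician disposes.

```python
# Sketch of an intermediate level of automation: the system recommends an
# action but never executes it without explicit clinician confirmation.
# The creatinine threshold and dose advice are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str

def recommend_dose_adjustment(creatinine_mg_dl: float) -> Recommendation:
    # Illustrative rule only, not a clinical dosing algorithm.
    if creatinine_mg_dl > 1.5:
        return Recommendation("reduce dose", "elevated serum creatinine")
    return Recommendation("continue current dose", "renal function within range")

def apply_if_confirmed(rec: Recommendation, clinician_confirms: bool) -> str:
    # The human stays in the loop: nothing changes without confirmation.
    if clinician_confirms:
        return f"Applied: {rec.action} ({rec.rationale})"
    return "No change; recommendation dismissed by clinician"
```

Keeping the confirmation step explicit in the interface is one concrete way to preserve the operator involvement and situation awareness described above.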
The electronic health record is one example of how automated medical technology has been introduced into clinical practice. It has improved many aspects of patient care, increasing the legibility of records, providing clinical decision support, and allowing patient data to be aggregated for research. Poorly designed or implemented electronic health records can, however, be a source of distraction, increased workload, and error.31,32 Some studies suggest that the alerts provided by electronic health records may not improve care. For example, at least one systematic review failed to find evidence that sepsis alerts improved measures of treatment.33 Large numbers of alerts with low positive predictive value, especially if they are for low-stakes problems, cause alert fatigue and may be ignored by practitioners.34 Alerts from electronic health records related to laboratory results, medication refills, and other reminders cause an increase in workload and may be irrelevant in the operating room. Repeated alerts, especially for the same patient, cause a decrease in physician response and increase the number of overrides.34 In contrast, overreliance upon a clinical decision support system increases the risks of both failing to detect prescribing errors and accepting false-positive alerts (thus prescribing the wrong medication).35 Strategies to mitigate these problems are currently under investigation. In order to relieve cognitive stress and burnout in physicians using electronic health records, for example, Gregory et al. recommend protected time for alert management, fewer alerts, and other electronic health record improvements.36 The performance of clinical alerts can be improved by disabling low-stakes alerts and by changing the criteria for an alert to increase its relevancy.37 This suggests that the ability to temporarily disable most electronic health record alerts during critical periods (e.g., while the patient is in the operating room) may improve safety.
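A filtering policy of the kind suggested above, suppressing low-stakes alerts during critical periods and rate-limiting repeats for the same patient, might be sketched as follows. The severity labels, alert fields, and five-minute repeat window are assumptions for illustration.

```python
# Hypothetical alert-filtering policy: defer low-stakes alerts while the
# patient is in a critical period (e.g., the operating room) and suppress
# repeats of the same alert within a time window to reduce alert fatigue.
def should_fire(alert, in_critical_period, last_fired_s, now_s,
                repeat_window_s=300):
    """alert: dict with 'patient_id', 'kind', and 'severity' ('low'/'high').
    last_fired_s: dict tracking when each (patient, kind) alert last fired."""
    if in_critical_period and alert["severity"] == "low":
        return False  # low-stakes alerts deferred during critical care
    key = (alert["patient_id"], alert["kind"])
    if key in last_fired_s and now_s - last_fired_s[key] < repeat_window_s:
        return False  # same alert fired recently; avoid alarm flood
    last_fired_s[key] = now_s
    return True
```

High-severity alerts always pass the first check, so the policy trades a shorter alert queue for an assumption that severity labels are assigned correctly, which is itself a design decision requiring validation.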
Automated systems can lead to skill atrophy, trust failure, system failures, automation surprise, mode confusion, automation bias, and boredom, all of which can impair a clinician’s ability to safely use this technology. The challenges posed by these problems can, however, be mitigated by understanding why they occur and how clinicians can be prepared to manage them.
Overuse of automation may result in a loss of manual skills.38–41 Overuse may be caused by the operator placing high levels of trust in the automated system,42 a tendency to become over-reliant on the system,12 or complacency.43 Although the effects of automation on the loss of clinical skills have yet to be investigated, some authors have expressed concern that surgeons whose practice consists primarily of minimally invasive procedures may lose their ability to convert to an open procedure if necessary.44 Neurology residents may be losing their ability to conduct detailed neurologic examinations because they now rely upon advanced diagnostic imaging to make a diagnosis. As a result, residency training programs in neurology are beginning to discuss inclusion of the physical examination in their curricula so that clinicians can learn and maintain this important skill.45 General surgical residents are also facing this problem: as they perform an increasing number of laparoscopic and robotic procedures, they may ultimately lose the technical skills to safely perform an open surgical procedure.46
An operator who does not trust an automated system will be reluctant to use it.12,47 The mid-air collision of Bashkirian Airlines Flight 2937, a Tupolev Tu-154 (Tupolev, Russia), and DHL Flight 611, a Boeing 757, over Überlingen, a small town near the Swiss border, in 2002 was the result of multiple factors, including lack of vigilance by air traffic control, poor staffing, and conflicting regulations. The two aircraft were on a collision course and were about to lose separation, but this was not immediately noticed by the controller responsible for both aircraft. The controller mistakenly commanded the Tupolev to descend while the Traffic Collision Avoidance System in the Tupolev issued a “climb” instruction. The captain, who did not trust the automated system and was possibly unaware that the Boeing crew was receiving a complementary instruction that would have avoided the collision, elected to descend. The two airplanes collided in mid-air, killing 71 people.48
Successful human–machine interaction relies heavily on the automation performing as expected and the operator’s understanding of what the system is doing. A clinician may lose trust in a system that is unreliable and will therefore be less likely to use it. For example, clinicians may silence or disable alarms that have high false positive rates, preventing them from successfully averting unwanted outcomes signified by the few true positives for that alarm.12,49 Similarly, if a clinician does not trust a clinical decision support tool, he or she may ignore or override suggestions, decreasing the benefits that automation can provide in patient care.
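The arithmetic behind this loss of trust is worth making explicit: even an alarm with good sensitivity and specificity has a low positive predictive value when the monitored event is rare, so most activations the clinician hears are false. The numbers below are illustrative, not drawn from any specific device.

```python
# Positive predictive value of an alarm from its sensitivity, specificity,
# and the prevalence of the monitored event. Illustrative values only.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence              # events correctly flagged
    false_pos = (1 - specificity) * (1 - prevalence) # non-events flagged anyway
    return true_pos / (true_pos + false_pos)

# With 95% sensitivity and 95% specificity but a 1% event prevalence,
# only about 16% of alarms represent a true event.
```

This is one quantitative reason that reducing false alarms, rather than exhorting clinicians to be more vigilant, is the more effective path to preserving trust in alarm systems.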
In both emergency medical transportation50 and aviation,51–53 perceptions of poor reliability can also degrade the user’s trust in the system. Human operators become reluctant to use a system that is plagued by automation failures and false alarms.54–56 Moreover, a failure of one automated system can lead to decreased trust in a similar system that is working properly.57,58 System-wide trust failure describes the resultant loss of trust that impairs the human–automation interaction.59–62 As automation is increasingly incorporated into clinical practice, our specialty must take the lead in learning how to effectively use automated clinical systems while encouraging the public to trust new medical technology. This will also include addressing patients’ apprehension when a new medical technology is introduced.
Failure of an automated system may be caused either by malfunction of the device or by operator error and may suddenly increase a clinician’s workload.63 For example, a ventilator used in many ICUs was recently recalled when a life-threatening software malfunction was discovered. Ventilators with the defective software would suddenly stop ventilating the patient and discontinue supplemental oxygen. The error message indicating that this failure had occurred was “panel connection lost.” In order to prevent patient harm, ICU staff were required to recognize this problem immediately and convert to an alternative mode of ventilation.64 Even if the clinician has maintained his or her skills, he or she may become overreliant on an automated device and fail to notice a problem.65 (Fortunately, the aforementioned ventilator sounded a high-priority alarm when it failed.)
The operator’s trust in an automated system may cause him or her to allow an increase in the number of distractions66 or cause undesirable behavioral changes,67–70 such as a willingness to accept greater risk.71,72 For example, a clinician may become distracted while entering data into an electronic health record system and fail to notice an unrelated event such as surgical bleeding. A clinician may also become distracted while programming an infusion pump and inadvertently select an incorrect drug or infusion rate. This may become apparent only when there is a significant change in the patient’s vital signs. Managing a patient who has become unstable while troubleshooting the infusion pump may then lead to task saturation, further impairing the clinician’s ability to solve the underlying problem.
Discrete tasks, such as ventilating a patient at a specific rate and tidal volume, can be performed more efficiently by automated systems than by humans. When a machine performs a task, however, the feedback received by the user is different than when the user performs the task him- or herself.41,73 For example, if the clinician formerly stood next to the patient while providing care but now operates a system from a console that is remote from the patient (as happens during robotic surgery), the fundamental nature of the task has changed.74 The physical location is different and the signs and signals that the clinician receives also change. This can also occur when an anesthesia professional is caring for a patient in a remote location such as a magnetic resonance imaging or radiation therapy suite: The clinician has only the physiologic monitors to rely upon and has no direct access to the patient.
A clinician who is unaware of how a medical device can fail might place too much trust in the system and have difficulty assuming control if necessary. Providing information about what the device is doing and why will allow the clinician to understand how an automated system functions and to detect a degradation in the system’s performance more rapidly. Human–machine interactions can be improved by increasing the transparency of the system, in which the medical device provides information to the user about why it took a specific course of action.75 For example, an ICU ventilator that employs closed loop control might highlight the parameters that it is using to wean a patient. This would improve the clinician’s ability to resume manual control when a system failure occurs.76 This information should be solicited from the device manufacturer or may be gleaned through a careful review of the product manual. All clinicians should receive generalized training in how to manage automation, and before using medical devices in patient care, trainees and experienced physicians alike should receive instruction on how each device can fail and how to manage these malfunctions.
Automation Surprise and Mode Confusion
When an automated system malfunctions, the human operator, who may not have been monitoring the automated system closely, must quickly identify the failure, assess the situation, and resume manual control.38,77 Automation surprise occurs when a machine performs an action that the operator did not expect.78 Mode confusion occurs when the operator does not understand the automated system’s current state, either because of a lapse in supervision of the system or a poor human–system interface.79 For example, a ventilator may be set to pressure support mode, but the operator believes that it is delivering synchronized intermittent mandatory ventilation. Mode confusion may prevent the clinician from fully understanding what the device is doing if a critical event occurs. For example, on some anesthesia machines, changing the respiratory rate without adjusting the inspiratory time will change the inspiratory:expiratory ratio. This may in turn cause an abnormal pressure or capnography waveform that the clinician may not expect if he or she is not familiar with this piece of equipment. The clinician’s workload has suddenly increased, possibly causing a startle response, which further impairs his or her performance while searching for the underlying cause.25 The resulting cacophony of alarms and alerts from the ventilator (alarm flood)49 may also increase the clinician’s confusion without helping him or her to better understand the etiology of the problem. Subtle differences in the user interface may also increase the risk of mode confusion (fig. 1).
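The inspiratory:expiratory arithmetic in the ventilator example above can be made concrete. On a machine that holds inspiratory time constant when the rate changes, only the expiratory time shrinks, so the I:E ratio shifts without any explicit action by the clinician. The values below are illustrative.

```python
# Worked example of the mode-confusion scenario: a ventilator that holds
# inspiratory time (Ti) constant shortens only expiration when the rate rises.
def ie_ratio(rate_bpm, ti_s):
    cycle_s = 60.0 / rate_bpm   # total time available per breath (s)
    te_s = cycle_s - ti_s       # expiration gets whatever time remains
    return ti_s / te_s          # I:E expressed as a single number (1:x -> 1/x)

# At 10 breaths/min with Ti = 2 s: cycle 6 s, Te 4 s, so I:E = 1:2 (0.5).
# Raising the rate to 15 breaths/min with the same Ti: cycle 4 s, Te 2 s,
# so I:E becomes 1:1 (1.0), a change the clinician may not have intended.
```

A display that showed the derived I:E ratio alongside the set rate and inspiratory time would make this hidden coupling visible, which is exactly the kind of transparency discussed earlier.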
Automation surprises and mode confusion may initially be difficult to detect and manage. Depending upon the nature of the malfunction, a human operator may have little time to intervene. The clinician may initially react to the failure with a startle response that may impair his or her ability to manage the problem.25 If the clinician is unable to determine the reason for the malfunction, the best solution may be to revert to the lowest level of automation possible (e.g., manual ventilation), disconnecting the device in question from the patient if possible. Training the clinician in the use of the equipment, its failure modes, and monitoring of the automation may decrease the risk of automation surprise.41,73,78,80–82 Training curricula should be revised to prepare clinicians for system failures by including unpredictability and device malfunctions in simulation training, and by teaching clinicians metacognitive skills.25,83 Such training would help clinicians to engage more effectively with device interfaces, better maintain situation and mode awareness, and restore the automated device to its intended clinical function. Human-systems integration research will facilitate the development of new displays that will help clinicians to understand what an automated medical device is doing and diagnose system failures more quickly.
Physicians who rely too much on an automated system (e.g., a clinical decision support system) may develop automation bias, in which the user prioritizes suggestions from the automated system while disregarding contradictory information from other sources.6 This effect typically occurs when a clinician must accomplish multiple tasks and when manual tasks compete with the automated task for attention, as might happen in a busy clinic or while caring for a sick patient in the ICU. Automation bias may cause errors of commission, in which users implement incorrect recommendations, and omission, in which users fail to recognize a problem because they were not notified by the automated system.84 Automation bias may lead a clinician to overrely on alerts, prescribing medications only when they are suggested by the clinical decision support or computerized provider order entry. Clinicians may also accept automated recommendations for treatment even when they are incorrect. One study concluded that providers commit 58.8% fewer errors when provided with correct computer decision support and 86.6% more errors when this support is incorrect.85 Automation bias seems to be exacerbated by multitasking and by increasing cognitive load. Although no easy solutions currently exist, the best recommendations include reducing workload and distractions (perhaps by asking other personnel to perform noncritical clinical tasks). Device manufacturers and personnel responsible for designing electronic health record user interfaces can decrease the risk by presenting verification information with the clinical recommendation.86
Boredom and Vigilance
The seal of the American Society of Anesthesiologists features the word “vigilance.” Much of anesthesia practice is a vigilance task during which the provider monitors vital signs and the surgical procedure in anticipation of a change in patient status.87 Clinicians must maintain vigilance for extended periods of time in order to detect relatively rare, but critical, events. Sustained attention for long periods of time causes cognitive fatigue; focusing attention on a monotonous task ultimately causes degradation of performance over time.88 In boring environments with a low task load, operators may find other tasks to help maintain some level of attention, and possibly to stay awake.89 Rest breaks and secondary tasks decrease monotony and improve vigilance when used correctly.90 Although vigilance tasks have been historically considered to be unstimulating and not mentally demanding, it is now understood that vigilance tasks produce a high level of subjective workload and cognitive stress.91 Warm et al. have suggested that these factors should be considered when designing environments and tasks that require high levels of vigilance.92
As a greater number of tasks become automated, clinicians may become bored and tempted to engage in ancillary activities while caring for a patient. Boredom in the workplace has been reported in occupations including unmanned aerial vehicle operation, train driving, and commercial flight operations, and has been directly responsible for mishaps and near misses in aviation.93 Operators who function in an environment without manual tasks may experience mind wandering and complacency after a period as short as 20 min.94 Unfortunately, there is no simple solution to alleviating vigilance decrement and boredom.94 To combat boredom in the operating room, physicians may engage in activities unrelated to patient management such as viewing Web sites or engaging in activities on a smart phone.95 These tasks may have the beneficial effect of helping a clinician to maintain some level of attention or (especially late at night) simply to stay awake,89 but may be considered to be unprofessional. One potential solution may be to engage in an active scan (see Recommendations, below) and to remain engaged with the surgical team.
Lessons Learned from Aviation and Autonomous Vehicles
Research on automation spans multiple fields, including aviation, autonomous ground and sea vehicles, and medical robotics. The aviation industry has spent decades focusing on best practices for safety amid an influx of new automation in the cockpit. Given its broad experience with automation, the commercial aviation industry may offer some of the best examples of how to balance technology with safety. These experiences may also be applicable to the adoption of automation in health care.
One important lesson that can be drawn from aviation is that clinicians should receive training that will guide their interactions with automated medical devices. Airlines and corporate flight departments incorporate management of automation into their initial and recurrent training. Line-oriented flight training uses scenario-based training to address real-world problems that are likely to occur during flight.96 Each line-oriented flight training scenario forces the pilot to work through a specific problem with human–computer interaction, automation surprise, complacency, or situational awareness.97 Anesthesia professionals may benefit from similar training that incorporates a challenge related to automation in the context of system management, teamwork, and decision making. Additional research can help to develop educational programs, possibly by creating scenarios in which clinicians must manage an automation failure or automation surprise while simultaneously treating an unrelated problem.
The aviation industry in the United States is controlled and monitored by the U.S. Federal Aviation Administration and the U.S. National Transportation Safety Board, both of which can rapidly address new threats to safety. The U.S. Federal Aviation Administration has the authority to require compliance with regulations that affect automation in the cockpit. One example is the Traffic Collision Avoidance System, a device located in each commercial airplane that serves as a last resort for avoiding a collision.47 If the Traffic Collision Avoidance System detects that two airplanes are on a collision course, it immediately issues a resolution advisory (e.g., instructing one pilot to descend while simultaneously instructing the other pilot to climb). When this system was first introduced, the large number of false alarms caused pilots to ignore the alerts.98 The U.S. Federal Aviation Administration responded by mandating Traffic Collision Avoidance System use and requiring that pilots follow the resolution advisory. Although some false alarms still occur, the Traffic Collision Avoidance System is believed to be responsible for significant improvements in airspace safety.99 In health care, it would be reasonable to expect compliance with the recommendations of a similar decision support system if it had a high predictive value for a critical event. In order to achieve this, however, the system must be highly reliable and broadly implemented with adequate training. The challenge of developing uniform standards in an industry with a patchwork of regulatory agencies is best illustrated by the complicated history of driverless ground vehicles.
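The defining property of a resolution advisory, as described above, is that the two advisories are generated as a complementary pair so they can never push the aircraft toward each other. The toy sketch below illustrates only that pairing; the function name, inputs, and advisory strings are assumptions for illustration, and real Traffic Collision Avoidance System logic is far more elaborate (including coordination between the two aircraft over a data link):

```python
# Toy sketch of complementary resolution advisories. This is NOT the actual
# TCAS algorithm; it only illustrates the paired climb/descend behavior
# described in the text.

def resolution_advisories(alt_a_ft: float, alt_b_ft: float) -> tuple[str, str]:
    """Return complementary advisories for two converging aircraft.

    The higher aircraft is told to climb and the lower one to descend,
    so the two advisories always diverge vertically.
    """
    if alt_a_ft >= alt_b_ft:
        return ("CLIMB", "DESCEND")
    return ("DESCEND", "CLIMB")
```

Because the advisories only work as a pair, the system is safe only if both pilots comply, which is why the mandate described above mattered.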
Automatic anticollision systems are highly effective in driverless ground vehicles.100 Although much of the public is not yet ready or willing to ride in completely autonomous vehicles,51,101 the technology will eventually change from automation that assists the driver to automation that replaces the driver,102 particularly as the public becomes more aware of the potential safety improvements.103 Human drivers are currently required to monitor the automation and intervene when something goes wrong, with warnings and alerts that allow the driver to override the automation. The automobile industry has not been able to develop a uniform standard, however; in the absence of national regulations similar to those that govern aviation, many competing approaches to this problem confuse the driving public. This highlights the need to develop standards that apply across platforms and medical specialties, and to avoid the patchwork quilt of automation that typifies electronic health record implementation.
In health care, the International Electrotechnical Commission technical standard 62366 defines a process by which medical device manufacturers can evaluate the usability of a piece of equipment. Usability engineering, combined with information gleaned from adverse event databases, can help to improve patient safety by identifying user errors.104 This human performance evaluation allows the manufacturer to identify and mitigate risks associated with both correct and incorrect use of the device. Development of international standards, especially in the application of human factors to medical equipment design, can improve the safety of automated medical devices.105 These standards will facilitate uniform adoption and monitoring of automated systems, so that lessons learned can be disseminated throughout the industry.
Education is the first step toward the safe use of automated medical technology. The World Federation of Societies of Anesthesia has recently published a position statement that highly recommends training in the use and safety of equipment and suggests formal certification and documentation of this training.106 The Anesthesia Patient Safety Foundation also recommends that clinicians be formally trained to use new equipment and be required to demonstrate that they can consistently use medical devices safely and effectively.9 Anesthesia professionals should also receive ongoing training as software is upgraded or new features are added. To mitigate the risks of automation failure, training should include both routine operation and management of system failures. Including unpredictable scenarios or introducing variability into a scenario may improve the ability of a trainee to manage unexpected problems.107 For example, a simulation of malignant hyperthermia might include a failure of a monitor or the anesthesia machine. Simulation instructors can facilitate preparation for mode confusion, automation surprises, and malfunctions by adding unexpected equipment failures to their scenarios. Formal training on the use of advanced medical technology and automated devices will become increasingly important as these technologies are added to the environment in which we work. This training should be provided, documented, and possibly required by healthcare institutions.
Clinicians can employ an active scan in order to maintain vigilance and better monitor medical devices. When performing an active scan, the clinician observes the indications on each medical device and the activity in the operating room (e.g., the surgical field), moving in an orderly pattern from one to the next (fig. 2). The information that each device displays is then crosschecked with information from other sources, which will also help the clinician to detect an artifact. For example, the heart rate derived from the electrocardiogram can be compared to the heart rate from the pulse oximeter, which is then compared to the arterial blood pressure tracing. A significant disparity might indicate that one monitor is malfunctioning, or that a physiologic change requires investigation (e.g., electrical activity on the electrocardiogram but no tracing on an arterial blood pressure waveform). Although the benefits of an active scan have not been studied in health care, variations in gaze patterns have been shown to affect the ability of airline pilots to control an airplane during approach and landing.108
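The cross-check at the heart of the active scan can be expressed as a simple consistency test. The sketch below is a thought experiment, not clinical software; the device names, readings, and 10% tolerance are assumptions chosen only for illustration:

```python
from statistics import median

# Minimal sketch of the cross-check described above: compare the heart rate
# reported by each monitor against the median of all sources and flag
# outliers. Device names and the 10% tolerance are illustrative assumptions.

def cross_check(rates_bpm: dict[str, float], tolerance: float = 0.10) -> list[str]:
    """Return a warning for each source that disagrees with the median rate."""
    ref = median(rates_bpm.values())
    return [
        f"{source}: {rate:.0f} bpm differs from median {ref:.0f} bpm"
        for source, rate in rates_bpm.items()
        if ref > 0 and abs(rate - ref) / ref > tolerance
    ]

# ECG and pulse oximeter agree, but the arterial line shows no pulse --
# the disparity itself is the signal that something needs investigation.
warnings = cross_check({"ECG": 72, "pulse oximeter": 74, "arterial line": 0})
```

A flagged disparity does not say which source is wrong; as in the text, it prompts the clinician to investigate whether the cause is artifact or physiology.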
Clinicians should be ready to take over control of any medical device if it fails or malfunctions, and should understand where the device gets its information, how that information is used, and what will happen if it is flawed. The clinician should carefully review the device’s settings as well as patient information (including physiologic parameters) to understand what the machine has been programed to do and how well it is performing its functions. Automation surprises, mode confusion, and automation failure can often best be managed by reverting to the lowest level of automation possible.109 In the case of a ventilator or anesthesia machine that performs an unexpected action, for example, one should revert to manual ventilation. If necessary, the patient should be disconnected from the machine and ventilated with a self-inflating bag. If an infusion pump begins to deliver an incorrect dose of a medication, stop the pump; if necessary, disconnect the tubing from the patient’s infusion line. As medical devices become increasingly automated, manufacturers should design for greater transparency and provide effective methods of monitoring automated processes; this will allow practitioners to take over manual control more seamlessly. Transparency can be achieved by showing the clinician where the device is getting its information, how trustworthy that information is, and how it is being used to make decisions.
Inattentional blindness may prevent even a trained observer from seeing something that is unexpected,110 preventing a clinician from detecting an incorrectly programed pump or ventilator. Clinicians should be properly trained to actively search for sources of error in automation. One example of risk mitigation is to examine multiple distinct data points that are associated with a given process to ensure that the programming is correct. In the case of a drug infusion, for example, the clinician can check the weight-based, programed infusion rate and compare that to the rate in milliliters per minute. An infusion that will take significantly less or more time than expected to complete may be a warning that the pump has been incorrectly programed. Although these steps may seem obvious, automation failures and surprises can be confusing and can rapidly progress to become a critical event, so the best time to think through potential problems is before they occur.
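The rate and completion-time check described above amounts to a unit conversion that can be done mentally or on paper. The sketch below works through one hypothetical example; the drug, concentration, bag volume, and plausible-duration bounds are illustrative assumptions, not recommendations:

```python
# Sketch of the infusion sanity check described above. The drug,
# concentration, and expected-duration bounds are hypothetical examples.

def infusion_ml_per_hour(dose_mcg_kg_min: float, weight_kg: float,
                         conc_mcg_per_ml: float) -> float:
    """Convert a weight-based dose (mcg/kg/min) to a pump rate in mL/h."""
    return dose_mcg_kg_min * weight_kg * 60 / conc_mcg_per_ml

# Hypothetical example: 0.1 mcg/kg/min for a 70 kg patient
# from a 16 mcg/mL preparation in a 250 mL bag.
rate = infusion_ml_per_hour(0.1, 70, 16)   # 0.1 * 70 * 60 / 16 = 26.25 mL/h
hours_to_empty = 250 / rate                # about 9.5 h for the 250 mL bag
# A bag that would empty in minutes, or last for days, suggests a
# programming error and should prompt a recheck of the pump settings.
```

Comparing the independently calculated mL/h figure with the rate displayed on the pump gives the clinician two distinct data points for the same process, which is the risk-mitigation step described in the text.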
It may seem impossible for an individual clinician to change the way that medical technology is designed and marketed. Medical device manufacturers typically respond primarily to the needs of the global market when designing new equipment, not to the requests of individual clinicians. Large group practices and health systems may, however, be able to push the market to include training or new safety features. The U.S. Veterans Health Administration, for example, has analyzed incident reports and device use histories and communicated this information to personnel responsible for purchasing new equipment, with the explicit goal of “pushing” the market toward safer solutions.104
It is likely that an increasing number of automated processes will be introduced into medical practice as technology continues to improve. Adopting these new systems safely requires that physicians and other healthcare leaders ensure that the unintended consequences of automation can be mitigated. Additional research into how automated medical devices can fail will facilitate improvements in design, use, and training. As in the airline industry, clinicians should receive training in human–system interactions. This training should be incorporated into all aspects of medical education, including undergraduate medical education, residency, and continuing medical education. Topics should include vigilance, management of system failures, and maintaining manual skills. Leaders in simulation-based education should develop scenarios that integrate equipment malfunction. Clinicians should receive training in alarm management to minimize the number of false and misleading alarms to which they are exposed, in order to prevent alarm fatigue.49 Research is urgently needed into how to keep clinicians engaged in patient care as an increasing number of tasks become automated. Finally, we recommend that professional societies develop guidelines to address these new requirements for training and implementation. These recommendations will help to ensure the safe, effective adoption of automated medical technology in the operating room and throughout the practice of medicine.
The authors wish to thank Anna Clebone Ruskin, M.D. (Assistant Professor of Anesthesia and Critical Care) and Michael F. O’Connor, M.D. (Professor of Anesthesia and Critical Care) at the University of Chicago (Chicago, Illinois) for their thoughtful review of the manuscript and insightful comments.
Support was provided solely from institutional and/or departmental sources.
Drs. Ruskin, Corvin, and Rice are partially supported by Federal Aviation Administration Cooperative Research Agreement 692M151940006: Air Traffic Organization Alarm Management. This funding did not support any of the work involved in the preparation of this manuscript. Dr. Winter declares no competing interests.