TRANSLATIONAL science, a new term coined to label the ancient practice of devising something helpful to the sick, long predates the first physician–scientist, Paracelsus. In Basel, Switzerland, circa 1530, Paracelsus, the miracle doctor known as “the Luther of Medicine,” introduced chemistry to medicine, discrediting alchemy and discovering ether analgesia. René Laennec introduced the stethoscope in 19th-century France to avoid putting his ear on a bared female bosom. Although ether’s introduction fits the translational science concept, Morton’s inhaler does not, being closer to some of my “gadgeteering.”
In this article, I trace my small role in translating laboratory science to improve the specialty of anesthesiology. My original need for rapid blood gas analysis was for my National Institutes of Health (NIH) dog lab studies of respiratory dead space and lung gas exchange. After the invention and development of blood gas electrodes, I sought other physiologic problems in anesthesia and cardiorespiratory physiology, particularly at high altitude, that required analysis of arterial blood partial pressure of oxygen (Po2), arterial blood partial pressure of carbon dioxide (Pco2), pH, saturation of hemoglobin with oxygen (Sao2), and their derived indices.1 This eventually led me, over much of my professional life, to gadgeteer for health care.
From Physics to Gadgeteering for Medicine
I majored in physics at Haverford College in a suburb of Philadelphia, Pennsylvania, during World War II. Because some physicists were exempt from the draft to participate in secret research efforts, after graduation I was ordered to the Massachusetts Institute of Technology Radiation Laboratory, in Cambridge, Massachusetts, where I spent 2 war years developing radar equipment. On August 6, 1945, a bomb, designed by my fellow physicists, ended America’s age of innocence by vaporizing the people of Hiroshima. Yes, the war ended but … Appalled by physics, I resigned from the Massachusetts Institute of Technology to go to medical school, hoping to apply electronics to health care. After finishing the first 2 yr of medical school at the University of Wisconsin in Madison, I transferred to the Columbia University College of Physicians and Surgeons in New York, New York, to get more clinical experience. It was there that I began to gadgeteer.
Respiratory physiology, previously a minor discipline, suddenly became important when World War II pilots trying to fly higher than their enemies became hypoxic without pressurization, lost consciousness, and crashed. Physicist Glen Millikan (1906–1947) developed oximetry in 1940 as a hypoxia warning for pilots, but it became practical only after the invention and development of pulse oximetry in about 1980. The polio epidemics drove the development of artificial ventilation, which needed carbon dioxide analysis to answer the question of how much to ventilate a paralyzed patient. Hypothermia and cardiac bypass needed ways of ensuring adequate oxygenation and normal blood acid–base status. With my unusual mix of physics, electronics, physiology, and anesthesia training, I was able to contribute to these rapidly expanding fields of medical science. As is the nature of scientific discovery, some projects were successful and others flopped.
Electrophrenic Respirator: Translational Research Flop No. 1
In the fall of 1947, Stanley Sarnoff, Ph.D. (1917–1990), from Harvard School of Public Health in Boston, Massachusetts, visited Columbia and demonstrated that breathing could be driven by electrical stimulation of one phrenic nerve with a skin surface electrode. I decided to build an easily portable electrophrenic respirator. The physiology department provided space and tools. My paper describing it won the $500 Borden Student Research Award. Virginia Apgar invited me to try it on dying premature infants. The diaphragm moved, but air did not (fig. 1).2
Becoming a Reckless Resident
In the spring of 1951, I was considering postdoctoral research positions and visited four biophysics departments. Before making a decision, I met with Robert Dripps, M.D., at the University of Pennsylvania in Philadelphia. Within 5 min, he persuaded me that anesthesia would be the best field for me to apply electronics to medicine.
When I started the anesthesia residency, the only monitor was a blood pressure cuff. I teamed up with Peter Safar, M.D. (1924–2003), then in his second year of residency, and we tested a new relaxant, succinylcholine, on each other before using it in patients. Peter gave me 20 mg, causing instant apnea with my still mobile arm trying to grab the oxygen mask. I ached for a week from the fasciculations.
Measuring the Uptake of Nitrous Oxide during Anesthesia
Dripps had a teaching rule: no pentothal for the first 6 months of training. One had to learn how to induce anesthesia in an unwilling patient, using ether and nitrous oxide after morphine and scopolamine premedication, and how to deal with vomit. Most ear, nose, and throat procedures were performed without an endotracheal tube under nitrous oxide and ether. With enough morphine and 80% N2O, ether was not needed. One could tape the mouth shut, plug up the other nostril, and connect a nasal airway to the anesthesia machine. That trick allowed me to measure uptake of nitrous oxide.
Seymour Kety, M.D. (1915–2000), working with Carl F. Schmidt, M.D., Sc.D. (1893–1988), in Pharmacology at the University of Pennsylvania, devised the nitrous oxide method of measuring cerebral blood flow (CBF) and had recently published a major study of uptake and excretion of nitrogen, with application to other gases.3 I decided to measure how fast and how much nitrous oxide was absorbed into the body during routine anesthesia. I connected the airway of a preoxygenated, barbiturate-anesthetized patient to a 5-l spirometer prefilled with nitrous oxide, a closed circuit with a carbon dioxide absorber. I had to squeeze the sampling bulb of a Pauling paramagnetic oxygen analyzer, adjust both flowmeters to keep oxygen at 20% and spirometer volume constant, and write down flow rates and dry test gas meter volumes every few seconds. And, incidentally, I was caring for the anesthetized patient without a research consent form. I found that uptake declined with the square root of time, starting at nearly 1 l/min. The American Society of Clinical Investigation scheduled my abstract for a plenary session in April 1953, and I had the honor of being elected to the American Society of Clinical Investigation the next year.4
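For readers who want to see the arithmetic, the square-root-of-time uptake relation can be sketched in a few lines of Python. The constant of roughly 1 l/min at one minute is taken from the observation above; the exact fitted value varies among patients, so this is purely illustrative:

```python
import math

def n2o_uptake_rate_ml_per_min(t_min, k=1000.0):
    """Rate of nitrous oxide uptake (ml/min), declining with the
    square root of time; k is the rate at t = 1 min (~1 l/min,
    per the text). Illustrative only."""
    return k / math.sqrt(t_min)

def cumulative_n2o_uptake_ml(t_min, k=1000.0):
    """Total N2O absorbed by time t: the integral of k/sqrt(t),
    which is 2*k*sqrt(t)."""
    return 2.0 * k * math.sqrt(t_min)

print(n2o_uptake_rate_ml_per_min(1))    # 1000.0 ml/min at 1 min
print(n2o_uptake_rate_ml_per_min(100))  # 100.0 ml/min at 100 min
print(cumulative_n2o_uptake_ml(25))     # 10000.0 ml in the first 25 min
```

Note how slowly the rate falls: after 100 min the body is still absorbing a tenth of the initial rate.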
After 6 months of clinical work, I spent a research year in respiratory physiology, studying the carotid body pharmacology of papaverine in dogs and the effect of helium on human anatomic dead space with Julius Comroe at the University of Pennsylvania Graduate School of Medicine. Then the doctor draft caught me and I joined the US Public Health Service at the new NIH Clinical Center Hospital the day it opened, July 1, 1953. With 4 days a week for research, I stayed at NIH for 5 happy and productive years among brilliant colleagues.
A Transistorized Three-function Monitor: Translational Research Flop No. 2
For cardiac and brain surgery at the NIH, we cooled anesthetized patients to permit brief interruption of circulation. No electrical monitoring was permitted because we used flammable anesthetics. I built a transistorized monitor that displayed body temperature, beeped with each cardiac R wave, and provided heart and lung sounds from a modified esophageal catheter. Being a government employee, I could not patent it. Burdick Corporation (Milton, WI) agreed to market it. They liked the name I gave it, Telecor, but dumped my design and built a fingertip pulse monitor that chirped (fig. 2). It was useless.
Developing Blood Gas (Carbon Dioxide and Oxygen) Electrodes—Stow’s Carbon Dioxide Electrode: Translational Success No. 1
In the NIH laboratory, I studied hypothermic pulmonary function in dogs because an article by John Osborn, M.D., had claimed that cooling blocked carbon dioxide excretion.5 This seemed unlikely, and I suspected that the blood Pco2 values had not been corrected to body temperature. I decided to carefully measure Pco2 at body temperature and determine the effect of temperature on blood Pco2 in vitro. In 1953, analysis of blood pH, Po2, and Pco2 was laborious and inaccurate. Blood Pco2 had to be calculated from pH and plasma carbon dioxide content, measured by the Van Slyke manometric apparatus, using the Henderson–Hasselbalch equation. I spent a year perfecting and publishing improvements of the methods.6,7 The carbon dioxide excretion “block” in hypothermia disappeared when all these corrections had been made, demonstrating that lung function does not deteriorate with cooling. Osborn graciously acknowledged his error. While we had improved the measurement of Pco2, we could still only analyze 10 samples a day. We were soon rescued from that slow method by an invention.
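The calculation described here, deriving blood Pco2 from measured pH and total carbon dioxide content with the Henderson–Hasselbalch equation, can be sketched as follows. The conventional 37°C constants (pK′ = 6.1, CO2 solubility 0.0301 mmol/l/mmHg) are assumed; this is a sketch of the principle, not the exact correction procedure of the original papers:

```python
def pco2_from_ph_and_content(ph, total_co2_mmol_l, pk=6.1, sol=0.0301):
    """Derive Pco2 (mmHg) from pH and plasma total CO2 content.

    Henderson-Hasselbalch: pH = pK' + log10([HCO3-] / (sol * Pco2)).
    Total CO2 = [HCO3-] + dissolved CO2
              = sol * Pco2 * (10**(pH - pK') + 1),
    so Pco2 follows by rearrangement.
    """
    return total_co2_mmol_l / (sol * (1.0 + 10.0 ** (ph - pk)))

# Normal plasma: pH 7.40 and a total CO2 of ~25.2 mmol/l imply a
# Pco2 of about 40 mmHg.
print(round(pco2_from_ph_and_content(7.40, 25.2), 1))
```

Every term here had to be measured or looked up by hand in 1953, which is why ten samples a day was the practical limit.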
In August 1954, I went to Madison, Wisconsin, for the fall meeting of the American Physiologic Society, where I heard a report of an invention by Richard Stow, M.D., from the Department of Physical Medicine at Ohio State University Medical School in Columbus, that changed my life. Stow was involved in the iron lung care of polio patients at Ohio State. They needed rapid arterial Pco2 analyses. Stimulated by an article about ion-selective electrodes, he invented a Pco2 electrode (fig. 3).8 He knew that rubber is permeable to carbon dioxide but not acid ions. He wrapped a latex membrane over a combined pH and reference electrode, wet with distilled water. It responded to changes of carbon dioxide tension. He concluded his talk by saying that the electrode had a serious, incurable drift and could not be made useful. He had not patented the device because Beckman had declined his invitation to buy it. During the discussion, I asked whether he had considered adding bicarbonate to the film of water. Stow said bicarbonate would buffer the H+ so there would be no signal. I disagreed, being only too familiar with the Henderson–Hasselbalch equation. We agreed that I would try adding soda. A few days later, I assembled a Stow-type carbon dioxide electrode with 25 mM sodium bicarbonate and 0.1 M NaCl in the liquid film between a latex film and the pH-sensitive glass. It worked well and was stable. I notified Stow that I would build a working electrode and study it. He apparently dropped the project and 3 yr later published a short note on the invention and its problems before the bicarbonate idea.9
I built a cuvette for the carbon dioxide electrode and mounted it in a 37°C water bath. It needed a defined layer of electrolyte, for which I first used cellophane, between the latex and the pH glass surface. A 10-fold rise of Pco2 increased hydrogen ion activity 10-fold (a pH decrease of 1 unit), twice as much as with distilled water. Bicarbonate also buffered cations leaching from the pH glass. These modifications of Stow’s great idea cut analysis time from an hour to 2 min.
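The doubled sensitivity follows from simple equilibrium chemistry: with bicarbonate held constant, the Henderson–Hasselbalch relation gives a pH change of one unit per tenfold Pco2 change (slope 1), whereas in distilled water [H+] ≈ [HCO3−], so [H+]² is proportional to Pco2 and the slope is 0.5. A minimal numerical check (K1 = 10⁻⁶·¹ and a solubility of 0.0301 mmol/l/mmHg are the usual 37°C values; water autoionization is neglected for simplicity):

```python
import math

SOL = 0.0301      # CO2 solubility, mmol/l/mmHg
K1 = 10 ** -6.1   # apparent first dissociation constant of carbonic acid

def ph_fixed_bicarb(pco2, hco3=25.0):
    """pH when [HCO3-] (mmol/l) is held fixed, as in the
    bicarbonate electrolyte of the Stow-Severinghaus electrode."""
    return -math.log10(K1 * SOL * pco2 / hco3)

def ph_pure_water(pco2):
    """pH when H+ and HCO3- come only from dissolved CO2, as in
    Stow's original distilled-water film: [H+]^2 = K1 * sol * Pco2."""
    h = math.sqrt(K1 * SOL * pco2)
    return -math.log10(h)

# A 10-fold Pco2 rise drops pH by ~1.0 unit with bicarbonate,
# but only ~0.5 unit with distilled water.
print(ph_fixed_bicarb(10) - ph_fixed_bicarb(100))
print(ph_pure_water(10) - ph_pure_water(100))
```

The bicarbonate thus both doubled the signal and, by fixing the electrolyte composition, removed the drift Stow had thought incurable.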
Clark’s Oxygen Electrode: The Start of the Blood Gas Revolution
Leland Clark, Ph.D. (1918–2005), a liver enzyme research chemist at Antioch College, Yellow Springs, Ohio, had built a successful bubble-type blood oxygenator to perfuse livers.10 To measure Po2 in the oxygenator, he turned to polarography. Knowing that polarographic cathodes were poisoned by blood proteins, he covered a polarographic cathode with cellophane. It worked, but the signal was extremely sensitive to blood flow. He changed to a much less oxygen-permeable polyethylene membrane that reduced the flow sensitivity. Both of these covered cathodes used a reference electrode in contact with the blood. The polyethylene variation worked because of electrical leakage under the edges of the membrane. On October 4, 1954, Clark suddenly experienced an “Aha!” moment, realizing that he could make an electrically insulated polarographic sensor with cathode and reference electrode combined, permitting it to work in either air or liquid. He constructed and successfully tested one the same day, and promptly began a patent application.
Before 1956, physiologists used Riley’s bubble method for blood Po2 measurement, although it was slow, inaccurate, and useless over 95% saturation. In early 1956, I invited a group of respiratory physiologists to discuss Po2 measurement at the Federation of American Societies for Experimental Biology annual April physiology meetings in Atlantic City, New Jersey. At that meeting, Leland Clark announced and demonstrated his invention of a polarographic oxygen electrode.10 He had arranged to have it manufactured by Yellow Springs Instrument Company (Antioch, OH) (fig. 4). Many of us went home and ordered one. Clark’s stunning disclosure triggered the blood gas analysis revolution and launched a billion-dollar business.
Clark sold his patent to Beckman (Fullerton, CA) in 1955. Their lawyers rewrote it, claiming it covered all membrane-based electrochemical sensors, although they (but not Clark) knew, from Stow’s letter offering it to Beckman, that Stow’s carbon dioxide electrode invention preceded Clark’s patent. Sixteen years later, after much litigation in which Beckman sued a competitor, the court declared Beckman’s version of Clark’s patent fraudulent. Beckman never paid Clark for his valid patent. Later, Clark invented the glucose and lactate electrodes and a stable fluorocarbon emulsion artificial blood. Now, after 50 yr, every blood gas analyzer in the world contains a Clark-type polarographic oxygen electrode and a Stow–Severinghaus-type carbon dioxide electrode.
The Blood Gas Analyzer: Translational Success No. 2
With Clark’s approval, I decided to use his electrode and my modification of Stow’s carbon dioxide electrode in a blood gas analyzer. Because of its large platinum cathode, in unstirred blood Clark’s electrode signal drifted slowly down to less than half the actual Po2. I built a cuvette with a tiny stirring paddle, but blood still read 10–15% too low compared with gas calibration. I concluded that it had to be calibrated with blood of a known Po2, so I added a tiny tonometer to the thermostat. My design was constructed by the physiology department machine shop in Iowa City, Iowa, during my second year of residency with Stuart Cullen, allowing me to display the first blood Po2 and Pco2 analyzer at the fall American Society of Anesthesiologists meeting in 1957 (fig. 5), shortly after Comroe had recruited both Cullen and me to the University of California, San Francisco (UCSF).1
I added a pH electrode, making the first three-function blood gas analyzer in 1959. Beckman had redesigned Clark’s electrode, reducing the cathode diameter to 10 μm. This tiny cathode reduced so little oxygen that the stirring paddle and tonometer were no longer needed.
To avoid the need to create a liquid junction when measuring pH, I designed a pH electrode with built-in open liquid junction to a reference electrode. This open liquid junction method was patented by UCSF and is still used in most blood gas systems.11
After the first International Anesthesia History Association meeting in Rotterdam in 1982, Poul Astrup, M.D., Professor of Clinical Chemistry in Copenhagen, Denmark, invited me to coauthor his book on the history of blood gases, acids, and bases.12 He wanted me to begin my review with his work on Pco2 analysis during the 1950s polio epidemic in Copenhagen. My 200-page contribution on recent developments had to be cut to 31 pages, the full version finally appearing later.13
Central Carbon Dioxide Chemoreceptors and Altitude
The interest of anesthesiologists in high-altitude research derives from a need to safely study hypoxia, the most common cause of injury and death during and after anesthesia.
In the early 1960s, Robert Mitchell, M.D., my laboratory colleague at the Cardiovascular Research Institute at UCSF from 1958 to 1990, and Hans Loeschcke, M.D. (1912–1986), visiting us from the University of Goettingen, Germany, discovered the brain’s carbon dioxide sensor on the ventrolateral surface of the medulla of cats.14 These sensory cells respond to their extracellular fluid (ECF) pH, which is controlled by blood Pco2 and cerebrospinal fluid (CSF) bicarbonate. They do not respond to arterial blood acidosis. We thought that CSF acid–base regulation might differ from that of blood, which would explain some still mysterious effects seen during acclimatization to high altitude and after descent.
Stimulated by Nello Pace, Ph.D. (1916–1994), Professor of Physiology in Berkeley, California, in the 1950s the University of California had built several high-altitude facilities for research ranging from astronomy and the general biology of plants, trees, and animals to human acclimatization studies. These laboratories were located in a mountain range east of the Sierra and Owens Valley, just northeast of Bishop, near the bristlecone pines, the world’s oldest living things. In July 1962, four of us set up a study at the Pace-Barcroft Lab in the White Mountains at 3,810 m altitude. We volunteered to tap each other’s lumbar CSF, measuring pH and bicarbonate, first at sea level and then daily during a 4-day stay at the high-altitude laboratory. Blood and CSF pH and Pco2 analyses were performed with our portable, homemade blood gas apparatus, connecting a lumbar puncture needle directly to the electrodes to avoid loss of carbon dioxide.
The discovery from this first altitude trip was that CSF bicarbonate decreases within hours at altitude, whereas blood bicarbonate and base excess decrease slowly over the first week.15 We initially (and mistakenly) inferred that CSF pH must be regulated by active transport. That remains unproven; hypoxic lactic acid production and brain tissue strong ion buffering are the probable causes. We also assumed, incorrectly, that pH would be restored to its normal value with time. It is not, because the strong carotid body hypoxic drive at altitude is partly offset by continued CSF alkalosis.
Tom Hornbein, M.D., Professor of Anesthesia at the University of Washington in Seattle, joined us in 1964 as one of seven subjects and as investigator for a study of CBF at altitude. We sampled jugular venous blood while subjects breathed 15% N2O to measure CBF (fig. 6).16 CBF was increased approximately 25% on the initial altitude test, but decreased nearly to normal by the third to fifth day.
In 1964 in the Peruvian Andes at Cerro de Pasco, 4,389 m altitude, Cedric Bainton, M.D., from the Department of Anesthesia at UCSF, and I studied high-altitude natives’ respiratory responses to carbon dioxide and hypoxia. We found that in normal adult natives, carotid chemosensitivity to isocapnic hypoxia was only 26% of the sea level normal.17 In natives with a hematocrit higher than 70%, the level defined as chronic mountain sickness, hypoxic ventilatory response was only 11% of normal.18 We later showed that this blunting was nearly irreversible even in young people born at altitude but living for years at sea level.19 John Weil et al.20 showed that the blunting in adults was gradual over 5–20 yr at an altitude of 3,100 m.
The altitude studies suggesting active transport across the blood–brain barrier led me to spend 1964–1965 in Copenhagen, Denmark, with Hans H. Ussing (1911–2000), a world-renowned membrane transport expert, and with other Scandinavian researchers, especially with Niels Lassen in CBF studies, with Poul Astrup and Siggaard Andersen in acid–base analysis, and with the Radiometer Company (Copenhagen, Denmark) in the blood gas field. I also participated in teaching anesthesia-related physiology to third-world anesthesia students in the World Health Organization program in Copenhagen.
My work with Ussing failed to establish evidence for active transport regulation of CSF pH. During that year, Niels Lassen and I devised a simple experiment on ourselves to prove that CBF is regulated by the pH of the cerebral arteriolar ECF, not the pH or Pco2 of the brain tissue. The response time constant of CBF to a step fall of Pco2 produced by voluntary extreme hyperventilation was approximately 25 s, long enough to increase vascular wall pH, whereas the tissue washout time constant for carbon dioxide (and thus ECF pH) was approximately 2–4 min.21 We estimated CBF from the change in oxygen saturation of internal jugular venous blood as flow slowed with hypocapnia. An incidental effect of being a subject in our experiment was a unilateral paralysis of my tongue lasting 3 days after a difficult puncture by Lassen of my right internal jugular vein.
In 1972, Sørensen, Lassen, and I, with others, studied brain blood flow and metabolism in natives of La Paz, Bolivia, at 3,800–4,300 m altitude. Flow was approximately the same as at sea level, but acute oxygen breathing reduced the natives’ CBF by 11%, indicating that they live with continuous, significant ambient hypoxic vasodilation.22
Blood Gas Calculator “Slide Rule”
Discussions and research collaboration23 with Poul Astrup and Siggaard Andersen inspired me to design a slide rule to make blood gas calculations. Slide rules, now mostly forgotten, permit one to multiply and divide by adding logarithmic scales—just what blood gas computations needed. One side computes the human oxygen dissociation curve at known base excess, pH, and temperature, with several calculations often needed by respiratory physiologists and chemists; the other side calculates base excess from measured pH, Pco2, and hemoglobin and solves the Henderson–Hasselbalch equation (pH, Pco2, and bicarbonate).24 Few will remember this device, but I continue to find that it helps in doing old-fashioned respiratory physiology editing.
In 1925, Gilbert S. Adair (1896–?) had published an equation of the human oxygen dissociation curve with eight constants. Cambridge’s Francis John W. Roughton (1899–1972), Emeritus Plummer Professor of Colloid Science, had devoted years to trying to match these constants to actual data. He spent the summers of 1968 and 1969 working with us to define the extreme upper and lower ends of the human oxygen dissociation curve.25 Roughton was disappointed because accurate data did not support the Adair equation. However, using these data, I generated a far simpler and more accurate cubic human oxygen dissociation curve equation (fig. 7).25,26
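The cubic equation referred to here is usually quoted, from the 1979 paper, as S = (23,400/(Po2³ + 150·Po2) + 1)⁻¹ at pH 7.40 and 37°C. A minimal sketch; the published version also supplies corrections for temperature, pH, and base excess, which are omitted here:

```python
def o2_saturation(po2_mmhg):
    """Severinghaus's simple cubic O2 dissociation curve:
    S = 1 / (23400 / (P^3 + 150*P) + 1), P in mmHg, at standard
    conditions (pH 7.40, 37 C). Returns fractional saturation."""
    p = po2_mmhg
    return 1.0 / (23400.0 / (p ** 3 + 150.0 * p) + 1.0)

# Half-saturation (P50) lands near the canonical ~26.8 mmHg,
# and a normal arterial Po2 of 100 mmHg gives ~97.7% saturation.
print(o2_saturation(26.8))
print(o2_saturation(100.0))
```

Despite its simplicity, this single-expression curve fits the Roughton data better than the eight-constant Adair form did, which is why it remains in wide use for physiologic computation.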
The easy availability of blood gas analysis and the introduction of the new “base excess” and later “standard base excess” terminology by Siggaard Andersen led to controversy, which came to be called “the great transatlantic acid–base debates,”27 and to several competing descriptions of acid–base abnormalities, such as strong ion difference and anion gap. Eventually, Robert Schlichtig, M.D., Alan Grogono, M.D., and I showed that all acid–base schemes are correct, although only standard base excess correctly expresses the acid–base status of whole-body ECF.28 Grogono devised the most useful graphic acid–base chart, sometimes called a “grogogram,” a plot of standard base excess versus Pco2. His interactive Web site† lets one perform acid–base calculations and obtain probable diagnoses.
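As a concrete illustration of what the grogogram plots, standard base excess can be computed from pH and Pco2. The sketch below uses one commonly quoted form of Siggaard-Andersen’s Van Slyke equation for ECF; the coefficients vary slightly among published fits and analyzers, so treat the numbers as approximate:

```python
def bicarbonate(ph, pco2, pk=6.1, sol=0.0301):
    """Plasma [HCO3-] (mmol/l) from pH and Pco2 (mmHg) via the
    Henderson-Hasselbalch equation."""
    return sol * pco2 * 10.0 ** (ph - pk)

def standard_base_excess(ph, pco2):
    """Standard base excess (mmol/l) of extracellular fluid.

    One commonly quoted form of the Van Slyke equation (an
    approximation; exact coefficients differ among sources):
      SBE = 0.9287 * ([HCO3-] - 24.4 + 14.83 * (pH - 7.40))
    """
    hco3 = bicarbonate(ph, pco2)
    return 0.9287 * (hco3 - 24.4 + 14.83 * (ph - 7.40))

# Normal blood (pH 7.40, Pco2 40 mmHg) gives an SBE near zero;
# acidemia at a normal Pco2 gives a markedly negative SBE.
print(standard_base_excess(7.40, 40.0))
print(standard_base_excess(7.20, 40.0))
```

A grogogram is then simply this function's output plotted against Pco2, separating the metabolic axis (SBE) from the respiratory one (Pco2).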
Transcutaneous Monitoring of Po2 and Pco2: Translational Success No. 3
In the 1960s, George Gregory and colleagues at UCSF had introduced positive end-expiratory pressure and continuous positive airway pressure, the famous positive pressure treatments of premature infants with atelectatic lungs. It was soon apparent that these infants were blinded by excessive oxygen. Transcutaneous blood gas monitoring was developed primarily to avoid oxygen-induced retrolental fibroplasia. A group led by Dietrich Lübbers, Ph.D. (1915–2000),29 at the Department of Physiology, University of Marburg, Germany, showed that skin surface Po2 under an oxygen electrode heated to 44°C accurately monitored arterial Po2. I assembled a group for further studies, leading to a published international conference on these monitors.30 I developed a transcutaneous Pco2 electrode and showed a way to calibrate it with a known gas Pco2 to make it read arterial Pco2 (fig. 8).31 I found that it was both chemically and electrically possible to combine the oxygen and carbon dioxide electrodes under a single membrane. My son Ed and I designed and built 10 prototype combined oxygen–carbon dioxide electrode monitor controllers used for a multi-institutional test of accuracy sponsored by the Radiometer Co.32 This became the combined transcutaneous blood gas monitor, now called TINA (Radiometer Co.).
Mass Spectrometer Multi–Operating Room End-tidal Anesthetic and Respiratory Gas Monitoring
Between 1975 and 1985, my anesthesiology colleague Gerald Ozanne, M.D., technician Bill Young, and I developed a multiplexed mass spectrometry system for use in the UCSF Moffitt Hospital’s 10–operating room suite. A single central mass spectrometer sequentially analyzed, through long sampling catheters, the inspired and expired gases from each of the operating rooms, providing the anesthesiologists with minute-by-minute analyses (fig. 9).33 Ed worked with me in the design and construction of the calibration and control unit. We also invented an automated calibrator that periodically created known concentrations by bubbling oxygen through the three liquid anesthetics in widespread use. The mass spectrometer terminal in each operating room became a communication system before the Web. It permitted Bill Young to monitor each operating room after he moved to New York. He called me one day, worried about operating room 5. I called the attending and said, “New York wants to know why your patient in room 5 has a Pco2 of 80.” He said he was just trying to get the patient, who was still partly paralyzed, to start breathing.
Following our designs, similar systems made by two companies were installed in hundreds of hospitals in the 1980s. Weaknesses of these systems included their high maintenance costs, the need for trained technical personnel, and the possibility that a failure of the mass spectrometer or multiplexer would affect all the operating rooms in an installation. These systems were abandoned in favor of simpler stand-alone anesthetic and respiratory gas monitors that became available around 1995—a short-lived translational success.
Human Oximetry Laboratory
In 1985, I established at UCSF a Human Research Laboratory to test in volunteers, mostly UCSF students, the accuracy of pulse oximeters in hypoxia down to 55% Sao2, because errors and differences between manufacturers only showed up at low saturation, especially in anemic individuals.34 Our many studies helped approximately 30 manufacturers to improve their devices and provided data for US Food and Drug Administration approval. In thousands of these tests over 23 yr, no one has complained or been injured by hypoxic testing, occasionally to less than 50%. Testing is still being done approximately twice a week (fig. 10).
Anesthesia research has been generously supported for most of the past 50 yr, primarily by the NIH. The vision of our leaders, especially Emanuel Papper, M.D., Ph.D. (1915–2002), in creating and maintaining such support has made it possible for a small number of researchers to spend nearly full time in the laboratory. The most publicly evident contribution of research has been a nearly 1,000-fold improvement in the safety of anesthesia. Research has provided the tools and agents we use, transformed teaching methods and content, and made the experience of anesthesia less traumatic. The quality of young anesthesiologists has steadily grown as the esteem of the profession has flourished and research has penetrated every aspect of our activities. The current atrophy of academic medical funding threatens the remarkable productivity of anesthesia research.
In medical school and later on as a postdoctoral fellow, when I was considering anesthesia as a specialty, a colleague tried to dissuade me by saying that I would be wasting my intellectual life. Sixty years ago, anesthesia was not regarded as a scholarly pursuit by academic medicine—it is now. I submit that research was largely responsible for transforming anesthesia from a surgical service into its central role in academic medicine and its many roles in today’s health care.
Gadgeteering has been great fun, even when of little use. In translational science, failure is far more common than success. According to a recent report,35 “Of 101 very promising claims of new discoveries with clear clinical potential … only 5 …[became] licensed for clinical use [in 25 yr], and only one had extensive clinical use.” The median time lag between discovery and approved clinical use was 17 yr for the few successful innovations.35 By this standard, my colleagues and I have been very lucky. But, dear reader, beware: “When technology is master, we shall reach disaster faster.”36
To Robert D. Dripps, M.D., I owe the choice of a career in anesthesiology research rather than in nonclinical biophysics. To Julius H. Comroe, M.D. (1911–1984), and Stuart Cullen, M.D. (1909–1979), I owe my privileged 45 yr of essentially full-time research at the University of California, San Francisco, participating in its transformation to the best biomedical research institution in the world. My debt is huge to my many associates, but particularly to Robert Mitchell, M.D. (Department of Medicine and Anesthesia, University of California, San Francisco, California), Edmond I. Eger II, M.D. (Department of Anesthesia, University of California, San Francisco), A. Freeman Bradley, B.A. (University of California, San Francisco), and three Danes: Poul Astrup, M.D. (Copenhagen, 1915–2000), Ole Siggaard Andersen, M.D., Ph.D. (Professor Emeritus, Clinical Biochemistry, University of Copenhagen, Denmark), and Niels Lassen, M.D. (Professor of Clinical Physiology, University of Copenhagen, 1926–1997). For 30 yr, I was supported by a National Institutes of Health Research Career Award.
I thank Monica Nicosia, Ph.D. (Biomedical Writer, Bryn Mawr, Pennsylvania), for her assistance in preparing the manuscript.