It is common clinical practice to administer reduced doses of opioids to patients suffering from hemorrhagic shock to minimize adverse hemodynamic consequences and to prevent prolonged opioid effect. However, the scientific foundation supporting this practice is not well established. The aim of this study was to test the hypothesis that hemorrhagic shock alters both the distribution and clearance of opioids, using fentanyl in a porcine isobaric hemorrhage model.
Eighteen pigs were randomized to shock or control groups. The animals in the shock group were subjected to hemorrhage using an isobaric method. Pigs in both groups received fentanyl (50 µg/kg) intravenously over 5 min. Frequent arterial blood samples were obtained for fentanyl radioimmunoassay. Each animal's pharmacokinetic parameters were estimated by fitting a three-compartment model to the concentration versus time data. Nonlinear mixed-effects population pharmacokinetic models examining the influence of mean arterial pressure and cardiac index were also constructed. Clinical simulations using the final population model were performed.
The shock cohort reached substantially higher fentanyl concentrations. The shock group's central clearance and central- and second-compartment distribution volumes were significantly reduced. The most useful population model scaled all pharmacokinetic parameters to mean arterial pressure. The simulations illustrated that hemorrhagic shock results in higher fentanyl concentrations for any given dosage scheme.
The essential finding of the study is that fentanyl pharmacokinetics are substantially altered by hemorrhagic shock. The reduced opioid requirement commonly observed during hemorrhagic shock is at least partially attributable to pharmacokinetic mechanisms.
This article is featured in "This Month in Anesthesiology." Please see this issue of Anesthesiology, page 7A.
It is common clinical practice to reduce the dose of intravenous anesthetic agents in patients suffering from hemorrhagic shock. The clinical rationale for this practice is that reducing anesthetic doses will prevent hemodynamic depression and prolonged anesthetic effect. However, the scientific foundation supporting this clinical tradition is not well established. There is little experimental work providing information about the disposition and action of drugs, including anesthetics and opioids, during hemorrhagic shock.
In theory, hemorrhagic shock could alter the pharmacokinetic disposition of intravenous anesthetics in a variety of ways. Shock, by definition, is inadequate tissue perfusion resulting in anaerobic cellular metabolism and lactic acidemia. This primary cellular pathology inevitably leads to secondary compensatory mechanisms such as redistribution of tissue blood flow, increased sympathetic nervous system activity, and alterations in body water distribution. 
These shock-induced changes obviously impact many physiologic processes that are relevant to pharmacokinetics, including metabolic organ function and blood flow, cardiac output, and protein synthesis.  Thus, the entire spectrum of pharmacokinetic processes potentially could be influenced by shock, including drug distribution, biotransformation, excretion, and protein binding.
Currently, there is little scientific basis for developing an opioid dosing strategy in patients suffering from acute traumatic or surgical hemorrhagic shock. Although clinicians readily accept the notion that hemorrhagic shock alters pharmacokinetics, more detailed knowledge about how drug clearance and distribution are altered is necessary before truly rational dosing recommendations can be made.
The aim of this study was to test our hypothesis that the distribution and clearance of fentanyl would be decreased in a porcine isobaric hemorrhage model. In its broadest sense, the study was intended to elucidate whether the decreased opioid dosage requirement associated with shock has a pharmacokinetic mechanism.
Materials and Methods
Enrollment, Instrumentation and Data Gathering
Experiments were performed on commercial farm-bred pigs of either sex. The study was approved by the Institutional Animal Care and Use Committee at the University of Utah. Eighteen Hampshire-Yorkshire cross-bred pigs were randomly assigned to either control or shock groups.
The animals were fasted, except for ad libitum water, for 12 h before anesthetic induction. Anesthesia was induced intramuscularly with ketamine (10 mg/kg), acepromazine (10 mg), and atropine (2 mg). The animals' tracheas were intubated, and the lungs were mechanically ventilated with isoflurane (1%) in oxygen (100%), keeping the PaCO2 between 35 and 40 mmHg. An intravenous catheter was placed in an ear vein, and lactated Ringer's solution was infused at a rate of 1 ml · kg⁻¹ · h⁻¹ using an intravenous infusion pump. Neuromuscular block was provided with pancuronium bromide and tubocurarine (1:1 mixture) as needed.
A femoral artery was cannulated to collect blood samples and to measure mean arterial pressure (MAP), hematocrit, blood gases, and lactate levels. A pulmonary artery catheter was placed via a jugular vein to measure central venous pressure, pulmonary artery pressure, pulmonary capillary wedge pressure, and cardiac output and to collect venous blood gases. A catheter was placed in the aorta via a carotid artery to obtain blood for fentanyl assay and for bleeding. A gastric tonometer was inserted into the stomach to measure gastric intramucosal pH (pHi). Lead 2 of the electrocardiogram was used to measure heart rate. Oxygen saturation was monitored with a pulse oximeter. Temperature was measured in the pulmonary artery and was maintained between 36°C and 37.5°C.
Thirty minutes after the initial instrumentation, baseline values of heart rate, MAP, central venous pressure, pulmonary capillary wedge pressure, cardiac output, pHi, temperature, hematocrit, lactate, and arterial and venous blood gases were recorded. Cardiac index (CI) and oxygen delivery (DO2) values were calculated. These parameters (except pHi and venous blood gases) were recorded every 30 min until 3 h after drug infusion and every hour for an additional 3 h. In the control group, pHi and venous blood gases were measured 2.5 h after drug infusion, and in the shock group, pHi and venous blood gases were measured after establishing hemorrhagic shock and 2.5 h after drug infusion.
Pigs in the shock group were subjected to hemorrhagic shock using a modification of Wiggers' isobaric method. [3,4] Before inducing hemorrhage, 5,000–6,000 U heparin was administered intravenously. Blood was collected in heparinized bags. The animals were bled until the MAP was reduced to 40 - 45 mmHg. This MAP was maintained throughout the study. A bolus of 200 ml lactated Ringer's solution was administered if the MAP was less than 35 mmHg for more than 5 min and was repeated after 5 min if the target MAP was not restored. If the intravenous fluid boluses did not restore the MAP to the target pressure, the heparinized shed blood was transfused in 50-ml aliquots. The hemodynamic and metabolic consequences of the hemorrhagic shock protocol were frequently monitored by measuring cardiac output, hematocrit, arterial pH, pHi, and blood lactate.
Fentanyl (50 µg/kg) was infused intravenously over 5 min in both groups using a motorized infusion pump. The shock group received the fentanyl after the target MAP of 40 mmHg had been maintained for 1 h.
Blood Sample Processing and Concentration Assay
Blood samples were collected from the aortic catheter before drug administration (time 0) and at 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 100, 130, 160, 190, 220, 250, 280, 310, 340, and 370 min after drug infusion began. The plasma was separated from the erythrocytes and frozen at less than −10°C until the time of assay.
Fentanyl concentrations were measured by a radioimmunoassay technique modified from that described by Schuttler and White. [5,6] The fentanyl quantitation limit was 0.1 ng/ml with a paired aliquot coefficient of variation of less than 15% for concentrations greater than 0.1 ng/ml.
The raw concentration versus time data were analyzed using several techniques. First, each animal's pharmacokinetic parameters were estimated. These individual parameter estimates were then plotted against several indices of shock (i.e., subject covariates) to identify relationships that might be used to improve the final population model. A mixed-effects population approach based on NONMEM software was then used to build the final population model incorporating subject covariates. Finally, computer simulations, including the context-sensitive half-time, were completed to bring clinical meaning to the mathematically based pharmacokinetic analysis. Linear pharmacokinetics were assumed for the purpose of this analysis.
Individual Compartmental Analysis
Using the "two-stage" approach implemented in NONMEM, a three-compartment mamillary model was fit to the raw concentration versus time data to estimate each subject's pharmacokinetic parameters. The triexponential disposition equation was parameterized in terms of clearances and apparent distribution volumes. Because the magnitude of the errors between the measured concentrations (Cm) and the concentrations predicted by the model (Cp) was presumed to be proportional to the predicted concentration, a proportional (1/Cp²) variance model was used for each fit.
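The clearance–volume parameterization of a three-compartment mammillary model can be illustrated with a small numerical simulation. This is a minimal sketch, not the authors' fitting code; the function name, the Euler step size, and all parameter values in the example are hypothetical.

```python
def simulate_3cmt(dose_rate_ug_min, infusion_min, total_min,
                  V1, V2, V3, CL1, CL2, CL3, steps_per_min=100):
    """Euler integration of drug amounts (ug) in a three-compartment
    mammillary model. CL1 is the elimination (central) clearance;
    CL2 and CL3 are the intercompartmental distribution clearances,
    all in l/min. V1, V2, V3 are apparent volumes in liters.
    Returns central-compartment concentrations (ug/l) sampled once
    per minute. total_min and infusion_min are integers (minutes)."""
    dt = 1.0 / steps_per_min
    a1 = a2 = a3 = 0.0          # drug amounts in each compartment
    conc = []
    for i in range(total_min * steps_per_min):
        t = i * dt
        c1, c2, c3 = a1 / V1, a2 / V2, a3 / V3
        rate_in = dose_rate_ug_min if t < infusion_min else 0.0
        # Mass balance: input, elimination, and intercompartmental flux
        a1 += (rate_in - CL1 * c1 - CL2 * (c1 - c2) - CL3 * (c1 - c3)) * dt
        a2 += CL2 * (c1 - c2) * dt
        a3 += CL3 * (c1 - c3) * dt
        if (i + 1) % steps_per_min == 0:
            conc.append(a1 / V1)
    return conc
```

The triexponential disposition function used in the fits is the analytical solution of this same system; the numerical version simply makes the role of each clearance and volume explicit.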
The population parameters from this two-stage approach for both the shock and control groups were calculated by averaging the values obtained from the individual fits. This method is called the two-stage approach because the analysis proceeds in two stages. Pharmacokinetic parameters are first estimated for each individual by nonlinear regression, and these individual estimates are subsequently averaged to obtain the mean two-stage population estimates. 
The two-stage pharmacokinetic parameters from the shock and control groups were contrasted graphically and tested for significant differences using a nonparametric, two-tailed test (Mann–Whitney rank sum test). Statistical significance was defined as a P value of less than 0.05.
Exploration of Parameter-Covariate Relationships
The individual subject pharmacokinetic parameter estimates from the two-stage analysis were regressed independently on each covariate as advocated by Maitre et al.  MAP and CI were the covariates examined (using the average values during the drug administration period). These linear regressions were completed both through the origin and also with an intercept term. The goal of this step was to identify relationships that might eventually be included in the final NONMEM population model. This step was also intended to help characterize the shape of these relationships between model parameters and the covariates.
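The covariate screen described above amounts to a set of simple linear regressions, fit both through the origin and with an intercept term. The helper below is an illustrative sketch (ordinary least squares), not the analysis code used in the study; the data in the usage example are hypothetical.

```python
def fit_line(x, y, intercept=True):
    """Ordinary least squares for y = a + b*x (or y = b*x when
    intercept=False, i.e., a regression through the origin).
    Returns the pair (a, b)."""
    n = len(x)
    if intercept:
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        a = my - b * mx
    else:
        # Through-origin slope: minimize sum((y - b*x)^2) over b
        b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
        a = 0.0
    return a, b
```

In this setting, x would be a subject covariate such as MAP or CI and y an individual parameter estimate such as central clearance; comparing the through-origin and intercept fits helps characterize the shape of the parameter–covariate relationship.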
Nonlinear Mixed-Effects Model Analysis
In contrast to the two-stage approach, wherein the population pharmacokinetic model (i.e., the pharmacokinetic parameters intended to represent the entire population) is obtained by averaging the parameters estimated from individuals, NONMEM simultaneously analyzes the data of the entire population and provides estimates of the typical parameter values along with an estimate of their interindividual variability within the population studied.
Interindividual error on each parameter was modeled using a log-normal error model:

θindividual = θtypical × exp(ηindividual)

where θindividual is the true value in the individual, θtypical is the population mean estimate, and ηindividual is a random variable whose distribution is estimated by NONMEM, with a mean of zero and a variance of ω². The estimates of ω obtained with NONMEM are similar to the coefficient of variation often used in standard descriptive statistics. Residual intraindividual error was modeled assuming a constant coefficient of variation.
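The log-normal interindividual model can be sketched numerically: individual parameter values are the typical value multiplied by exp(η), with η drawn from a normal distribution of mean zero and standard deviation ω. The typical clearance and ω below are illustrative values only, not study estimates.

```python
import math
import random

def sample_individual(theta_typical, omega, rng):
    """Draw one individual's parameter value under the log-normal
    interindividual model: theta_typical * exp(eta), eta ~ N(0, omega^2)."""
    eta = rng.gauss(0.0, omega)
    return theta_typical * math.exp(eta)

rng = random.Random(0)            # fixed seed for reproducibility
cl_typical = 1.0                  # hypothetical typical clearance, l/min
omega = 0.3                       # roughly a 30% coefficient of variation
samples = [sample_individual(cl_typical, omega, rng) for _ in range(10000)]
```

Note that the log-normal form guarantees positive parameter values, and for small ω the standard deviation of the η distribution is close to the coefficient of variation of the parameter itself, which is why ω is often reported that way.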
A three-compartment mamillary model without covariates was fit to the fentanyl concentration versus time data with NONMEM using the "first-order conditional estimation" method and the "η–ε interaction" option. Model parameterization and initial parameter estimates were identical to those used with the two-stage approach.
Model Expansion with Covariate Effects
After obtaining the best NONMEM model without covariates, the influence of MAP and CI on the model were examined. Guided by the initial regression analysis exploring the relationship between model parameters and patient covariates, the final model was built using a stepwise approach in which individual covariate effects on each model parameter were incorporated into the model, and the resulting expanded model was examined for significant improvement. A -2 times the log likelihood change of at least 4 was viewed as sufficient justification to include an additional parameter in the model (in the form of a covariate or a covariate plus a constant that represented the addition of two model parameters). A total of 70 different models were tested. The various models were tested both forward (starting with no covariates) and backward (starting with all covariates) to confirm that the observed improvement was not a result of covariate correlation.
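The inclusion criterion used during the stepwise search can be expressed as a small helper: a candidate covariate effect is retained only if it lowers the objective function (−2 × log likelihood) by roughly 4 points per added parameter, comparable to the chi-square 1-df critical value of 3.84 at P = 0.05. This is an illustrative sketch, not NONMEM code; the function name and objective function values in the test are hypothetical.

```python
def keep_covariate(objfn_without, objfn_with, added_params=1):
    """Illustrative stepwise inclusion rule: keep a covariate effect
    only if the -2*log-likelihood objective function falls by at
    least 4 points for each parameter the effect adds."""
    improvement = objfn_without - objfn_with
    return improvement >= 4.0 * added_params
```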
The performance of the various population models constructed by NONMEM was assessed in terms of their ability to predict the measured blood concentrations. This was accomplished quantitatively by computing the weighted residuals (WRs). A WR is the difference between a measured concentration (Cm) and the corresponding predicted concentration (Cp), expressed as a fraction of Cp. Thus, WR can be defined as:

WR = (Cm − Cp) / Cp
Using this definition, the WRs for all the NONMEM population models tested were computed at every measured data point.
Making use of the WR calculations, the overall inaccuracy of each model was determined by computing the median absolute WR (MDAWR), defined as:

MDAWR = median{|WR1|, |WR2|, …, |WRn|}

where n is the total number of samples in the study population. Using this formula, the MDAWR was computed for each population model constructed by NONMEM. The median WR, a measure of model bias, was also computed for each model. The performance of the models was also assessed visually by plotting Cm/Cp versus time and examining the plots for accuracy and bias.
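The goodness-of-fit metrics above reduce to a few lines of code: WR for each observation, the median absolute WR (inaccuracy), and the median WR (bias). The sketch below is illustrative; the concentration values in the test are hypothetical.

```python
def weighted_residuals(measured, predicted):
    """WR = (Cm - Cp) / Cp for each paired observation."""
    return [(cm - cp) / cp for cm, cp in zip(measured, predicted)]

def median(values):
    """Median of a list (midpoint average for even-length lists)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def mdawr(measured, predicted):
    """Median absolute weighted residual: overall model inaccuracy."""
    return median([abs(wr) for wr in weighted_residuals(measured, predicted)])

def mdwr(measured, predicted):
    """Median weighted residual: overall model bias (sign preserved)."""
    return median(weighted_residuals(measured, predicted))
```

Medians rather than means are used so that a few grossly mispredicted samples do not dominate the performance summary.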
Computer simulations using the two-stage pharmacokinetic parameters were performed to illustrate the clinical implications of the pharmacokinetic analysis when applied to shock and control animals. The first simulation predicts the plasma concentrations that result from a typical fentanyl dosing regimen (a 100-µg bolus injection, followed by a 50-µg bolus injection 20 min later and a 2.5 µg · kg⁻¹ · h⁻¹ infusion for 60 min), contrasting the levels obtained in shock and control animals. For this simulation, the animal was assumed to weigh 70 kg (for dosage calculations).
The second simulation predicts the time necessary to achieve 50% and 80% decreases in plasma concentration after termination of a variable-length infusion targeted to a constant drug concentration. These simulations, referred to as the context-sensitive half-time (or 50% decrement time) and the 80% decrement time, [11,12] are based on Euler's solution to the two-compartment model with a step size of 1 second.
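A decrement-time simulation of this kind can be sketched as follows: hold the central concentration at a constant target for the infusion duration (as a target-controlled infusion would), stop the input, and integrate forward until the central concentration has fallen by the desired fraction. This is a simplified illustration using Euler integration of a three-compartment model, not the study's simulation code; all parameter values in the test are hypothetical.

```python
def decrement_time(infusion_min, fraction,
                   V1, V2, V3, CL1, CL2, CL3, steps_per_min=100):
    """Minutes for the central concentration to fall by `fraction`
    (0.5 -> context-sensitive half-time; 0.8 -> 80% decrement time)
    after an infusion that clamped the central concentration at a
    constant target for `infusion_min` minutes (integer)."""
    dt = 1.0 / steps_per_min
    target = 1.0                       # arbitrary units; kinetics are linear
    a1, a2, a3 = target * V1, 0.0, 0.0
    # Infusion phase: central concentration held at target while the
    # peripheral compartments fill by intercompartmental clearance.
    for _ in range(infusion_min * steps_per_min):
        a2 += CL2 * (target - a2 / V2) * dt
        a3 += CL3 * (target - a3 / V3) * dt
    # Decay phase: no drug input; track time until threshold crossed.
    t = 0.0
    while a1 / V1 > (1.0 - fraction) * target:
        c1, c2, c3 = a1 / V1, a2 / V2, a3 / V3
        a1 += (-CL1 * c1 - CL2 * (c1 - c2) - CL3 * (c1 - c3)) * dt
        a2 += CL2 * (c1 - c2) * dt
        a3 += CL3 * (c1 - c3) * dt
        t += dt
    return t
```

The context sensitivity arises from the peripheral compartments: the longer the infusion, the fuller they are, and the more redistribution back into the central compartment slows the post-infusion decline.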
Results
Enrollment, Instrumentation and Data Gathering
A total of 18 pigs were entered into the study, two of which were excluded from data analysis. One of the excluded pigs developed hyperthermia during the experiment, its temperature reaching up to 40°C, and the other pig had unexplained hypotension with high cardiac output and elevated lactate values before hemorrhage.
Three pigs did not complete the entire experiment. One pig in the control group died of accidental air embolism through the aortic catheter, and the other two pigs in the shock group experienced severe hypotension after the fentanyl infusion. The data gathered on these pigs before death were included in the analysis.
All animals were between 5 and 10 months of age and weighed an average of 72.2 ± 8.2 kg. A mean of 1,906 ± 459 ml of blood was removed to achieve the targeted MAP in the hemorrhagic shock group. Table 1 shows the average cardiovascular and shock variables measured in both groups just before drug infusion. The values displayed in Table 1 (both averages and variances) are also representative of the measurements made during drug infusion, except that the CI and heart rate decreased slightly in each group.
The infusion scheme applied in this protocol resulted in concentration versus time curves characteristic of brief infusions. The raw concentration versus time data are shown in Figure 1. The shock subject cohort reached substantially higher peak concentrations and showed higher concentrations throughout much of the experiment.
Individual Compartmental Analysis
The raw concentration versus time data were adequately described by a three-compartment model. The individual parameter estimates for each cohort are displayed in Table 2.
Comparison of the absolute volumes and clearances (i.e., not weight normalized) from the shock and control groups showed a number of substantial differences, which were statistically significant by the rank sum test (Table 2). Central clearance was notably lower in the shock group, as were the volumes of the central and second peripheral compartments.
Exploration of Parameter-Covariate Relationships
Plots of the individual parameter estimates versus the covariates revealed some important relationships. In particular, there was a strong correlation between central clearance and both MAP and CI. The results of these linear regressions, including the coefficients of determination (r²) and P values, are displayed in Table 3. The two strongest relationships are plotted in Figure 2.
Nonlinear Mixed-Effects Model Analysis
The NONMEM population model parameter estimates reflect a midrange of the shock and control groups' two-stage results. The NONMEM parameters are displayed in Table 4.
Model Expansion with Covariate Effects
Of the 70 models tested, the best-performing model in terms of MDAWR and median WR scaled central clearance to CI as suggested by the initial exploration of parameters versus covariate relationships. Alternatively, the best-performing model in terms of the NONMEM objective function value (and perhaps the most practically useful model because its covariate is easily measured) scaled all parameters to MAP with a constant. The typical parameter values for the expanded NONMEM models, including the effect of MAP and CI, are shown in Table 4.
Addition of these covariate effects to the unscaled NONMEM model resulted in an improvement in the objective function values and also in the MDAWR and the median WR. These results, including the MDAWR 10th and 90th percentile values, are shown in Table 5. Plots of Cm/Cp for the unscaled model and for one of the expanded NONMEM population models (scaling all parameters to MAP with a constant) are shown in Figure 3.
The results of several other covariate models that were tested deserve mention. Models that scaled all pharmacokinetic parameters to CI also performed well. In addition, models that scaled only central clearance to MAP (with a constant) or CI (without a constant) performed slightly better than the model that scaled all parameters by MAP or CI. We favored the model scaling all parameters to MAP because models that scale only one or several parameters (but not all) to a covariate can only be implemented on a computer-controlled infusion pump.  Moreover, it can be argued that MAP is a preferred covariate compared with CI because it can be measured repeatedly in a noninvasive way. The parameter values and goodness-of-fit measures for these other models are shown in Table 4 and Table 5.
The simulation examining the concentration versus time profiles that result from a typical dosage scheme in shock versus control subjects suggests that shock subjects received a relative overdose compared with controls. As shown in Figure 4, shock subjects achieved higher concentrations than the control subjects for a typical dosing scheme.
The context-sensitive half-time simulations (50% decrement time) and the 80% decrement time simulations indicate that the pharmacokinetics of fentanyl during infusion will be substantially altered by the shock state. As shown in Figure 5, for both the 50% and 80% decrement times, the values for shocked subjects are substantially longer than those of normal subjects, particularly for infusions lasting longer than 200 min. This implies that fentanyl is indeed longer-acting in the shocked subject cohort. Interestingly, the context-sensitive half-time (50% decrement time) was not different between the shock and control groups until after approximately 100 min. It should be noted that these simulations are based on computer-controlled drug delivery and, therefore, a dosage adjustment for the shock group (based on the shock kinetic model) is assumed.
Discussion
We applied an isobaric hemorrhage method in a porcine model to examine the effects of hemorrhagic shock on opioid pharmacokinetics (using fentanyl). The essential findings of the study are that hemorrhagic shock results in a significant reduction in fentanyl central clearance, central distribution volume, and the volume of the second peripheral compartment compared with control subjects. These findings are consistent with our hypothesis that hemorrhagic shock alters opioid pharmacokinetics, resulting in higher plasma concentrations for any given dosage scheme.
Inspection of the raw data provides the most intuitively digestible confirmation of our study hypothesis. The shock group showed higher fentanyl concentrations throughout the study. The higher peak concentrations and slower concentration decline later in the study are pronounced.
The pharmacokinetic modeling analysis techniques also confirmed the study hypothesis. Central clearance and central distribution volume from the two-stage pharmacokinetic analysis were substantially different between the two groups. The difference in central clearance between the shock and control groups was particularly marked. The fact that the NONMEM population model performed rather poorly but was substantially improved by the inclusion of shock covariates (i.e., hemodynamic indicators of shock; CI and MAP) also supports the study hypothesis. Scaling clearance to MAP or CI improves the NONMEM population model significantly.
The pharmacokinetic simulations are the most clinically meaningful expression of the study findings. The clinical dosing simulation demonstrates that identical doses in shock animals will presumably result in more pronounced effect that persists longer. Similarly, the 50% and 80% decrement time simulations demonstrate that fentanyl is longer-acting in the shock animals even when a dosage adjustment is made (assuming that the plasma concentrations correlate with drug action).
Several substantial limitations of our study deserve emphasis. Perhaps most importantly, we did not investigate how shock may (or may not) alter the pharmacodynamics of fentanyl. It is impossible to interpret pharmacokinetic data fully without knowledge of the concentration-effect relationship. Because concentration-effect relationships (i.e., pharmacodynamics) are often highly nonlinear, the impact of pharmacodynamic changes on the overall pharmacologic behavior of a drug can be huge. Our experimental design did not permit any speculation regarding pharmacodynamic alterations of shock.
Another obvious drawback of the study is the inherent limitations of an animal model. Although, in general, pigs are thought to be pharmacologically similar to humans, it is difficult to extrapolate the results of our study to humans with confidence. The ethical problems associated with studying the pharmacology of shock in humans make the use of an animal model a necessity, particularly when a carefully controlled study is the goal. Obviously, to be sure that the animals were adequately anesthetized, it was essential to provide anesthetics in addition to the drug being studied.  These additional anesthetics probably influenced our findings (although both groups were exposed to the same influence).
It also should be noted that whenever possible, patients in hemorrhagic shock who require anesthesia are resuscitated with blood products and crystalloid to some extent before administration of anesthesia, and, thus, extrapolating our animal model results (without fluid or blood resuscitation) to human patients in an actual clinical situation must be considered carefully. Finally, it is conceivable that our study design violated the linearity assumption of our pharmacokinetic analysis. The disposition of fentanyl in the shock animals may have been a dynamic process. These problems and others make the investigation of shock a notoriously difficult enterprise in terms of study methodology and its practical application. 
Although relatively little is known about how hemorrhagic shock alters drug disposition, the findings of this study are, in general, similar to those reported for other drug classes during shock. For example, Benowitz et al.  noted significantly higher lidocaine concentrations during hemorrhagic shock in monkeys. They reported a 46% decrease in lidocaine clearance, a 33% decrease in central distribution volume, and a 19% decrease in steady-state distribution volume. In a similar study examining midazolam pharmacokinetics in dogs suffering from hemorrhagic shock, Adams et al.  reported a reduction in central clearance without significant differences in distribution parameters. It is important to underscore the fact that these various studies from the literature used different shock models, and, therefore, strict comparisons are difficult.
It has long been recognized that hemorrhagic shock alters the dose requirement of intravenous anesthetics. As early as 1963, Price,  using mathematical models, speculated that less thiopental is required to achieve a therapeutic concentration in the brain during hemorrhagic shock. More recently, Weiskopf et al.  showed that hemorrhagic shock reduced the dosage of thiopentone or ketamine needed to produce anesthesia in pigs. Although the investigators did not measure blood levels, they theorized that the decreased dosage requirement was at least partially attributable to pharmacokinetic mechanisms. The current study confirms that pharmacokinetic mechanisms are indeed at least partly responsible for the long-observed decreased dosage requirement for opioids during hemorrhagic shock.
The physiologic mechanisms by which shock alters pharmacokinetics are theoretically straightforward. The reduction in central compartment clearance may be attributed to both decreased liver blood flow (i.e., less drug delivered to the liver for biotransformation) and/or decreased hepatocellular function (i.e., impaired biotransformation). However, the literature regarding the disposition of other drug classes during shock is not conclusive about which mechanisms predominate.
For example, some investigators have demonstrated that liver blood flow does not necessarily decrease in exact parallel to cardiac output during hemorrhagic shock, despite profound reductions in cardiac output. Using a radiomicrosphere technique in pigs, Bellamy et al.  could not demonstrate a change in liver blood flow during hemorrhagic shock in a majority of pigs studied. Interestingly, the pigs that showed a significant decrease in hepatic blood flow did not survive the experiment. Dipiro et al.  published similar findings in a partially resuscitated hemorrhagic shock pig model. They showed that although hepatic blood flow did not change, hepatic oxidative metabolic function decreased substantially.
Other investigators have confirmed that hemorrhagic shock does indeed alter hepatic function. Malliwah  reported gross evidence of hepatocyte injury during hemorrhagic shock in a dog model that closely paralleled the decline in cardiac output. Wang et al.  reported similar changes in hepatic function during hemorrhagic shock in rats and noted that the hepatic injury persisted despite fluid resuscitation. Because we did not measure hepatic function or blood flow, we cannot comment on which of these mechanisms may be responsible for the pharmacokinetic changes we observed.
The shock-induced changes in cardiovascular function obviously have an important impact on pharmacokinetics. Cardiac output, MAP, and other hemodynamic parameters are often included as part of physiologic pharmacokinetic models.  Because cardiovascular function plays such a critical role in drug distribution and elimination, it has been the focus of a great deal of research effort.
For example, it has been shown in a sheep model that low cardiac output states result in higher peak concentrations after bolus injection because of slower drug-blood mixing.  The importance of cardiac output as it relates to the initial mixing of a drug and the achievement of its peak concentration is particularly relevant to anesthetics because they exert their effect in the first few minutes after injection.  Henthorn et al.  developed a recirculatory pharmacokinetic model that adequately characterizes the impact of circulatory changes on initial drug distribution. Using alfentanil in human volunteers, they have subsequently shown that inter-compartmental clearance (i.e., tissue distribution) is largely determined by cardiac output.  Bjorkman et al.  further demonstrated that the influence of cardiac output on drug distribution is readily apparent only when cardiac output changes significantly.
These previous findings regarding the linkage of hemodynamics with pharmacokinetics are generally consistent with the results of the current study. Scaling central clearance by CI or MAP improved our population pharmacokinetic model significantly. Although both distribution and clearance parameters showed a reasonable relationship with CI and MAP, scaling central clearance to MAP improved the model the most in terms of the NONMEM objective function value. As for the utility of the model, MAP is perhaps more practically useful than CI because it can be measured repeatedly in a noninvasive way.
However, from a mechanistic perspective, it is probable that CI is the parameter that is actually influencing the pharmacokinetics, whereas MAP is simply a good correlate of the changes in cardiac output. One can imagine various clinical settings in which changes in MAP would not necessarily reflect changes in cardiac output (and thus the usefulness of the model that is scaled to MAP would be suspect).
Interestingly, intravenous anesthesia (without hemorrhagic shock) produces some of the pharmacokinetic changes classically associated with shock, presumably because anesthetics alter cardiac physiology in a way that somewhat resembles mild shock. For example, Thomson et al.  demonstrated that thiopentone and etomidate decrease cardiac output and hepatic blood flow at typical therapeutic concentrations. Mather et al.  showed that propofol and thiopentone decrease meperidine clearance presumably as a result of decreased hepatic blood flow.
A change in fentanyl plasma protein binding (or changes in binding or partitioning to other blood constituents such as erythrocytes) is another mechanism by which shock could theoretically alter fentanyl pharmacokinetic parameters. Only unbound drug is available for biotransformation by the metabolic organs and distribution to body tissues. Changes in protein binding may make for a greater or lesser amount of free drug available for distribution. Benowitz et al.  suggested that the reduction in lidocaine steady-state distribution volume observed during shock may be a result of changes in plasma binding of lidocaine or tissue affinity for lidocaine. Because we did not measure fentanyl binding behavior, we cannot speculate about how protein binding changes might (or might not) influence fentanyl pharmacokinetics during shock.
It is difficult to make specific clinical recommendations based on the findings of this study. If the conclusions of this study are applicable to humans, one would recommend that smaller bolus doses and infusion rates would be necessary to achieve a given fentanyl concentration in the face of hemorrhagic shock. This decreased opioid dosage requirement is well recognized by clinicians, although it has not been investigated in detail. This study suggests that these changes are at least partially a result of pharmacokinetic factors.
The clinical relevance of this line of investigation is a function of the prevalence of trauma in modern society. Blunt trauma as a result of motor vehicle accidents and penetrating trauma secondary to violent crime are common in western culture. [31–33] Anesthesiologists are frequently called upon to anesthetize trauma victims who have ongoing hemorrhagic shock at various stages of resuscitation. Anesthesiologists also sometimes encounter unexpected high-volume blood loss during elective surgery. The implications of shock for anesthetic pharmacokinetics are even more relevant to military physicians, who must strategize about how to manage anesthetics in soldiers with battlefield injuries.
Today's anesthetic pharmacology database is unsatisfactory in guiding our anesthetic management of patients suffering from hemorrhagic shock. In substantiating that at least some of the reduced dosage requirement of opioids during hemorrhagic shock is caused by pharmacokinetic factors, this report has merely provided a small piece of the missing information.
Additional investigation is necessary to explore further how shock impacts anesthetic pharmacology. The effect of shock on the pharmacokinetics of other drug classes in the anesthesia formulary needs to be studied. Whether pharmacodynamic behavior is influenced by shock must also be examined. The temporal profile of the shock-related changes in pharmacology must be defined. For example, do the shock related pharmacokinetic alterations persist after resuscitation? In addition, are drugs that do not require metabolism by the liver or kidney more resistant to shock-induced pharmacologic changes (e.g., remifentanil)? Finally, do all types of shock influence pharmacokinetics in a consistent fashion? Ultimately, this information should lead to more rational guidelines regarding both the selection and administration of anesthetics to patients in shock.