The methodology used during the development of American Society of Anesthesiologists evidence-based practice parameters, from conceptualization through final adoption of the documents, is described. Features of the methodology include the literature search, review and analysis, survey development and application, and consolidation of the full body of evidence used for preparing clinical practice recommendations. Anticipated risks of bias, validation of the process, and the importance of the documents for clinical use are discussed.

The American Society of Anesthesiologists (ASA; Schaumburg, Illinois) annually prepares evidence-based practice parameters in the form of clinical practice guidelines and advisories. These documents are widely consulted by anesthesiologists and other healthcare providers seeking guidance on a diverse range of clinical topics. As early as 1997, the ASA was recognized as a world leader in the adoption of standards of care and guidelines for practice; in 2000, the Institute of Medicine noted: “The gains in anesthesia are very impressive and were accomplished through a variety of mechanisms including improved monitoring techniques, the development and widespread adoption of practice guidelines, and other systematic approaches to reducing errors.”1 

ASA’s evidence-based practice parameters posted on the ASA and Anesthesiology websites are queried by millions of practitioners annually,2  and the home pages of both websites contain a dedicated heading that directs readers to these documents. The methodology used for developing ASA practice parameters incorporates a traditional evidence-based approach supplemented with several unique features that enhance the accuracy, quality, and acceptability of these documents to practitioners in anesthesia and many other medical specialties. (See box 1 for elements of a high-quality practice parameter.)

ASA practice parameters are indispensable resources for many providers of health care. Although the ASA produces a variety of documents, including practice standards, practice alerts, consensus statements, and policy statements, practice parameters differ in that they are more thoroughly evidence-based, are dedicated solely to clinical issues and patient safety, and are broader in scope rather than being limited to a few topics or issues.

Box 1. What to Look for in Research Using This Method
Elements of a High-quality Practice Parameter

Literature search:
  • Use of an evidence model to guide the literature search.

  • Comprehensive searches that include multiple databases (e.g., PubMed, Embase) supplemented by searches from article references and citations supplied by task force members and participating organizations.

  • Keeping a record of the search process using a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.

Literature review:
  • Only including literature containing original data.

  • Use of peer-reviewed journals, except for selected patient safety issues (e.g., operating room fires).

  • Accepting and categorizing studies based on research design (e.g., randomized controlled trials [RCTs], nonrandomized comparative studies, observational literature).

  • Dividing study categories into quality levels based on study replication and statistical analyses.

  • Using a Data Extraction Workbook to guide the organization and presentation of literature and to provide a compact and clear overview of the accumulated literature.

Data analysis:
  • Conducting meta-analysis of RCTs when sufficient numbers of studies are available.

Surveys:
  • Surveying experts, members of the organization and members of participating organizations.

  • Conducting survey analyses and reporting of findings.

Consolidation of evidence:
  • Applying a “best available evidence” approach for literature.

  • Reviewing and considering multi-source evidentiary information for developing recommendations.

  • Reporting evidence from all available sources including RCTs, observational literature, case reports, surveys, open forum testimony, web postings and personal communications.

Transparency:
  • Clear recommendations using a declarative (action-oriented) approach.

  • Separate sections in document to report literature findings, survey findings, and recommendations.

Additional elements:1
  • Disclosure of funding sources.

  • Disclosure and management of financial conflicts of interest.

  • Use of a multidisciplinary group.

  • Methodologist involvement in the process.

  • Inclusion of patient and public perspectives.

  • Use of a systematic review of evidence.

  • Grading the quality or strength of evidence.

  • Reporting of the benefits and harms of each recommendation.

  • Evidence summary supporting recommendations.

  • Specific and unambiguous articulation of recommendations.

  • External review.

  • Periodic updating.

1Selected from the Institute of Medicine’s “Standards for Developing Trustworthy Clinical Practice Guidelines”18 

This article discusses the methodology and processes used to produce ASA evidence-based practice parameters, offered in the form of practice guidelines and practice advisories.3  The evidence-based approach incorporates predefined criteria with a systematic approach to the collection, assessment, and analysis of evidence from the published scientific literature. Information collected from other sources and how it is applied toward practice parameter recommendations is described. External validation of these documents and their usefulness in clinical practice is also discussed.

Patient safety documents and guidelines are plentiful; before the 1990s, they typically consisted of consensus-based papers prepared by a select group of knowledgeable practitioners who produced statements and recommendations that were derived from their own experience and background. Literature was obtained and presented in a narrative review format selected by the group to support their views on best practice. In 1990, the ASA was advised by the Agency for Health Care Policy and Research of the National Institutes of Health of new legislation to develop, review, and update clinical guidelines for the purpose of improving and standardizing medical practice.4,5  Soon thereafter, the ASA established the Ad Hoc Committee on Practice Parameters, and in 1991, this committee began preparation of the ASA’s first two evidence-based practice guidelines: the Practice Guidelines for Management of the Difficult Airway6  and the Practice Guidelines for Pulmonary Artery Catheterization.7  The guidelines were well received, and the difficult airway guidelines were subsequently updated in 2002 and again in 2013,8  with a third update scheduled for completion in 2020.

The evidence-based approach is designed to maximize the collection and evaluation of evidentiary information by accessing scientific, observational, and consensus-based sources.9,10  The goal is to ensure the completeness, accuracy, and transparency of evidentiary findings, both in the scientific literature and in opinion-based approaches, thus systematizing the process. Because some areas of practice are not amenable to scientific research, or because the scientific literature is sparse or unavailable, structured opinion surveys and other types of information are relied upon as supplements to the literature to provide guidance on optimal practice. Other forms of opinion, such as open forum presentations at professional medical meetings and input from the general public, medical professionals, and other medical professional organizations, combined with the available literature, offer a broader and more thorough base of information and solidify confidence in the integrity of the clinical recommendations offered.

Conceptualization

Any endeavor intended to systematize the collection of information must begin with conceptualization of the intended product. The ASA Committee on Standards and Practice Parameters, under the direction of the Section on Professional Affairs, first identifies and discusses issues of concern raised by committee members at the ASA annual meeting and then prioritizes and assigns a task force to create and refine a practice parameter that will address the intended goals of the committee. The composition of the task force typically includes academic anesthesiologists, private practitioners, generalists, relevant subspecialists, pediatric and adult anesthesiologists, and often specialists outside of anesthesiology. At least one member of the task force is a representative of the ASA Committee on Standards and Practice Parameters and provides direction to the team on the rigorous process to be followed during the development of the guideline or advisory. In addition, at least one nonclinical Ph.D. methodologist with training in research design and statistics serves on each task force to assure that the process meets the exacting requirements for scientific findings, to direct the survey process, and to assist in the preparation of the documents.

Conceptualization of a practice parameter’s structure and content begins by defining goals and objectives concerned with the intended patient care topics and issues. This is initially accomplished with a “conceptualization survey,” whereby the task force members independently respond to questions addressing clinical goals, patients of concern, interventions that potentially impact patient care, and the benefits the practice parameter is expected to provide. This survey is deliberately generic and open-ended so that the broadest range of issues associated with each topic may be considered (table 1).

Table 1. American Society of Anesthesiologists Practice Parameters Conceptualization Survey

Information collected from the conceptualization survey is then summarized, and a draft “evidence model” is created. Next, the task force meets at a central location to discuss and refine the evidence model. The model is specifically designed to answer the healthcare provider’s question: “If I provide a specified intervention, will that intervention improve patient care?” Accordingly, the evidence model consists of a framework for listing proposed clinical interventions and expected outcomes, organized in a time-sequential manner to approximate when the need for the intervention would arise. The model contains inclusion and exclusion criteria for types of patients, procedures, providers, and settings, as well as lists of interventions and outcomes, which when paired are referred to as evidence linkages. These evidence linkages form the basis upon which all evidence is collected and guide the eventual structuring of the practice parameter. Table 2 illustrates a completed evidence model that, in addition to the aforementioned criteria, contains inclusion/exclusion criteria for literature and survey data.

Table 2. Example of an Evidence Model
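
Conceptually, the evidence model reduces to a set of inclusion and exclusion criteria plus a list of intervention-outcome pairs (evidence linkages). The following is a minimal sketch of how such a model might be represented in code; the field names and example content are illustrative assumptions, not the ASA's actual template.

```python
from dataclasses import dataclass, field

# Hypothetical representation of an evidence model; names and example values
# are illustrative only.

@dataclass
class EvidenceLinkage:
    intervention: str   # proposed clinical intervention
    outcome: str        # expected clinical outcome paired with it

@dataclass
class EvidenceModel:
    topic: str
    include_patients: list[str] = field(default_factory=list)
    exclude_patients: list[str] = field(default_factory=list)
    include_procedures: list[str] = field(default_factory=list)
    include_settings: list[str] = field(default_factory=list)
    linkages: list[EvidenceLinkage] = field(default_factory=list)

model = EvidenceModel(
    topic="Moderate procedural sedation",
    include_patients=["adults", "children"],
    exclude_patients=["patients receiving general anesthesia"],
    include_procedures=["diagnostic and therapeutic procedures"],
    include_settings=["hospital", "ambulatory facility", "office-based practice"],
    linkages=[
        EvidenceLinkage("supplemental oxygen", "hypoxemia"),
        EvidenceLinkage("capnographic monitoring", "respiratory depression"),
    ],
)
```

Each linkage then serves as the unit against which literature, survey responses, and other evidence are collected.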

Literature Search and Review

The evidence-based approach for collecting and evaluating scientific literature requires several conditions to be met. The first among these is to be as complete and systematic as possible, meaning that all types of study designs are initially acceptable for review and organized into a suitable schema. All relevant healthcare databases are searched, beginning with the most common, such as PubMed, Embase, Web of Science, and the Cochrane Central Register of Controlled Trials, as well as more targeted national and international sources. These citations are combined with citations obtained from direct internet searches; manual searches of references located in reviewed articles; and references provided by committee members, task force members, and other individuals or organizations.

The search focuses on studies reporting original findings from peer-reviewed journals. (Exceptions may be made for important safety issues, e.g., operating room fire reports.) Editorials, letters, and other articles without useful data are excluded, as are unpublished data. Upon completion of the search, the search strategy is recorded, and a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram is prepared for inclusion as supplemental information for the published practice parameter. The PRISMA flow diagram graphically illustrates the search and review process from the initial search through the final review and acceptance of literature for inclusion in the practice parameter (fig. 1).

Fig. 1. A PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram. The excerpt is from the Practice Guidelines for Moderate Procedural Sedation.29 
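
Because the PRISMA diagram is essentially a set of counts recorded at each stage of searching and screening, the bookkeeping behind it can be sketched in a few lines. The stage names and numbers below are invented for illustration and do not come from any ASA search.

```python
# Hypothetical PRISMA flow-diagram tally; all figures are illustrative.
prisma = {
    "records_identified_databases": 4321,
    "records_identified_other_sources": 57,
    "duplicates_removed": 860,
    "records_excluded_on_screening": 2900,
    "full_text_excluded": 310,
}

# Records remaining after duplicate removal are screened by title/abstract.
prisma["records_screened"] = (
    prisma["records_identified_databases"]
    + prisma["records_identified_other_sources"]
    - prisma["duplicates_removed"]
)
# Records surviving screening go to full-text assessment.
prisma["full_text_assessed"] = (
    prisma["records_screened"] - prisma["records_excluded_on_screening"]
)
# Studies accepted into the practice parameter.
prisma["studies_included"] = (
    prisma["full_text_assessed"] - prisma["full_text_excluded"]
)

for stage, n in prisma.items():
    print(f"{stage}: {n}")
```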

Upon completion of the initial search, the review process is initiated with particular attention given to the recognition and identification of systematic biases that may be contained within the study designs, statistical analyses, and other information reported in the studies. When exceptional bias is present, the study is removed from consideration as evidence. When potential bias is suspected but not confirmed, the study is flagged for further review, and a decision is made among the methodologists and clinician task force members either to include it with a notation or warning or to reject it as unacceptable evidence. A more detailed discussion of potential biases contained in the literature and their management during the review process is presented later in this article.

Systematizing

Systematizing a literature search and review refers to organizing the information reported in the accumulated studies, guided by the evidence model, in a manner that allows for clear interpretation and summarization of the accumulated work. The ASA’s organizational system uses a spreadsheet workbook approach, with columns dedicated to information pertaining to study design, number of cases, procedures, specifics about the interventions or treatments, outcomes, and comments pertaining to the measures, comparisons, or study design. Labeled a “data extraction workbook,” this workbook typically contains a minimum of three spreadsheets: a database tab listing the accepted articles with extracted data, a second tab containing articles reviewed and rejected (with comments describing and coding reasons for rejection), and a third tab containing the full list of articles reviewed. An excerpt from a data extraction workbook is shown in table 3.

Table 3. Excerpt from a Data Extraction Workbook: Anesthetic Care for Labor and Vaginal Delivery
The data extraction workbook is a transparent and compact means of summarizing the body of literature in an organized fashion, as well as reporting detailed information about each of the individual studies. Most important, it provides the basic organizational structure for every practice parameter and guides the narrative literature review in the text of the document. Other software is also used for data collection and analysis, with the results then entered into the workbook for summary purposes. Evidence tables derived from supportive software or from the workbook are often added to highlight subsets of findings or to prepare suitable data for meta-analysis.
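
A minimal sketch of how a three-tab workbook of this kind could be assembled with pandas follows; the sheet names, column headers, and rows are illustrative assumptions, not the ASA's actual data extraction template.

```python
import pandas as pd

# Illustrative columns only; a real workbook carries many more fields.
accepted = pd.DataFrame(
    [{"citation": "Author 2016", "design": "RCT", "n": 120,
      "intervention": "epidural analgesia", "outcome": "pain score",
      "comments": "double-blind"}]
)
rejected = pd.DataFrame(
    [{"citation": "Author 2014", "reason_code": "E2",
      "reason": "no original data (review article)"}]
)
# Third tab: the full list of articles reviewed (accepted plus rejected).
all_reviewed = pd.concat(
    [accepted[["citation"]], rejected[["citation"]]], ignore_index=True
)

# Write the three tabs of a hypothetical data extraction workbook.
with pd.ExcelWriter("data_extraction_workbook.xlsx") as writer:
    accepted.to_excel(writer, sheet_name="accepted", index=False)
    rejected.to_excel(writer, sheet_name="rejected", index=False)
    all_reviewed.to_excel(writer, sheet_name="all_reviewed", index=False)
```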

Literature Summarization: The ASA Literature Classification System

The ASA method of literature classification is simple and straightforward, based first on research design, then on study replication, and finally on the statistical information reported. This system was designed for the purpose of providing an unambiguous structure for reporting the accumulated findings.

Because research design is the primary focus for evaluating scientific studies, the ASA system makes a clear distinction between causal and observational evidentiary findings by dividing the literature into two major design categories: (1) randomized controlled trials and (2) observational studies or case reports. This division is also of importance in the management of bias that may unintentionally influence research findings, with randomized controlled trials being the least susceptible to bias. Category designations are used rather than levels of quality designations because the separation of evidentiary findings is determined by research design. This is an important distinction because “levels” of quality can vary for randomized controlled trials, as well as for nonrandomized comparisons and observational studies.

Randomized controlled trials comprise the first-tier designation, described as category A studies. The accumulated category A studies are further divided into three “levels” based on the number of replicated randomized controlled trials and then reported in this manner in the practice parameter. For category A, level 1, the accumulated studies include a sufficient number of randomized controlled trials for the methodologists to conduct meta-analysis. (For this category, the “Rule of Five” is applied to randomized controlled trials to determine the minimum number of studies to be eligible for meta-analysis. The rule is sometimes used in statistics to represent a minimum sample size of 10 observations per variable to be valid.) For level 2, the accumulated studies include multiple randomized controlled trials, but the total number is not sufficient to conduct a viable meta-analysis for the purpose of the practice parameter. For level 3, only a single acceptable randomized controlled trial was located and reviewed, and findings for the study are reported directly in the document.

Observational studies with a category B designation consist of nonrandomized comparisons, studies without comparison groups, and case series or case reports. Studies with this designation are the next available source of evidence when randomized controlled trials are unavailable or not feasible to conduct. Studies with this designation often provide important information that a randomized controlled trial does not typically examine, such as incidence data or findings from interventions or treatments that cannot be ethically examined using the randomized controlled trial. In addition, when the accumulated randomized controlled trials do not provide information on certain outcomes of interest, these second-tier studies may become extremely valuable and offer the only literature-based information available for a particular intervention.

Four levels are contained within category B, also based on research design and paired with associated statistical findings, and then reported in this manner in the practice parameter. For category B, level 1, the literature contains nonrandomized comparisons (e.g., quasiexperimental, cohort [prospective or retrospective], or case-control research designs) with comparative statistics between clinical interventions for a specified clinical outcome. Level 2 literature contains non–group-comparative observational studies with associative statistics (e.g., correlation, sensitivity, and specificity). Level 3 literature contains noncomparative observational studies with descriptive statistics (e.g., frequencies, percentages), and level 4 literature contains case reports.1
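
The category-and-level scheme described above amounts to a simple decision rule. The sketch below encodes it, assuming (per the "Rule of Five" mentioned earlier) that at least five randomized controlled trials are needed before meta-analysis is attempted; the design labels are illustrative, not official ASA terminology.

```python
def classify_evidence(design: str, n_rcts: int = 0) -> tuple[str, int]:
    """Return a (category, level) pair following the scheme sketched in the text.

    design: 'rct', 'nonrandomized_comparison', 'observational_associative',
            'observational_descriptive', or 'case_report' (illustrative labels).
    n_rcts: number of acceptable randomized controlled trials for the linkage.
    """
    if design == "rct":
        if n_rcts >= 5:      # enough trials to attempt meta-analysis
            return ("A", 1)
        if n_rcts > 1:       # multiple RCTs, but too few for meta-analysis
            return ("A", 2)
        return ("A", 3)      # a single acceptable RCT
    if design == "nonrandomized_comparison":
        return ("B", 1)      # comparative statistics between interventions
    if design == "observational_associative":
        return ("B", 2)      # associative statistics (correlation, sensitivity)
    if design == "observational_descriptive":
        return ("B", 3)      # descriptive statistics only
    if design == "case_report":
        return ("B", 4)
    raise ValueError(f"unknown design: {design}")

print(classify_evidence("rct", n_rcts=7))             # ('A', 1)
print(classify_evidence("nonrandomized_comparison"))  # ('B', 1)
```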

In the text of all practice parameters, a “best available evidence” approach is used to report literature findings. For example, meta-analytic findings of randomized controlled trials are listed first in the narrative summary, followed by randomized controlled trials without sufficient numbers for meta-analysis, followed by observational literature. The best available evidence approach is also extended to within category designations. For example, when sufficient numbers of double-blind randomized controlled trials are available, a separate meta-analysis is conducted for only those articles and reported first as the best available evidence. When sufficient numbers of double-blind studies are not available, blinded randomized controlled trials are accepted, then nonblinded trials, followed by randomized controlled trials without an accompanying meta-analysis. Observational studies are reported in order by type of statistical findings: first comparative, then associational, then descriptive, and finally, case series and case reports with no statistics. After reporting the category and level of findings, each report includes a directional designation of benefit, harm, or equivocality associated with the intervention.
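
Because lower category and level numbers correspond to stronger designs, the "best available evidence" ordering can be implemented as a sort on the classification assigned to each finding. A brief sketch with invented entries:

```python
# Order findings so the strongest available design is reported first.
findings = [
    {"citation": "Cohort 2015", "category": "B", "level": 1},
    {"citation": "RCT meta-analysis", "category": "A", "level": 1},
    {"citation": "Case series 2012", "category": "B", "level": 3},
    {"citation": "Single RCT 2018", "category": "A", "level": 3},
]

# "A" sorts before "B", and lower levels sort before higher levels.
best_first = sorted(findings, key=lambda f: (f["category"], f["level"]))
for f in best_first:
    print(f["category"], f["level"], f["citation"])
```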

A designation of insufficient literature is reported when scientific studies are either unavailable (i.e., no pertinent studies found) or inadequate. Studies are considered inadequate when a clear interpretation of the findings cannot be obtained because of methodological concerns (e.g., confounding of study design or implementation) or the study does not meet the criteria for content as defined in the evidence model.

Management of Systematic Bias

The potential for bias is ever-present throughout the course of practice parameter development, from conceptualization through drafting of the recommendations. An important focus of attention during this process is to recognize and identify studies with systematic biases that threaten the integrity of the literature findings. The majority of biases found during this part of the process arise either during the literature search, during review of the individual research articles, or in the methods used when combining literature for analysis and evaluation.

Bias during the literature search may result when articles have been obtained from a selective search, where they are specifically picked by the practice parameter panel to support a predetermined viewpoint without attending to studies with alternative findings. Such article selection bias can also arise when editorials, letters, or white papers are used as sources of evidence, because these types of articles may be written with the purpose of promoting a point of view. This type of bias can also occur when the search is not comprehensive, risking the selection of a nonrepresentative sample of studies from the literature. By predefining inclusion and exclusion criteria, conducting independent searches, and using multiple database searches, as well as citations contributed by the task force, participating organizations, and inclusion of references contained in the articles reviewed, much of this bias can be avoided.10,11 

Bias in individual research articles can sometimes be difficult to identify. The reviewers themselves may have unintentional biases toward certain authors or journals or in the interpretation of certain types of findings. Some reviewer bias can be counteracted by having more than one individual review the article, using independent reviewers and a mix of methodologists/statisticians and clinicians to conduct the reviews. The ASA conducts a formal reliability assessment to ascertain whether such bias has been introduced into the review process.10  However, even with multiple reviewers, bias can be a risk, particularly when subjective rating systems are applied to judge the quality of an article as opposed to the use of designated research design categorizations. Some literature quality rating systems have been shown to have poor internal consistency and low reliability ratings among reviewers.12 

Biases associated with the design and analysis of research articles are numerous. The Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, lists several, including selection bias or confounding, performance bias, detection bias or confounding, attrition bias, and other potential confounders.13  They also suggest that when observational literature is included in systematic reviews, an expanded critical appraisal of confounding is needed to properly evaluate the benefits or harms of interventions.

Literature findings can also be at risk for interpretive or reporting bias, referring to the study authors either emphasizing or downplaying a particular finding. For example, a frequency or percentage finding can easily be presented in a manner that either supports or refutes the efficacy of an intervention. Observational studies presenting correlational or regression data can be presented or interpreted in such a way as to imply causation, even when the author specifically denies such a relationship.

Some literature may contain analytical bias, referring to data analysis findings that may incorrectly support a particular intervention-outcome pair. For example, hypoxemia may be defined by the investigator as oxygen saturation of less than 90%, and severe hypoxemia as oxygen saturation of less than 85%. The reported study results may not make clear whether the less than 85% data are reported separately or are included with the less than 90% data. Because less than 85% is also less than 90%, interpretation of findings may be difficult without a clear distinction. Multiple measurements over time can also lead to confounding when dropouts from mortality or other factors are not appropriately considered in the analysis.14 
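
One way to avoid the ambiguity in the hypoxemia example is to tabulate saturation values in mutually exclusive ranges so that no observation is counted in more than one category. A small sketch using the cutoffs from the example (the function and variable names are hypothetical):

```python
def saturation_category(spo2: float) -> str:
    """Assign each observation to exactly one, mutually exclusive category."""
    if spo2 < 85:
        return "severe hypoxemia (<85%)"
    if spo2 < 90:
        return "hypoxemia (85-89%)"
    return "no hypoxemia (>=90%)"

readings = [97, 93, 88, 84, 91]
counts: dict[str, int] = {}
for r in readings:
    cat = saturation_category(r)
    counts[cat] = counts.get(cat, 0) + 1

# Each reading falls into exactly one bin, so <85% is never also counted as <90%.
print(counts)
```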

Biases in the methods used when combining literature for analysis include the use of aggregated findings from external sources; the use of study designs that do not sufficiently replicate patient characteristics, interventions, or outcomes; bias in the selection of outcomes to report; and bias in the selection of studies to combine for meta-analysis.

Reliance on external sources for aggregating and reporting literature can introduce bias because of the inclusion of studies that do not meet the inclusion criteria of the practice parameter’s evidence model. The combined study findings would then contain data that are not representative of the evidence model.

When combining studies for meta-analysis, bias can occur when disparate (as opposed to common) comparison groups, treatments, or outcomes are used. In this case, the evidence model must be sufficiently specific to avoid nonrepresentative findings. Exclusive use of randomized controlled trials in the analysis will minimize potential confounding and other biases that may be inherent in observational literature, such as the overestimation of treatment effects or potential intragroup noncomparability. Although selection of higher quality observational studies may reduce some bias,15  attributing efficacy to an intervention using these studies is an unacceptable risk for a task force charged with providing patient safety recommendations. Some bias is also mitigated by ASA methodologists using more stringent design criteria for combining studies and for statistical significance, recognizing the impact of large N values when conducting meta-analyses.
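
For illustration only, the sketch below pools log odds ratios from randomized controlled trials by inverse-variance weighting and applies a deliberately conservative critical value, standing in for the "more stringent criteria for statistical significance" mentioned above; the specific threshold and data are assumptions, not the ASA's published rules.

```python
import math

def pool_fixed_effect(log_ors, ses, z_crit=2.576):
    """Inverse-variance fixed-effect pooling of per-study log odds ratios.

    log_ors: log odds ratios from randomized controlled trials only
             (observational studies deliberately excluded).
    ses:     their standard errors.
    z_crit:  critical value; 2.576 (two-sided alpha = 0.01) is an illustrative,
             conservative choice rather than an ASA-specified threshold.
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * lor for w, lor in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    z = pooled / pooled_se
    return {"pooled_or": math.exp(pooled), "z": z, "significant": abs(z) > z_crit}

# Five hypothetical RCTs favoring the intervention (log OR < 0).
print(pool_fixed_effect([-0.45, -0.30, -0.52, -0.41, -0.38],
                        [0.20, 0.25, 0.22, 0.18, 0.30]))
```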

The selective reporting of outcomes that are thought by the reviewer to be the most important may invite bias by neglecting other outcomes that have a bearing on the benefits or harms associated with an intervention. Full reporting of all outcomes for each intervention using a “best available evidence” approach will help mitigate this source of bias. This approach extends to randomized controlled trial findings as well, whereby first consideration is given to randomized controlled trials that use proper blinding and patient or treatment allocation to avoid overestimation of treatment effects.16  A summary of potential sources of bias and how the ASA methodology acts to avoid or mitigate the impact of bias can be found in table 4.

Table 4. Potential Sources of Bias during the Evaluation of Scientific Literature

Consensus-based Evidence and Summarization

When developing a clinical practice parameter, the task force must consider the necessity of adapting scientifically guided recommendations to the specific circumstances of facilities, patients, practitioners, and other care staff if it is to ensure the implementation and adoption of its document. Therefore, to become a major influence on the provision of quality health care, practice parameters must balance scientific rigor with pragmatism.17  To accomplish this, it is essential that the task force obtain input from identified experts, as well as from a broad swath of the community of practitioners who directly provide the type of patient care addressed by the practice parameter.

To obtain such professional input, opinion surveys are designed that will collect information on the proposed recommendations and on the feasibility and practicality of implementing the practice parameter. These surveys provide a direct link to best practice opinions from the community of experts, as well as from those who are making daily clinical decisions on behalf of their patients. The survey findings also provide a mechanism to assess gaps in knowledge about practice within a specialty, as well as to highlight differences in practice among members of different medical specialties.

To obtain verification of the proposed recommendations, the ASA uses a survey that simply lists the draft recommendations derived from literature findings and asks respondents whether they agree or disagree with each as stated in the practice parameter. Responses are recorded using a five-point scale ranging from “strongly agree” to “strongly disagree,” with median scores representing the summary responses. This survey is distributed first to individuals designated as experts on the topic (typically 50 to 250 individuals per practice parameter), followed by surveys sent to a random selection of the society’s members. When a practice parameter is prepared in collaboration with other professional medical organizations, these organizations may choose to distribute one of these surveys to a selection of their members. Identical surveys are distributed to all participants, and survey findings are then summarized separately for each group of respondents. Findings are presented both in the narrative text of the document and in an appendix titled “Methods and Analyses.” An example of reported survey findings is shown in table 5.

Table 5. Excerpt from a Consultant Survey Table
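
Summary responses are simply medians on the five-point agreement scale, computed separately for each respondent group. A short sketch with invented recommendation labels and scores:

```python
from statistics import median

# 5 = strongly agree ... 1 = strongly disagree (illustrative data only).
responses = {
    "consultants": {
        "Recommendation 1": [5, 5, 4, 5, 3],
        "Recommendation 2": [4, 4, 5, 2, 4],
    },
    "asa_members": {
        "Recommendation 1": [5, 4, 4, 4, 5],
        "Recommendation 2": [3, 4, 4, 5, 4],
    },
}

# Report the median agreement score per recommendation, per respondent group.
for group, recs in responses.items():
    for rec, scores in recs.items():
        print(f"{group}: {rec}: median = {median(scores)}")
```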

Once results from the “recommendation” surveys are evaluated and incorporated into the document, the practice parameter is revised if needed. When the near-final draft is complete, it is made available to the designated experts accompanied by another type of survey called a “feasibility” survey. This survey is designed to obtain opinions about how implementation of the practice parameter is expected to affect practice, with questions asking how the respondent’s practice might change in terms of time, equipment, and cost. An example of a feasibility survey is shown in table 6.

Table 6. Example of a Consultant Feasibility Survey

Opinions from less formal sources are also collected and considered by the task force. Multiple open forums are held at major national or international professional medical meetings, and internet-based comments, letters, and editorials are all collected and discussed during the formulation of the practice parameter. This opinion-based evidence (e.g., survey data, open forum testimony, internet-based comments, letters, and editorials) is intended to address the appropriateness and inclusiveness of proposed recommendations relevant to each topic and is considered in the formulation of recommendations. When warranted, the task force may add educational information or cautionary notes based on the accumulated information.

When evidence from the various sources is accumulated, the strengths and weaknesses of each source are evaluated to identify patterns that may emerge. Table 7 shows an example of a simple checklist for summarizing the accumulated literature for each proposed recommendation. When evidentiary patterns are consistent, the task force will have strong supportive evidence, but when the patterns are mixed, the task force will need to be more circumspect in their support for an intervention. By examining patterns from all accumulated evidence, the task force can proceed with finalizing their recommendation and have confidence in their decisions. All sources of evidence for each intervention can also be easily summarized into one color-coded illustration to assist the task force in determining the content and strength of their recommendation.9 

Table 7. Evidence Linkage Checklist: Literature Findings for Intervention (List Number of Individual Studies in Each Category), with blanks provided to list the intervention and outcome of interest
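
The checklist in table 7 is, in effect, a tally of studies by category and level for a single intervention-outcome linkage, which makes consistent or mixed evidentiary patterns easy to see. A sketch of that tally with invented entries:

```python
from collections import Counter

# Each entry: (category, level, direction of finding) for one evidence linkage
# (illustrative values only).
studies = [
    ("A", 2, "benefit"), ("A", 3, "benefit"),
    ("B", 1, "benefit"), ("B", 1, "equivocal"), ("B", 3, "benefit"),
]

by_class = Counter((cat, lvl) for cat, lvl, _ in studies)
by_direction = Counter(direction for _, _, direction in studies)

print("studies per category/level:", dict(by_class))
print("direction of findings:", dict(by_direction))
```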

For each evidence linkage, scientific and survey findings are reported separately and precede the recommendation. Recommendations are clear and concise, and in recent years a declarative approach has been adopted that categorizes recommendations into one of three areas: (1) perform the intervention, (2) you may perform the intervention depending on the case and clinical circumstances, or (3) avoid the intervention or activity. To avoid confusion as to what extent a recommendation is to be followed (and to avoid distraction from the actual recommendation), a designated score or grade is not included as part of the recommendation. Instead, recommendations are either clearly specified and/or the “you may perform” recommendation is applied, with explanatory footnotes where needed.

As with most evidence-based documents, clinical recommendations are based primarily on scientific findings, and when science, survey, and other opinions match, a strong recommendation can be made. In some cases, the recommendation must be made without strong evidentiary support from the literature. For example, a strong recommendation to “perform a medical records review and physical examination” typically does not have direct randomized controlled trial or even quasiexperimental evidence (although evidentiary findings of associations between patient physical condition and outcome may be referred to). The strong recommendation in this case refers to the task force recognition that the activity or intervention addressed by the recommendation has acceptance in the medical community as a vital part of practice. Occasionally scientific support is completely lacking. In other cases there may be strong scientific findings but the intervention is impractical or not feasible to implement (e.g., cumbersome monitoring devices or extremely expensive drugs or equipment). With the use of survey information, a task force can appropriately prepare and modify recommendations.

When considering how to report the evidence and recommendations, the task force needs to determine whether the entire body of evidence is strongly supported by scientific findings or whether the balance of evidence between science and opinion is more dependent upon opinion. In 1998, the ASA authorized the division of practice parameters into two types of document, the “practice guideline” and the “practice advisory,” based on the availability and quality of scientific evidence. Therefore, when there is a paucity of causal scientific evidence (i.e., randomized controlled trials) available, the task force will elect to prepare a practice advisory. The methodology and process used in the development of an advisory are identical to those used for a guideline, but sufficient randomized controlled trials for combined analysis (i.e., meta-analysis) are unavailable. The ASA Policy Statement on Practice Parameters identifies practice guidelines as containing recommendations that are “supported by meta-analyses of findings from multiple clinical trials,” whereas practice advisories are supported by a “descriptive summary of the available literature where there is not a sufficient number of adequately controlled studies to permit meta-analysis.”3 

When a practice parameter is complete and all task force members have consented to the final product, it goes through a final review and vetting process by ASA governing bodies. Each document is submitted to both the Board of Directors and the House of Delegates, and the Committee on Professional Affairs Reference Committee holds hearings at the annual meeting that include the practice parameters of the ASA Committee on Standards and Practice Parameters. At the hearings, attendees have the opportunity to testify in support of approval or disapproval by the House. In rare cases where the House does not approve a practice parameter, the Committee on Standards and Practice Parameters may be directed to submit a revised practice parameter the following year. If approved, the document becomes an official ASA document and is published in Anesthesiology.

Objective assessment of the methodology and processes used for practice parameters requires transparency and diligence. The ASA has devoted ongoing attention to improving the quality of its practice parameters and of the development processes, and since publication of the Institute of Medicine’s Standards for Developing Trustworthy Clinical Practice Guidelines in 2011,18  has often referred to these standards to evaluate its methodology and to obtain guidance for continued improvement.

For several years, the ASA has referred to the Agency for Healthcare Research and Quality National Guideline Clearinghouse as a source of external validation of the process. The National Guideline Clearinghouse reviewed and evaluated the extent to which practice parameters adhered to the Institute of Medicine standards and, if acceptable, posted them on the National Guideline Clearinghouse website. For many years, the National Guideline Clearinghouse website was regularly viewed by clinicians, scholars, and the general public. (Funding for the National Guideline Clearinghouse ended in June of 2018, and the clearinghouse was discontinued.) In recent years, the clearinghouse provided a National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards (NEATS) assessment for each practice parameter. This assessment rated how carefully a published guideline adhered to the Institute of Medicine standards on a five-point scale ranging from 1 = poor to 5 = excellent. The ratings for ASA practice parameters were consistently 4 or 5 in areas of methodology such as the use of a systematic review of evidence for the search strategy, study selection, and synthesis of evidence, as well as for the evidence foundations of the quality or strength of evidence, benefits and harms of recommendations, and the evidence summary supporting recommendations. These ratings were also consistently high for the “specific and unambiguous articulation of recommendations.” Other, nonmethodology areas of compliance where the ASA was rated lower included the disclosure and management of financial conflicts of interest, guideline development group composition, patient and public perspectives, external review, rating the strength of recommendations, and updating the documents.

On the basis of these ratings and input from other sources, the ASA has taken steps to improve compliance and transparency. For example, all practice parameters now clearly report the use of a “disclosure and management of financial conflicts of interest” form that must be completed and on file at the ASA central office before a task force member may participate. Each completed conflicts of interest form is reviewed by the task force chair, and those with real or perceived conflicts of interest are excluded from participation on the task force. Although the policy had been in place before the NEATS assessments, it was not fully reported in the published practice parameters. Reporting was also added to describe the ASA’s 5-yr update policy, which has been in place for many years but not previously reported in the documents.

In other areas (e.g., patient and public perspectives and external review ratings), more work is needed, particularly at the committee level. For example, to fulfill the “patient and public perspectives” requirements, the ASA would need to solicit public sector agencies or patients for direct input. The external review area may also need work, although the draft documents are posted on the public side of the ASA website for several months before finalizing, and open forums are held at national meetings (available to the public) to present the recommendations and to ask for feedback. As previously mentioned, the task forces typically receive input from lay people, medical professionals, and other professional medical organizations.

An indirect source of validation lies in the interest shown by other medical specialty organizations, which have regularly endorsed these documents or fully participated as co-sponsors since 1995.19–30  (See box 2 for a summary of co-sponsors.) This interest is increasing. In 2017, five organizations co-sponsored the “Practice Guidelines for Moderate Procedural Sedation and Analgesia 2018,”29  and in 2018, four organizations either co-sponsored, endorsed, or provided statements of support for the “Practice Advisory for Perioperative Visual Loss Associated with Spine Surgery 2019.”30  Future practice parameters may include participation by international organizations, further expanding the acceptance and validation of the ASA methodology and process.

Indirect validation of the value of ASA practice parameters for clinical practice is shown by the frequency of journal citations and web views. Two ASA practice guidelines have historically been among the most viewed articles in the journal Anesthesiology.6,8,19,20  From January of 2015 through December of 2017, ASA practice guidelines were consistently among the top 10 accessed articles on the journal’s website (according to a verbal personal communication from the managing editor of Anesthesiology, June 2018). In addition to their contributions to clinical practice, these documents are important communication and training tools, forming the basis for workshops, clinical forums, refresher courses, and other educational endeavors.

Final validation of the value of these documents to ASA members and other medical specialties is shown by patient-safety gains, as well as in defense against malpractice claims. The Anesthesia Closed Claims Project, funded by the Anesthesia Quality Institute, has seen many claims for ulnar neuropathy, postoperative visual loss, and claims associated with unexpected difficult intubations successfully defended using the ASA Practice Parameters.31 

The validation, defense, and strength of these documents lie in the intent of the society to provide guidance without requiring that clinicians adhere precisely to the recommendations. Rather, the documents are intended to provide preferred clinical interventions within which each practitioner can make individual treatment decisions suited to the patient or circumstances; accordingly, all practice parameters begin with a statement indicating that “recommendations may be adopted, modified, or rejected according to clinical needs and constraints and are not intended to be standards or absolute requirements, or to replace local institutional policies.”

Box 2. Supporting Organizations (Co-sponsorships, Endorsements, and Statements of Participation or Support)

1995  American Society for Gastrointestinal Endoscopy (Endorsement)19

2001  American College of Radiology (Endorsement)20

    American Association of Oral and Maxillofacial Surgeons (Endorsement)20

    American Society for Gastrointestinal Endoscopy (Endorsement)20

    North American Neuro-Ophthalmology Society (Endorsement)21

    North American Spine Society (Endorsement)21

2004  American Heart Association (Endorsement)22

    Society of Cardiovascular Anesthesiologists (Endorsement)22

2005  American Academy of Sleep Medicine (Endorsement)23

    American Academy of Otolaryngology-Head and Neck Surgery (Endorsement)23

    American Academy of Pediatrics (Affirmation of Value)23

    North American Neuro-Ophthalmology Society (Endorsement)24

    North American Spine Society (Statement of Support)24

2007  American Academy of Otolaryngology–Head and Neck Surgery (Endorsement)25

2009  American Society of Regional Anesthesia and Pain Medicine (Co-sponsor)26

2011  Society of Cardiovascular Anesthesiologists (Endorsement)27

    Society of Critical Care Anesthesiologists (Endorsement)27

    Society of Pediatric Anesthesia (Endorsement)27

2014  Society of Cardiovascular Anesthesia (Endorsement)28

    Society for Obstetric Anesthesia and Perinatology (Endorsement)28

    Society of Critical Care Anesthesiologists (Endorsement)28

2017  American Association of Oral and Maxillofacial Surgeons (Co-sponsor)29

    American College of Radiology (Co-sponsor)29

    American Dental Association (Co-sponsor)29

    American Society of Dentist Anesthesiologists (Co-sponsor)29

    Society of Interventional Radiology (Co-sponsor)29

2018  North American Neuro-Ophthalmology Society (Co-sponsor)30

    Society for Neuroscience in Anesthesiology and Critical Care (Co-sponsor)30

    American Association of Neurological Surgeons/Congress of Neurological Surgeons Joint Section on Disorders of the Spine and Peripheral Nerves (Affirms the educational benefit of the document)30

    North American Spine Society (Contributor)30

The primary goal of the ASA practice parameter is to use rigorous and robust research techniques in the evaluation of existing evidence in the medical literature and clinical practice as a means to identify, disseminate, and implement best clinical practices. Physicians typically do not have the time or resources to perform exhaustive systematic literature searches and meta-analyses to remain contemporary with new evidence and changes in technology. ASA practice parameters serve to integrate new evidence and changing technology with clinical experience to provide explicit guidance on best practices to the clinician.

The contributions made to clinical practice by ASA practice parameters have been substantial since the first publications in 1992; the ASA has continued to develop ever-improving practice parameters, producing 15 new practice guidelines, 8 practice advisories, and 29 updates or revisions. Practice parameters are generally scheduled to be updated every 5 yr. An “update” consists of adding new literature that does not contain new findings; in this case, recommendations from the previous practice parameter remain unchanged. When new or different evidence is found, or if a new intervention is added, a revision is required. The methodology and process for a revision is identical to that of a new practice parameter.

As ASA practice parameters continue to evolve, clinicians, methodologists, and other professionals involved in the development of these documents regularly seek improvements, both in the efficiency of the process (e.g., improved search methods, improved software for literature reviews and analysis, and more efficient communication procedures) and in member participation. We anticipate greater incorporation of perspectives from patients, as well as relevant public and professional medical organizations. As large perioperative databases (e.g., Multicenter Perioperative Outcomes Group, National Anesthesia Clinical Outcomes Registry) continue to evolve, we hope to be able to query those databases to provide highly specific experiential data that may be incorporated as a new form of clinical evidence.

New practice parameters are being considered that will expand our knowledge and focus in areas of practice such as deep sedation, residual neuromuscular blocking drug-induced muscle weakness, intraoperative mechanical ventilation monitoring, and geriatric anesthesia. Consideration is also being given to using practice parameters to provide guidance for developing new performance measures and to focusing on less comprehensive practice parameters (i.e., a few interventions instead of a broad-based approach). In the future, ASA practice parameters will continue to offer clear and efficient guidance for the implementation of quality healthcare services by anesthesiologists and all healthcare professionals. (See box 3 for more information on the ASA process.)

Box 3. Where to Find More Information on This Topic
  • Apfelbaum JL, Connis RT: American Society of Anesthesiologists evidence-based practice parameters, Faust’s Anesthesiology Review, 5th edition, Chapter 226. Philadelphia, Elsevier Saunders, 2020 (in press)

  • Apfelbaum JL, Connis RT, Nickinovich DG. 2012 Emery A. Rovenstine Memorial Lecture: The genesis, development, and future of the American Society of Anesthesiologists evidence-based practice parameters. Anesthesiology. 2013; 118:767–8

  • Anesthesiology website: http://anesthesiology.pubs.asahq.org/practice.aspx

  • Connis RT, Nickinovich DG, Caplan RA, Apfelbaum JL: Evaluation and classification of evidence for the ASA clinical practice guidelines, Miller’s Anesthesia 8th edition. Edited by Miller RD. Philadelphia, Elsevier Saunders, 2015, pp 3257–70

  • Connis RT, Nickinovich DG, Caplan RA, Arens JF: The development of evidence-based clinical practice guidelines: Integrating medical science and practice. International Journal of Technology Assessment in Health Care 2000; 16(4):1003–12

  • Connis RT, Caplan RA, Nickinovich DG: Evaluating the quality of anesthesia literature for the development of evidence-based clinical practice guidelines. ISA Today 1998; 31(1):11–3

  • Domino K, London MJ, Tung A: While imperfect, anesthesia guidelines help busy clinicians. The Operating Theatre Journal, July 27, 2017

  • Nickinovich DG, Connis RT, Caplan RA, Arens JF, Apfelbaum JL: Evidence-based practice parameters – The approach of American Society of Anesthesiologists, Evidence-Based Practice of Anesthesiology 3rd edition. Edited by Fleisher LA. Philadelphia, Elsevier Saunders, 2013, pp 2–6

  • Nickinovich DG, Connis RT, Caplan RA, Arens JF: Introduction: guidelines and advisory development. Anesthesiology Clinics of North America 2004; 22:1–12

The authors acknowledge the support of the American Society of Anesthesiologists (Schaumburg, Illinois) and the participation of the hundreds of physician volunteers who have provided encouragement and input to the practice parameter development process.

Support was provided solely from institutional and/or departmental sources.

The authors declare no competing interests.

References

1. Institute of Medicine, Committee on Quality of Health Care in America: To Err Is Human: Building a Safer Health System. Edited by Kohn LT, Corrigan JM, Donaldson MS. Washington, DC, The National Academies Press, 2000, pp 34

2. Apfelbaum JL, Connis RT, Nickinovich DG: 2012 Emery A. Rovenstine Memorial Lecture: The genesis, development, and future of the American Society of Anesthesiologists evidence-based practice parameters. Anesthesiology 2013; 118:767–8

3. American Society of Anesthesiologists: Policy statement on practice parameters. ASA Standards, Guidelines and Statements, American Society of Anesthesiologists. Last amended: October 16, 2013 (original approval: October 17, 2007). Available at: http://www.asahq.org/quality-and-practice-management/standards-guidelines-and-related-resources/policy-statement-on-practice-parameters. Accessed September 14, 2018

4. Epstein BS: The American Society of Anesthesiologist’s efforts in developing guidelines for sedation and analgesia for nonanesthesiologists: The 40th Rovenstine Lecture. Anesthesiology 2003; 98:1261–8

5. Institute of Medicine, Committee to Advise the Public Health Service on Clinical Practice Guidelines: Clinical Practice Guidelines: Directions for a New Program. Edited by Field MJ, Lohr KN. Washington, DC, The National Academies Press, 1990, pp 2–5

6. Caplan RA, Benumof JL, Berry FA, Blitt CD, Bode RH, Cheney FW, Connis RT, Guidry OF, Ovassapian A: Practice guidelines for management of the difficult airway. Anesthesiology 1993; 78:597–602

7. Roizen MF, Berger DL, Gabel RA, Gerson J, Mark JB, Parks RI Jr, Paulus DA, Smith JS, Woolf SH: Practice guidelines for pulmonary artery catheterization: A report by the American Society of Anesthesiologists Task Force on Pulmonary Artery Catheterization. Anesthesiology 1993; 78:380–9

8. Caplan RA, Benumof JL, Berry FA, Blitt CD, Bode RH, Cheney FW, Connis RT, Guidry OF, Nickinovich DG, Ovassapian A: Practice guidelines for management of the difficult airway: An updated report. Anesthesiology 2003; 98:1269–77

9. Nickinovich DG, Connis RT, Caplan RA, Arens JF, Apfelbaum JL: Evidence-based practice parameters: The approach of the American Society of Anesthesiologists, in Evidence-Based Practice of Anesthesiology, 3rd edition. Edited by Fleisher LA. Philadelphia, PA, Elsevier Saunders, 2013, pp 2–6

10. Connis RT, Nickinovich DG, Caplan RA, Apfelbaum JL: Evaluation and classification of evidence for the ASA clinical practice guidelines, in Miller’s Anesthesia, 8th edition. Edited by Miller RD. Philadelphia, PA, Elsevier Saunders, 2015, pp 3257–70

11. Burgers JS, Cluzeau FA, Hanna SE, Hunt C, Grol R: Characteristics of high-quality guidelines: Evaluation of 86 clinical guidelines developed in ten European countries and Canada. Int J Technol Assess Health Care 2003; 19:148–57

12. Kavanagh BP: The GRADE system for rating clinical guidelines. PLoS Med 2009; 6:e1000094

13. Viswanathan M, Berkman ND, Dryden DM, Hartling L: Assessing risk of bias and confounding in observational studies of interventions or exposures: Further development of the RTI Item Bank, in Methods Research Report. Rockville, MD, Agency for Healthcare Research and Quality, 2013, pp 1–5

14. Connis RT, Evans RL, Hendricks RD: Meta-analysis with longitudinal studies: Controlling for analytical bias. Psychol Rep 1996; 79:1383–6

15. MacLehose RR, Reeves BC, Harvey IM, Sheldon TA, Russell IT, Black AM: A systematic review of comparisons of effect sizes derived from randomised and non-randomised studies. Health Technol Assess 2000; 4:1–154

16. Schulz KF, Chalmers I, Hayes RJ, Altman DG: Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995; 273:408–12

17. Rosenfeld RM, Shiffman RN, Robertson P: Clinical practice guideline development manual, 3rd edition: A quality-driven approach for translating evidence into action. Otolaryngol Head Neck Surg 2013; 148:S1–55

18. Institute of Medicine of the National Academies: Clinical Practice Guidelines We Can Trust: Standards for Developing Trustworthy Clinical Practice Guidelines (CPGs). Washington, DC, The National Academies Press, 2011, pp 1–2

19. Gross JB, Bailey PL, Caplan RA, Connis RT, Coté CJ, Davis FG, Epstein BS, Kapur PA, Zerwas JM, Zuccaro G Jr: Practice guidelines for sedation and analgesia by non-anesthesiologists. Anesthesiology 1996; 84:459–71

20. Gross JB, Bailey PL, Caplan RA, Connis RT, Coté CJ, Davis FG, Epstein BS, Kapur PA, Zerwas JM, Zuccaro G Jr: Practice guidelines for sedation and analgesia by non-anesthesiologists: An updated report by the American Society of Anesthesiologists Task Force on Sedation and Analgesia by Non-Anesthesiologists. Anesthesiology 2002; 96:1004–17

21. Pasternak LB, Arens JF, Caplan RA, Connis RT, Fleisher LA, Flowerdew R, Gold BS, Mayhew RF, Nickinovich DG, Rice LJ, Roizen MF, Twersky RS: Practice guidelines for preanesthesia evaluation: A report by the American Society of Anesthesiologists Task Force on Preanesthesia Evaluation. Anesthesiology 2002; 96:485–96

22. Zaidan JR, Atlee JL, Belott P, Briesacher KS, Connis RT, Gallagher JD, Hayes D, Hershey JE, Kay N, Nickinovich DG, Rozner MA, Trankina MF: Practice advisory for the perioperative management of patients with cardiac rhythm management devices: Pacemakers and implantable cardioverter-defibrillators. Anesthesiology 2005; 103:186–98

23. Gross JB, Bachenberg KL, Benumof JL, Caplan RA, Connis RT, Coté CJ, Nickinovich DG, Prachand V, Ward DS, Weaver EM, Ydens L, Yu S: Practice guidelines for the perioperative management of patients with obstructive sleep apnea. Anesthesiology 2006; 104:1081–93

24. Warner MA, Arens JF, Connis RT, Domino KB, Lee LA, Miller N, Mirza S, Newman N, Nickinovich DG, Roth S, Savino P, Weinstein P: Practice advisory for perioperative visual loss associated with spine surgery: A report by the American Society of Anesthesiologists Task Force on Perioperative Blindness. Anesthesiology 2006; 104:1319–28

25. Caplan RA, Barker SJ, Connis RT, Cowles C, de Richemond AL, Ehrenwerth J, Nickinovich DG, Pritchard D, Roberson D, Wolf GL: Practice advisory for the prevention and management of operating room fires: A report by the American Society of Anesthesiologists Task Force on Operating Room Fires. Anesthesiology 2008; 108:786–801

26. Rosenquist RW, Benzon HT, Connis RT, De Leon-Casasola OA, Glass DD, Korevaar WC, Mekhail NA, Merrill DG, Nickinovich DG, Rathmell JP, Sang CN, Simon DL: Practice guidelines for chronic pain management: An updated report by the American Society of Anesthesiologists Task Force on Chronic Pain Management and the American Society of Regional Anesthesia and Pain Medicine. Anesthesiology 2010; 112:810–33

27. Apfelbaum JL, Blitt C, Caplan RA, Connis RT, Domino KB, Fleisher LA, Grant S, Mark JB, Morray JP, Nickinovich DG, Tung A: Practice guidelines for central venous access: A report by the American Society of Anesthesiologists Task Force on Central Venous Access. Anesthesiology 2012; 116:539–73

28. Apfelbaum JL, Nuttall GA, Connis RT, Harrison CR, Miller RD, Nickinovich DG, Nussmeier NA, Rosenberg AD, Shore-Lesserson L, Sullivan JT: Practice guidelines for perioperative blood management: An updated report by the American Society of Anesthesiologists Task Force on Perioperative Blood Management. Anesthesiology 2015; 122:241–75

29. Apfelbaum JL, Gross JB, Connis RT, Agarkar M, Arnold DE, Coté CJ, Dutton R, Madias C, Nickinovich DG, Schwartz PF, Tom JW, Towbin R, Tung A; RTI International–University of North Carolina at Chapel Hill Evidence-based Practice Center: Practice guidelines for moderate procedural sedation and analgesia 2018: A report by the American Society of Anesthesiologists Task Force on Moderate Procedural Sedation and Analgesia, the American Association of Oral and Maxillofacial Surgeons, American College of Radiology, American Dental Association, American Society of Dentist Anesthesiologists, and Society of Interventional Radiology. Anesthesiology 2018; 128:437–79

30. Apfelbaum JL, Roth S, Rubin D, Connis RT, Agarkar M, Arnold PM, Dhall SS, Domino KB, Hoh DJ, Hwang SW, Lee AG, Lee LA, Miller NR, Newman NJ, Savino PJ, Todd MM: Practice advisory for perioperative visual loss associated with spine surgery 2019: An updated report by the American Society of Anesthesiologists Task Force on Perioperative Visual Loss, the North American Neuro-Ophthalmology Society, and the Society for Neuroscience in Anesthesiology and Critical Care. Anesthesiology 2019; 130:12–30

31. Domino K, London MJ, Tung A: While imperfect, anesthesia guidelines help busy clinicians. Anesthesiology News

32. Apfelbaum JL, Horlocker TT, Agarkar M, Connis RT, Hebl JR, Nickinovich DG, Palmer CM, Rathmell JP, Rosenquist RW, Wu CL: Practice advisory for the prevention, diagnosis, and management of infectious complications associated with neuraxial techniques: An updated report by the American Society of Anesthesiologists Task Force on Infectious Complications Associated with Neuraxial Techniques and the American Society of Regional Anesthesia and Pain Medicine. Anesthesiology 2017; 126:1585–601

33. Apfelbaum JL, Hawkins JL, Agarkar M, Bucklin BA, Connis RT, Gambling DR, Mhyre J, Nickinovich DG, Sherman H, Tsen LC, Yaghmour EA: Practice guidelines for obstetric anesthesia: An updated report by the American Society of Anesthesiologists Task Force on Obstetric Anesthesia and the Society for Obstetric Anesthesia and Perinatology. Anesthesiology 2016; 124:270–300