"Where is the knowledge we have lost in information?"-T. S. Eliot
LEGISLATORS and public policy officials have long stated that the rise in health-care costs must be controlled, but not at the expense of quality of care. Recent reform efforts have reduced health-care costs, but many questions remain regarding the quality of care that accompanies this cost reduction. Although the reform proposals differ, one consensus has emerged: the need to establish reliable, reproducible measures of health-care performance. To that end, the concept of health-care report cards has emerged. In its current use, a health-care report card refers to any effort that quantifies or qualifies specific indicators or criteria to establish comparative estimations of the value of health care. These indicators take the form of, for example, time to hospital discharge, cost of services, mortality rates, delivery of preventive services, and patient satisfaction. Key to understanding the role of health-care report cards is an appreciation of their comparative nature. Unlike outcome studies, which report the incidence of certain events, health-care report cards assign the equivalent of a grade to each area being studied, allowing comparisons to be made relative to a standard, to peers, or to competitors. In the last 8 yr, the use of report cards has dramatically changed the pace and process of health-care reform and affected providers, insurers, regulators, and patients. The publication of report cards that purport to reflect the quality of health-care provider performance is rapidly becoming more common and more controversial. [2,3] There are those who believe that interest in providing such comparative reports is waning; however, accountability for the value of health-care services is increasingly being advocated.* Knowledge of the historical development of health-care report cards provides a framework for understanding their significance and limitations in the future of health-care delivery.
Although the concept of health-care report cards is seemingly new, its foundations can be seen in the Hammurabi Code of Babylon, the teachings of Hippocrates and Galen, and the charter of such guilds as the Royal College of Physicians. Each one speaks of a need to provide public accountability through professional standards. In the early 1900s, Ernest Codman advocated that patient report cards be kept to assess the efficacy of different treatment regimens. He further suggested that physician-specific data be made available to both the public and hospitals. Codman envisioned a medical climate in which patients could use that information to choose their personal physician, and hospitals could use it to determine staff positions and privileges. By 1990, the National Academy of Sciences and the Institute of Medicine report stated that health care was riddled by the “expensive overuse of invasive technology, underuse of inexpensive ‘caring services’ and the error-prone implementation of care that harms patients and wastes money.” The report advocated the implementation of programs to improve the quality of the decision-making process and to increase cost efficiency, thereby providing accountability to health industry participants for the value of medical services provided.
The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) has already instituted such an effort to measure performance, a forerunner that provides a framework for current health-care report cards. In 1987, the JCAHO established its Agenda for Change and released its Comprehensive Accreditation Manual for Hospitals.** This manual emphasized performance results, not the capacity to perform. The means for achieving improvement was through “outcome-based flags,” or indicators, which heralded problems in particular areas of patient care. In 1988, the JCAHO embarked on an effort to create a national indicator-based performance measurement system. Standards for anesthesiology (Table 1), obstetrics, cardiology, and oncology were developed and are currently being implemented voluntarily. The JCAHO set 1996 as the date when participation in the performance indicator project would be mandatory for all accredited hospitals but has since revoked this decision. Instead, a gradual phase-in of specified indicators as a portion of and contingency for the JCAHO accreditation and review process is being implemented.
Report of the General Accounting Office
Because of questions and concerns over the generation of health-care report cards, the Department of Health and Human Services commissioned the General Accounting Office to study the implications, benefits, and limitations of the use of health-care report cards to better understand the veracity and comparability of the information and to define the ongoing role of such performance data.*** In 1994, the first and the most comprehensive attempt to systematically evaluate health-care report cards was released. This initial report enumerated several existing problems, stating that although “experts believe measures comparing health plan performance should be published, inaccurate and misleading data sources as well as the lack of agreed-upon indicators and formulae for calculating results might hinder the report cards' utility.”***
The General Accounting Office study further reported that in many instances report cards did not measure what they purported to measure. Dissemination of this information to a public uneducated in the intricacies of confounding factors and methods for risk adjustment could prove misleading.**** For example, despite the inability of research to demonstrate conclusively that, in nonanesthetic settings, board-certified physicians deliver better quality of care than non-board-certified physicians, the percentage of board-certified physicians has nevertheless been used as one measure of the quality of a health plan. In five of the seven studies reviewed by the Office of Technology Assessment, physician board certification had little effect on clinical performance.***** This is in contrast to findings that demonstrate improved outcomes correlating with board certification of physician anesthesia providers. The General Accounting Office study suggested that, to maintain their accuracy and validity, report cards should be reviewed and verified by an external source. The General Accounting Office also found that report cards were being developed in a vacuum of cost. Without a demonstrated benefit (i.e., improved care at similar or lower cost), there could be little justification of their continued use.
Limitations of Health-care Report Cards
Although performance assessment efforts by state and federal agencies and private organizations have been underway for decades, historically, the results of these assessments have been kept confidential, except to those few individuals or organizations intimately involved in their development. There is great disagreement as to the type and amount of information that should be published. The four primary concerns invoked in this debate revolve around: (1) the quality of data sources, (2) risk stratification, (3) outcome industry “standards,” and (4) data and report standardization.
The perception exists that data sources are inaccurate and that there are no well-defined measures of quality of care. Information used to evaluate performance is generally obtained from computerized administrative databases or medical records. Such sources often contain incorrect, misleading, or incomplete data. The administrative databases were originally designed to facilitate accurate and timely payment to providers, with the added benefit of compiling demographic and clinical data related to patient encounters. These databases, however, contain many coding errors in diagnoses, conditions, procedures, and treatments. The use of medical records may provide a good source of clinical information, but the cost associated with retrieval, the subjectivity of interpretation of test results, and the potential for overstatement of medical record notations to justify a hospital stay (or to satisfy insurance requirements) also limit their utility.
Although a number of risk adjustment systems are in use, there exists little proof that these systems are valid or reliable. Risk adjustment using regression models is a statistical method of “leveling the playing field” to account for baseline differences so that the influence of care can be quantified. Such methodologies must be developed to allow impartial comparisons of performance indicators, and risk stratification must be incorporated into ongoing studies of clinical outcome. Unlike the automotive or computer industries, which have specifications for quality production of services, the health-care industry must deal with more uncertainty and individuality in the form of patients and their diseases. Because it is difficult to separate anesthetic complications and outcome from surgical outcomes and patient disease, there are no “industry standards” for anesthetic care. As a result, report cards that can currently be published are a reflection of individual institutions and vary widely. Until clearly defined outcomes and standards are set, performance reports will have minimal comparative utility and validity.
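The idea of regression-based risk adjustment can be illustrated with a minimal sketch. The logistic-model coefficients and risk-factor names below are entirely hypothetical (real systems fit coefficients to large clinical datasets); the sketch shows only the mechanics of summing expected mortality across a case mix and comparing it with observed deaths.

```python
import math

# Hypothetical coefficients for illustration only; a real risk adjustment
# system estimates these by regression on large clinical databases.
INTERCEPT = -4.0
COEFS = {"age_over_70": 0.9, "renal_failure": 1.1, "low_ejection_fraction": 0.8}

def expected_mortality(risk_factors):
    """Predicted probability of death for one patient under the logistic model."""
    logit = INTERCEPT + sum(COEFS[f] for f in risk_factors)
    return 1.0 / (1.0 + math.exp(-logit))

def risk_adjusted_ratio(patients, observed_deaths):
    """Observed-to-expected (O/E) mortality ratio. Values below 1 suggest
    better-than-expected performance for that particular case mix."""
    expected = sum(expected_mortality(p) for p in patients)
    return observed_deaths / expected

# A small hypothetical case mix: each entry lists a patient's risk factors.
patients = [[], ["age_over_70"], ["age_over_70", "renal_failure"],
            ["low_ejection_fraction"]]
print(round(risk_adjusted_ratio(patients, observed_deaths=0), 2))  # → 0.0
```

Because the expected count reflects each hospital's own patient mix, two institutions with very different raw mortality rates can be compared on the same O/E scale, which is the "leveled playing field" the text describes.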
Current Report Cards
All currently available health-care report cards purport to provide accountability to consumers interested in finding the best value in medical care. The scope and accuracy of existing report cards vary considerably, however.*** The first national public disclosure of quality assessment information was made in 1987, when the Healthcare Financing Administration released its report on the observed and expected mortality rates in each hospital that provided coronary artery bypass graft (CABG) surgery to patients with Medicare. It defined clinical outcome by mortality rates and examined the mortality patterns of patients with Medicare in United States hospitals. The report included a list of Medicare-participating hospitals whose in-house mortality rate was either significantly higher or lower than the “expected” rate (which was calculated by adjusting national mortality figures for the prevalence of each of 89 variables in the hospitals). The report was hailed by consumer interest groups (as a useful tool in health-care purchasing decisions) and reviled by many health-care providers (as inadequate and misleading). Its publication was later suspended to prevent its misuse in consumer publications.
Currently, the federal government is funding efforts that delineate benchmarks for performance comparisons. Benchmarks can be defined as the best industry practices or processes (e.g., clinical, administrative) that result in optimal outcome. The Agency for Healthcare Policy and Research investigates the outcomes of health-care services via Patient Outcome Research Teams; their funding, for example, supports the tracking of events such as strokes, pneumonia, cataracts, back pain, and myocardial infarctions. Although in their nascent stage, by the very nature of their large-scale epidemiologic foundation and widespread applicability, the results of the Patient Outcome Research Teams are being used as a national standard for clinical performance. The prototype and, arguably, best-established report cards, including those released by the National Committee for Quality Assurance (NCQA), the New York State Cardiac Advisory Committee, and the Pennsylvania Healthcare Cost Containment Council (PHC4), deserve further analysis.****** [11,12]
The National Committee for Quality Assurance
The NCQA was established to develop comparative reports documenting the performance of managed care organizations to determine the value of the health services provided. A nonprofit, watchdog organization, the NCQA represents the interests of purchasers (i.e., employers), consumers, health plan executives, and health-care organizations, not those of physician groups. It is currently recognized as the leader in the effort to track, measure, and report the value of services provided by the nation's managed care organizations. The NCQA was established in 1979 by the Group Health Association of America and the American Managed Care and Review Association and became an independent organization in 1990. It has grown rapidly since 1990 and now surveys and accredits nearly one half of the nation's > 600 health maintenance organizations. The organization's stated goals are threefold: (1) to find the best value for employers/purchasers, (2) to help identify gaps between expected and actual performance, and (3) to provide information on access, quality, and outcome in a public forum.******* In 1991, the NCQA released the first Health Plan Employer Data and Information Set (HEDIS), a tool for comparative measures of performance. This was followed by the release of HEDIS 2.0 in 1993, HEDIS 2.5 in 1995, and Medicaid HEDIS in 1996. HEDIS 3.0 will include comparative information on the cost of specific services.
HEDIS reports measure > 60 standardized indicators of access, financial stability, membership, and patient satisfaction in six categories: quality management and improvement efforts, physician credentials, members' rights and responsibilities, preventive health services, utilization management, and medical records. After voluntary review of a health-care organization, the NCQA assigns a level of accreditation corresponding to the organization's rating. Full 3-yr, 1-yr, or provisional accreditation is granted to health plans that meet specified NCQA standards. Accreditation status is then made available to health-care purchasers for use in health-care decision-making.
The NCQA release of HEDIS 2.0 in November 1993 came at a time when performance measurement was gaining momentum as a mechanism for conveying plan performance information to consumers, purchasers, and policy makers. Although the HEDIS format rapidly became an industry standard for evaluating individual plan performance and continuous quality improvement efforts, limitations to its use became increasingly apparent. Therefore, the NCQA developed a list of areas for improvement, including data collection standardization, risk stratification, and data accuracy.*******
The State of New York Cardiac Advisory Committee
In 1989, the Department of Health of the state of New York began an effort to reduce mortality rates associated with CABG surgery through its Cardiac Advisory Committee. A prospective database was developed that included information on patient demographics, risk factors, and complications, categorized by hospital and surgeon. A health-care report card included data on risk-adjusted mortality and was provided regularly to individual hospitals and cardiac surgery training programs to evaluate their comparative levels of performance. In 1990, the Department of Health made public the 1989 data on expected and risk-adjusted mortality rates and the volume of CABG procedures at each hospital, with the intention of releasing comparable data each subsequent year. Litigation brought against the Department of Health by the newspaper Long Island Newsday under the state's Freedom of Information law, however, forced the department to disclose both hospital-specific and surgeon-specific data that had been collected concomitantly. In December 1990, Long Island Newsday published this surgeon-specific information. As a result, the Cardiac Advisory Committee recommended that hospitals submit data to the Department of Health in a fashion that would make it impossible to identify specific physicians. Previous research had documented an inverse relationship between a surgeon's volume of CABG procedures and the operative mortality associated with such surgery. Intensive discussions between the Cardiac Advisory Committee and representatives of health-care providers led to a compromise in which operative mortality data were compiled only on surgeons who had performed an average of 200 or more cardiac operations per year in a single hospital over a 3-yr period. The Cardiac Advisory Committee's efforts are currently in their eighth year of publication.
The release to the media of these hospital- and surgeon-specific data in 1990 emphasized numerical ranking of hospital performance and led to the suspension of privileges or replacement of some surgeons by the hospital administration. One hospital even discontinued its CABG program while it implemented changes intended to improve the institution's numerical rankings. The news articles failed to highlight the program's goal, which was to promote efforts to improve quality of care, and did not mention that differences in the numerical rankings had neither statistical nor clinical significance.********
The risks of presenting information to a public uneducated in its uses were obvious. Concerns were raised by health-care providers that patients might overreact to such reports and avoid both hospitals and physicians with reputed high mortality rates. Questions were raised regarding whether some providers, in an effort to improve their numerical rankings, would avoid high-risk patients to change their practice profile. Ongoing efforts by the Cardiac Advisory Committee to ascertain the significance of these concerns have shown otherwise, however. Independent auditing of the data collection process found inconsistencies twice, in 1992 and 1993; some hospitals were found to have assigned patient risk factors that were unsubstantiated by the medical records. This miscoding resulted in patients who, on paper, appeared sicker than they were in reality, thus falsely elevating expected mortality rates, which then compared favorably with actual mortality rates. In addition, the process of refining risk factor code definitions led to difficulties in interpretation of the available data. For example, of the 14 categories of risk factors, 5 became more prevalent between the years 1989 and 1991 because each of these risk factors (renal failure, chronic obstructive pulmonary disease, unstable angina, left ventricular ejection fraction < 40%, and congestive heart failure) underwent significant redefinition. Since 1991, however, the prevalence of each risk factor has remained stable with the exception of congestive heart failure, for which the definition continues to be refined.
Another concern, that hospitals might attempt to lower risk-adjusted mortality by avoiding high-risk patients, was also unfounded. Data showed that some hospitals that cared for the highest-risk patients had some of the lowest risk-adjusted mortality rates. In comparison, hospitals with high risk-adjusted death rates often cared for patients at lower than average risk. Because of the fear that hospitals might decrease their risk by referring sicker patients to other hospitals, some investigators studied the number and pattern of referral of high-risk patients from upper New York state to the Cleveland Clinic. This study, however, revealed that (1) changes in referral patterns did not correspond to the timing of the release of New York's performance data, and (2) the overall expected mortality of the Cleveland Clinic and the state of New York changed little from 1990 to 1993. More recently, others have attempted to determine whether reports of outcome studies actually caused the observed decrease in cardiac mortality. They found that Massachusetts, a state that does not have any performance measure initiatives, had decreases in cardiac mortality rates after CABG surgery comparable to those found in New York and northern New England. Further, mortality rates after CABG surgery decreased comparably throughout the United States. In the absence of more conclusive evidence of the benefit of performance reporting, the authors stated that direct evaluations are needed to better characterize the efficacy of the ongoing statewide programs.
The Pennsylvania Healthcare Cost Containment Council
On July 1, 1986, the Pennsylvania General Assembly unanimously passed Senate bill 293, which created a new independent state agency, the PHC4. The mission of the PHC4 was to promote cost containment by stimulating a competitive health-care market, resulting in the provision to group purchasers of consistent and accurate information about the cost and quality of care. A 21-member council composed of representatives from business, labor, insurance, consumers, government, and health-care providers was created to meet these mandates: (1) to collect data surrounding health-care cost and quality and to disseminate that information to the general public; (2) to study and formulate a plan to address the availability of health care to the uninsured; and (3) to review and recommend proposed legislation on the cost and medical effectiveness of additional health programs. To achieve these goals, the PHC4 requires that all health-care organizations and hospitals collect and disseminate specified data using Healthcare Financing Administration guidelines. The data are coded on the UB-82 (Uniform Billing) form, which includes demographic data, hospital charges, and diagnosis and procedure codes using the International Classification of Diseases, Ninth Revision, Clinical Modification specifications. These data are gathered at the hospital level using medical records and are then submitted quarterly to the council. The council also mandated the publication of provider information with risk-adjusted morbidity and mortality outcome measures that are easily understood by patients. Through contractual agreements, hospitals are required to use a commercial database (ATLAS Severity of Illness System; MediQual, Westborough, MA) to derive disease severity and morbidity information. This system classifies each patient's condition on admission and at specified times during the course of hospitalization using objective clinical findings, referred to as key clinical findings.
Abstraction of clinical data from the medical record is conducted by hospital personnel, and the resultant measures (the admission severity score and the morbidity score) are submitted to the council for each inpatient admission. [17,18] Using these data, the PHC4 has released > 80 public reports since its inception, including the Hospital Financial Report, the Small Area Analysis, the Hospital Utilization and Financial Summary, the Consumer Guide to Coronary Artery Bypass Surgery, and its first and flagship publication, the Hospital Effectiveness Report. [19–23] Each report uses standardized data to provide comparative estimates of value throughout the state; however, critical peer review of the council's methodology has yet to be undertaken. In a study of utility and acceptance, Schneider and Epstein randomly surveyed 50% of cardiovascular specialists in the state of Pennsylvania to determine whether they were aware of the Consumer Guide to Coronary Artery Bypass Surgery and, if so, to determine their views on its usefulness, limitations, and influence on providers. Less than 10% of physicians discussed the ratings with their patients, and < 2% of cardiologists believed that the guide had any “significant impact” on their referral practice. They concluded that the guide had limited credibility with cardiovascular specialists in Pennsylvania, had little influence on referral patterns, and possibly introduced a barrier to care for severely ill patients.
Implications for Departments of Anesthesia
Current reform began with efforts aimed at decreasing the overall cost of health care, with emphasis on providing services within a given budget. As premiums for services and the ability to decrease cost have leveled off, the spotlight of reform has shifted to quality of performance. It has therefore become important to understand how anesthesiologists define and quantify the value of services provided and how they can create anesthesia report cards. In view of the current trend of outside agencies (e.g., governmental, payor, credentialing) requesting performance measures from health-care organizations, it is likely that these requests will be made of individual anesthesia departments. In this context, it is increasingly important that institution-specific performance criteria relating to the practice of anesthesia be developed before such performance indicators are mandated by licensing boards, government agencies, regulatory bodies, and insurance companies. With so little information available and no known current efforts by outside groups to create anesthesia report cards, the anesthesiologist is well positioned to define the value of anesthetic services and, just as important, to define the performance indicators used to measure that value in the current setting of increased governmental regulation, decreased governmental spending, and increased free market competition. Such a proactive stance will allow a clear voice in the process, analogous to the endorsement of Basic Standards for Monitoring by the American Society of Anesthesiologists. These standards were developed by anesthesiologists interested in documenting the quality of anesthesia care and decreasing anesthetic-related morbidity and mortality.
In contrast to the cardiac surgery and hospital performance report cards, to our knowledge, there are currently no widely available anesthesia report cards. We hypothesized that each consumer group (e.g., patients, insurers) would require specific pieces of information and have therefore included in each report card only those elements necessary to meet that group's informational needs (Table 2). Standards against which to judge anesthesia performance are in many cases yet to be identified; therefore, currently available information must be used to establish comparative value. Until national benchmarks are established, it is our contention that locally (institutionally) developed standards of quality can be used to indicate areas of performance. The concomitant problems faced in the development of health-care report cards, however, including the quality of data sources, risk stratification, and data standardization, need to be addressed as performance measures are collected toward the goal of a national standard.
The system used by the Department of Anesthesiology at the Yale University School of Medicine (Figure 1) incorporates all facets of ongoing quality process assessments to develop a series of departmental report cards. The goal in creating this database was to track a large number of indicators simultaneously to satisfy all the potential requirements of different agencies. This was accomplished through compilation of clinical data from physicians and nurses on patients given anesthesia at all clinical locations and the ability to merge those data with educational, financial, and administrative databases. The internal clinical database was established to simultaneously track trends in a large number of specific performance indicators that occur during the administration of an anesthetic or in the immediate postanesthetic period. In addition to perioperative indicators, we have developed indicators to track performance in areas such as acute and chronic pain management. To accomplish this efficiently, a customized data entry form was developed to collect information from all clinical locations served by the department. Data are entered on the form by the anesthesia care team managing the patient in the perioperative period and by the nursing staff in the postanesthesia care and intensive care units. The form is then read by an electronic scanner, and data are downloaded into a database. The defined performance indicators include both medical and administrative events that affect outcome of care. The comprehensive nature of this composite database allows collection of patient demographics, intraoperative events, anesthetic technique, postanesthesia care unit and intensive care unit sentinel and rate events, and training level of individual anesthesia practitioners. Thus, the composite database allows for analysis of both departmental and individual anesthesia caregiver practice patterns.
Sentinel indicators are those measures that, by the nature of their clinical consequences, require individual review; rate indicators are those measures that require trend analysis and are reviewed on an ad hoc basis when deviation from established confidence intervals occurs. To determine patient satisfaction, every 35th patient receives a letter from the chair of the department requesting a 0–10 (Likert) rating on three specific areas of care: the preoperative visit, the postoperative visit, and the overall quality of care. Data are then coded and analyzed, and 95% confidence limits are developed based on internal departmental performance established during the previous 4 yr.
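The sampling and confidence-limit scheme described above can be sketched in a few lines. This is a minimal illustration, not the department's actual implementation: the function names are invented, and the use of a normal approximation around the mean of prior-period scores is an assumption.

```python
import math
import statistics

def every_nth(patients, n=35):
    """Systematic sample: every nth patient in arrival order receives
    the satisfaction survey (the text describes every 35th patient)."""
    return patients[n - 1::n]

def confidence_limits(historical_scores, z=1.96):
    """95% confidence limits (normal approximation) around the mean of
    prior-period satisfaction scores. Current performance outside this
    band would be flagged for review."""
    mean = statistics.mean(historical_scores)
    sem = statistics.stdev(historical_scores) / math.sqrt(len(historical_scores))
    return mean - z * sem, mean + z * sem

# Hypothetical 0-10 ratings from the previous survey period.
lo, hi = confidence_limits([8, 9, 7, 8, 9, 8, 7, 9])
```

For rate indicators, the same band would be recomputed from the previous 4 yr of departmental data, so that each indicator is judged against the institution's own historical performance rather than an external norm.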
Having thus defined the indicators that measure quality and the value of the services provided, the department is then able to generate specific reports geared toward the informational needs within the department, within the institution, for governmental agencies, and toward consumer groups. The content and the design of each performance report depends on the target audience, but each report communicates performance and performance trends over a period of time.
The goal in creating an anesthesia report card was twofold: (1) to establish a standard of anesthetic care, and (2) to make that information easily accessible (Figure 1). To that end, each report card has a uniform presentation. Ninety-five percent confidence limits for any of 150 performance indicators from the composite database were established to reflect the performance trends over time. To facilitate data comprehension, a graphic presentation of the data was developed. Statistically significant deviation from the 95% confidence limit is represented by a black bar. Deviation from the established confidence limit that is not statistically significant is represented by a white bar. Improved performance is represented to the left of the confidence limit bars, whereas performance that is worse compared with the previous 4 yr is represented to the right of the confidence limit bars. The total number of anesthetics delivered during the specified time frame covered by the report card is listed below the title. This allows for calculation of the exact number of cases involved in each performance category. Below the total number of cases, a caption lists the total number of performance indicators that are tracked, the number that show either increased or decreased incidence, and whether or not the changes are statistically significant. Each report uses internal trends as a comparative standard against which to compare present performance for each of the indicators of interest. The wide variety of clinical and administrative data collected and the ability to change the frequency of reporting are benefits that can readily satisfy informational needs.
For the insurance company or health maintenance organization interested in those clinical outcomes that have an economic effect, a report card is prepared that documents, for example, the rates of reintubation in the postanesthesia care unit or perioperative myocardial infarction (Figure 1). A major cardiac event may necessitate care in the intensive care unit and incur costs for invasive monitoring, laboratory services, further cardiac work-up, fixed room costs, professional services, or nursing services. Figure 1 details such an annual report card. The synthesized data in this hypothetical example represent the total number of cases during calendar year 1996 as n = 13,427. Ninety-five percent confidence limits are established for each of the indicators based on the departmental performance (i.e., incidence) over the previous 4 yr. For instance, the first indicator, major cardiac event, may have an incidence of 0.4% during 1996. The 95% confidence limits of 0.5–0.8% for the incidence of major cardiac events were established from the previous 4 yr (1991–1995). Although the incidence of 0.4% is improved compared with the performance during the previous 4 yr, it is not a statistically significant improvement. The third indicator, spinal headache, has an incidence of 0.6%. This change is statistically significantly different from departmental benchmarks, and as such is highlighted to underscore the change in performance.
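One way such a flag could be computed is a two-proportion z-test comparing the current-year incidence of an indicator with its prior 4-yr baseline. This is a sketch of one plausible approach, not the department's published method; the function names, the pooled-variance form of the test, and the example counts are all illustrative assumptions.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two incidence rates
    (current period vs. prior baseline), pooled-variance form."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def classify(x_cur, n_cur, x_hist, n_hist, z_crit=1.96):
    """Report-card style flag: the direction of change (the bar's side of
    the confidence band) plus whether the deviation is statistically
    significant at the 95% level (black bar vs. white bar)."""
    z = two_proportion_z(x_cur, n_cur, x_hist, n_hist)
    direction = "improved" if z < 0 else "worse"
    return direction, abs(z) > z_crit

# Hypothetical counts: 1 event in 1,000 current cases vs. 50 in 1,000
# baseline cases would be flagged as a significant improvement.
direction, significant = classify(1, 1000, 50, 1000)
```

A significant result in the "improved" direction would be drawn as a black bar to the left of the confidence limit bars; a nonsignificant deviation in either direction would appear as a white bar.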
Figure 2 illustrates a hypothetical report card directed toward the informational needs of an accrediting body. These performance indicators reflect the medical quality of the practice and incorporate the criteria put forth by the JCAHO performance indicator initiative. Although some of these indicators overlap with those in the report card created for health-care purchasers, the reasoning behind their inclusion is different. This report card reflects the quality of the care that is provided, not the economic effect of that care. As before, 95% confidence limits are established and displayed for each of the indicators.
Figure 3 demonstrates a prototype report card tailored to the concerns of consumers, including practice mode, provider certification, and patient satisfaction. The results of the patient satisfaction survey adhere to the confidence limit format to reflect a standard against which to compare current departmental performance. Comparisons for credentialing and practice mode are synthesized from data from the American Board of Anesthesiology, the American Association of Nurse Anesthetists, and the Abt Study. By using this composite data set, report cards can be generated for any group of indicators from the 150 that are tracked.
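The last point, that a report card can be generated for any group of indicators from the composite data set, amounts to a simple selection over the tracked indicators. A minimal sketch, with wholly hypothetical indicator names and values:

```python
# Hypothetical composite database: indicator -> (current incidence,
# prior 4-yr 95% confidence limits). All names and values are illustrative.
composite = {
    "major cardiac event":  (0.004, (0.005, 0.008)),
    "reintubation in PACU": (0.002, (0.001, 0.003)),
    "spinal headache":      (0.006, (0.002, 0.004)),
    "patient satisfaction": (0.940, (0.900, 0.960)),
}

def report_card(indicators):
    """Select a subset of tracked indicators and format one card."""
    lines = []
    for name in indicators:
        current, (lo, hi) = composite[name]
        flag = "" if lo <= current <= hi else "  *outside 95% limits*"
        lines.append(f"{name}: {current:.1%} (limits {lo:.1%}-{hi:.1%}){flag}")
    return "\n".join(lines)

# e.g., a payer-oriented card restricted to outcomes with direct cost impact:
print(report_card(["major cardiac event", "reintubation in PACU"]))
```

The same `report_card` call, fed a different indicator list, would yield the accreditor- or consumer-oriented cards of Figures 2 and 3.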
As health-care reform continues, it is increasingly necessary to define a level of comparative quality within the framework of cost-efficient use of resources. The use of quality assurance/improvement databases to document the value of anesthetic services begins to fill the void of comparative information. As the emphasis in health care increasingly turns toward comparative performance, an effort must be directed toward defining performance indicators, verifying and standardizing the collection and reporting of data, and demonstrating the benefits derived from the release of these performance data. There is strong concern that publishing inaccurate or incomplete report cards will destroy the public's confidence in the report card concept. With greater attention devoted to these issues, health-care report cards “will play a critical role in assessing plan performance and in establishing effective systems of accountability” to purchasers, public policy makers, consumers, and providers.*** Anesthetic performance measures must be developed to ensure that report cards reflect a broadened range and depth of clinical care. This will allow meaningful comparisons and give customers (managed care organizations, patients) the ability to choose the anesthesia group or health-care plan that best satisfies their needs. The advancement and documentation of performance must be viewed as an investment in the future of the field of anesthesiology as health-care reform restricts technologic advances, promotes alternative care providers, and constrains budgets already stretched to their limits.
*Raising Medicare standards. New York Times, December 29, 1996, Section 4, p 8.
**1987 Joint Commission Standards for Healthcare Organizations. Oakbrook Terrace, Joint Commission on Accreditation of Healthcare Organizations, 1987.
***General Accounting Office: ‘Report cards’ are useful but significant issues need to be addressed (publication no. GAO/HEHS-94–219). Washington, DC, United States Printing Office, September 1994.
****Mahar M: Time for a checkup. Barron's, March 4, 1996, pp 29–35.
*****US Congress, Office of Technology Assessment: The Quality of Medical Care: Information for Consumers (OTA-H-386). Washington, DC, United States Printing Office, June 1988.
******Coronary Artery Bypass Surgery in New York State: 1989–1991. Albany, New York State Department of Health, December 1992.
*******Zinman D: Ranking open heart surgery. Newsday, December 5, 1990; 4:33.