THE events of the past 2 yr since the initial controversial publication by Mangano et al.1 regarding the effect of aprotinin on renal failure and mortality have been of great interest to the anesthesiology community. In this issue of Anesthesiology, Dietrich et al.2 test the hypothesis that any potential nephrotoxic effect of aprotinin (Bayer Healthcare Inc., Toronto, Ontario, Canada) is dose dependent. This hypothesis is rooted in the original article by Mangano et al., who reported an increase in risk of the composite renal outcome from 7% to 18% (P < 0.001) when comparing low- and high-dose aprotinin. Dietrich et al. were unable to replicate this finding in either univariate or multivariate analysis (odds ratio, 0.98; 95% confidence interval, 0.90–1.07) in an adequately powered and statistically robust study, using renal outcome measures identical to those of Mangano et al. Importantly, three reality checks of the results are reassuring: the overall incidence of the composite outcome in aprotinin-treated patients (8.2%) is similar to that reported by Mangano et al. (8%); the multivariable model reported by Dietrich et al. contains clinically sensible variables in overall concordance with those reported by Mangano et al.; and finally, analysis of the effect of aprotinin dose on each of the individual components of the composite renal outcome shows a lack of effect of aprotinin.
No matter which side of the debate one takes, there is little doubt that much of the discussion has been fed by a lack of information. The report by Mangano et al. was questioned for its lack of detail and for conflict with prior publications from the Multicenter Study of Perioperative Ischemia group.3 The furor was further fueled by a notable lapse in judgment by Bayer: at the US Food and Drug Administration (FDA) public meeting on aprotinin on September 21, 2006, Bayer officials failed to disclose preliminary findings of a large observational cohort being examined, at Bayer's request, by faculty of the Harvard School of Public Health (Boston, Massachusetts), findings that supported those of Mangano et al.* These preliminary findings were later repudiated.†‡
So, we are left with a conundrum. Why do two studies, performed by respected and statistically savvy researchers using similar surgical populations, show diametrically opposing results? Subtle but important differences in study design and definitions may contribute to this discrepancy. Alternatively, access to the raw data in both data sets would allow a more complete analysis and perhaps resolve the discrepancy. One merely has to look at the Web site of the National Center for Biotechnology Information§ to grasp the ready availability and power of such information. The preeminent example of the dissemination of information is the National Institutes of Health Database of Genotypes and Phenotypes,∥ where complete genotyping of more than 32,000 individuals is available to accredited researchers. Secure methods for data deposition and distribution that “demonstrate a new commitment to shared scientific knowledge that should facilitate rapid advances” are logistically feasible and imperative.4 Quite simply, it is time that journals encourage the public availability of source data as a prerequisite for publishing human drug studies.
It is also time that this obligation be extended to the drug approval process and to the data provided by pharmaceutical companies to the FDA. The FDA does not require full disclosure of all information that comprises a New Drug Application. It is time that data from every patient reported to the FDA in a New Drug Application be made available to the research community. Arguments against such action that invoke proprietary information and patient confidentiality can be countered by careful review of which data should be released and by the benefit of such release to the public. In the heyday of support for faster drug approval in the early 1990s, Congress passed the Prescription Drug User Fee Act, streamlining drug approval by increasing the FDA fees collected from drug companies. During this process, Congress barred the FDA from applying user fees to support postapproval drug safety monitoring, a profound error that was not reversed until 2002. Since then, withdrawals of previously approved drugs, notably valdecoxib, nefazodone, and rofecoxib, and “black box” warnings for rosiglitazone, celecoxib, depot medroxyprogesterone, warfarin, omalizumab, and aprotinin have typified the FDA's improved ability to perform postapproval monitoring of drug safety. Importantly, some of these actions would not have occurred without the sentinel findings of dedicated researchers working outside the FDA process. The FDA's improved ability is strengthened by the congressionally requested report of the Institute of Medicine calling for increased regulatory power, funding, and independence for the FDA.5 Some of these recommendations were implemented in the reauthorization of the Prescription Drug User Fee Act (S. 1082), which passed the House and Senate on September 21, 2007, further enhancing the FDA's regulatory powers. However, it is time that such powers be matched by increased responsibility and effort, including the public availability of raw data.
Department of Anesthesiology, Perioperative and Pain Medicine, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts. firstname.lastname@example.org