“A razor may be sharper than an ax, but it cannot cut wood.”
It would be an understatement to say that molecular biology has revolutionized medicine by increasing our understanding of the pathophysiologic mechanisms of disease and our ability to assess genetic risk. Since the completion of the Human Genome Project in 2003, we have witnessed the use of genomics for the rapid identification of newly discovered pathogens, such as the agent of the severe acute respiratory syndrome; the use of gene-expression profiling to assess cancer prognosis and guide therapy; the use of genotyping to stratify patients according to the risk of a disease, such as prolonged QT interval syndrome or myocardial infarction; the use of genotyping to increase our understanding of drug pharmacokinetics and pharmacodynamics; and the use of genetics for tissue engineering and the cloning of several different species.1,2 However, we have been slow to apply many of these novel, cutting-edge molecular techniques within our own discipline.
In this issue of Anesthesiology, Lucchinetti et al.3 use a GeneChip microarray (Affymetrix, Santa Clara, CA) to perform myocardial gene-expression profiling of patients receiving intravenous versus inhalational anesthetics during off-pump coronary artery bypass graft surgery. Microarray technology is a powerful and elegant tool for genetic research that uses nucleic acid hybridization techniques and recent advances in computing technology to evaluate the messenger RNA (mRNA) expression profile of thousands of genes within a single experiment. Labeled complementary DNA or complementary RNA targets derived from the mRNA of an experimental sample are hybridized to nucleic acid probes (i.e., gene fragments) attached to a solid support (i.e., a "chip"). By monitoring the amount of label associated with each DNA location, it is possible to infer the amount of each mRNA species represented. In addition to analyzing gene expression on a genome-wide scale, other important microarray applications include genomic resequencing, genotyping, genome-wide exon analysis, and transcript mapping.4 Moreover, microarray technology offers the unprecedented opportunity to measure gene expression in relation to physiologic and environmental factors, and has great potential for clinical and pharmacologic applications.4
Because microarray experiments involve the comparison of thousands of data points, the scientific community has grappled with identifying specific guidelines for the conduct, statistical analysis, and interpretation of microarray experiments, owing to the significant potential for false positives (i.e., type I error). To this end, the Microarray Gene Expression Data Society, an international organization of molecular biologists, computer scientists, and data analysts, developed standards known as the Minimum Information About a Microarray Experiment (MIAME), which outline the minimum information that should be reported about a microarray experiment to enable its unambiguous interpretation and reproduction.5 In addition to adhering to the MIAME guidelines, Lucchinetti et al. analyzed their microarray data using a highly sophisticated technique known as gene set enrichment analysis. Gene set enrichment analysis is a computational method that determines whether an a priori defined set of genes (as opposed to individual genes) shows statistically significant, concordant differences between two biologic states (e.g., phenotypes).6 Gene set enrichment analysis involves three steps: (1) calculation of an enrichment score, (2) estimation of the enrichment score significance level, and (3) adjustment for multiple hypothesis testing. This last step involves controlling the proportion of false positives by calculating the false discovery rate (FDR) corresponding to each normalized enrichment score.6 This is very important because gene set enrichment analysis involves the comparison of hundreds, if not thousands, of gene sets (547 gene sets were compared in the analysis by Lucchinetti et al.). Indeed, most major journals now require that the FDR for each gene set be determined and that gene sets with an FDR > 0.25 be excluded from the final analysis.
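For readers unfamiliar with the mechanics of the method, the first two steps above can be illustrated in miniature. The following Python sketch is purely didactic — it is not the implementation used by Lucchinetti et al., and the weighting exponent, permutation scheme, and gene names are illustrative assumptions. It computes a Kolmogorov–Smirnov-like running-sum enrichment score over a ranked gene list and estimates its significance by permutation:

```python
import random

def enrichment_score(ranked_genes, gene_set, p=1.0):
    """Step 1: running-sum enrichment score.
    ranked_genes: list of (gene, correlation) pairs, sorted by the
    correlation of each gene with the phenotype, descending.
    gene_set: the a priori defined set of gene names.
    The running sum increases (weighted by |correlation|**p) at each
    gene in the set and decreases at each gene outside it; the score
    is the maximum deviation from zero."""
    hits = [abs(c) ** p for g, c in ranked_genes if g in gene_set]
    n_r = sum(hits) or 1.0                      # total hit weight
    n_miss = len(ranked_genes) - len(hits)      # genes outside the set
    running, best = 0.0, 0.0
    for gene, corr in ranked_genes:
        if gene in gene_set:
            running += abs(corr) ** p / n_r
        else:
            running -= 1.0 / n_miss
        if abs(running) > abs(best):
            best = running
    return best

def permutation_p(ranked_genes, gene_set, n_perm=1000, seed=0):
    """Step 2: estimate the significance of the observed score by
    shuffling gene labels and recomputing the score under the null."""
    rng = random.Random(seed)
    observed = enrichment_score(ranked_genes, gene_set)
    genes = [g for g, _ in ranked_genes]
    corrs = [c for _, c in ranked_genes]
    count = 0
    for _ in range(n_perm):
        rng.shuffle(genes)
        null_es = enrichment_score(list(zip(genes, corrs)), gene_set)
        if abs(null_es) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)
```

A gene set whose members cluster at the top of the ranked list yields a score near 1 and a small permutation P value; step 3 (the FDR adjustment across all 547 gene sets) is discussed below in the text.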
Alternatively, instead of simply fixing a level at which to control the FDR, one may calculate the FDR q value, which is defined as the FDR analog of the P value.7 Specifically, the FDR q value for a particular feature is the expected proportion of false positives incurred when calling that feature significant.7
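To make this concrete, a minimal Python sketch of a Benjamini–Hochberg-style q-value computation follows. This is a simplification for illustration only: the q value of Storey and Tibshirani cited above additionally estimates the proportion of true null hypotheses, a refinement this sketch omits.

```python
def fdr_q_values(p_values):
    """For each feature, return the smallest FDR at which that
    feature would be called significant (Benjamini-Hochberg style).
    A feature with q = 0.26 means that declaring it (and everything
    more extreme) significant incurs an expected 26% false positives."""
    m = len(p_values)
    # rank features by P value, remembering their original positions
    order = sorted(range(m), key=lambda i: p_values[i])
    q = [0.0] * m
    prev = 1.0
    # step down from the largest P value, enforcing monotonicity
    # so that a smaller P value never receives a larger q value
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, p_values[i] * m / rank)
        q[i] = prev
    return q
```

For example, P values of 0.01, 0.02, 0.03, and 0.5 across four features yield q values of 0.04, 0.04, 0.04, and 0.5: the three small P values survive an FDR threshold of 0.05, while controlling the family-wise error rate (e.g., a Bonferroni cutoff of 0.0125) would admit only the first.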
In the study by Lucchinetti et al., the authors conclude that their microarray data suggest that the peroxisome proliferator-activated receptor γ coactivator-1α and granulocyte colony-stimulating factor survival pathways play key roles in perioperative myocardial protection. Rather than specifying an FDR cutoff for gene set exclusion in the study methods section, Lucchinetti et al. instead calculated the FDR q value for each gene set. This is important because the peroxisome proliferator-activated receptor γ coactivator-1α pathway was used as an indicator of preoperative "myocardial background energy metabolism" in their multivariate analysis despite the fact that this pathway had an FDR q value of 0.26. That is, 0.26 is the expected proportion of false positives incurred when we call the peroxisome proliferator-activated receptor γ coactivator-1α pathway significant.7
Similarly, the authors conclude that their data suggest that, compared with propofol, sevoflurane reduces the transcriptional activity of genes involved in fatty acid oxidation (FDR q value = 0.33) and DNA-damage signaling (FDR q value = 0.11) while increasing the transcriptional activity of genes in the granulocyte colony-stimulating factor survival pathway (FDR q value = 0.10). Again, the FDR q value for a particular feature is the expected proportion of false positives incurred when calling that feature significant.7
Does this mean the conclusions by Lucchinetti et al. are incorrect? No, but it does suggest the potential for a high proportion of false positives in the data on which their conclusions are based. In fact, genetic statisticians have yet to determine whether a cutoff should be applied to the FDR q value and, if so, what that cutoff should be (although many would argue for an FDR q value < 0.10).7 The study does, however, highlight the significant statistical problems of handling genomic data that involve thousands of multiple comparisons. Guarding against any single false positive is often much too strict and will lead to many missed findings. The goal is therefore to identify as many significant features in the microarray data as possible while incurring a relatively low proportion of false positives.7
In summary, Lucchinetti et al. are to be commended for taking advantage of and applying such cutting-edge molecular techniques as gene microarray screening to our discipline. Furthermore, publication of such microarray data should not only adhere to MIAME standards but should also involve careful control of the FDR (i.e., type I error). Without blunt statistical instruments such as the FDR to control for the large number of genetic comparisons, it will be difficult to see the forest for the trees.
Division of Cardiovascular Anesthesiology, Baylor College of Medicine, Texas Heart® Institute, St. Luke's Episcopal Hospital, Houston, Texas. email@example.com