“[Anesthesiology announces] another initiative on data analysis plans in observational research.”
Three years ago, in an inaugural message, I described the goal of Anesthesiology to maximize both the richness and the reach of its content, while satisfying our customer(s), achieving the journal’s mission, and meeting the American Society of Anesthesiologists’ (ASA) desire to improve scientific understanding, clinical practice, and education in the specialty. Service to all our customers (ASA members and nonmembers, authors, readers, citers, advertisers, private practitioners, academics, pure scientists, physician–scientists, anesthesiologists, nonanesthesiologists, the specialty, medicine more broadly, healthcare payers, regulators, and policymakers) means advancing the practice and securing the future through knowledge production and diffusion, all closely linked to clinical practice.1 In January of this year, we formalized our value proposition to all these customers, “Trusted Evidence: Discovery to Practice,” and proudly placed it prominently on the cover of every issue.2 Grounded in feedback from our readers, it signals the value of our content, and to our readers, Editorial Board, and ASA, it is a statement and reminder of our purpose and aspiration.
The richness of our content and vehicles for its delivery continue to expand. Earlier this year we introduced a new category of article, the Readers’ Toolbox, and the first of its type, Understanding Research Methods.3 These articles, written as primers for the nonexpert, aim to help our readers keep pace with existing and emerging research methods so they can better read and understand the original investigations published in the journal and elsewhere. They are also intended to be helpful to authors, who will be able to efficiently summarize their research methods and cite the Toolbox article in lieu of repeating large amounts of research methods text.
Anesthesiology continues to enhance our multimedia presentations. The monthly Editor-in-Chief podcast, which summarizes several articles from each issue, is expanding again. It has moved from English only to versions now in Mandarin Chinese, Japanese, Spanish, and Portuguese. We are expanding the podcast again, to French, which should enlarge our reach to France and the French-speaking countries of Africa and elsewhere. We are increasing the number of video abstracts, which are 3- to 5-min animated explanations of a featured original investigation or review, from one per month to as many as several per month. One video will remain an Editor’s Choice, selected by the Editors, but authors can now elect to have their article appear as a video as well. The number of visual abstracts, which are concise one-panel graphic summaries of research articles, will also expand. Perhaps most noticeable is the entire redesign of the print journal, from cover to cover and color to color. Readers tell us that the new format is more interesting, enjoyable, and easier to read. Another goal and collateral benefit of our redesign and expanded features is for journal content to be reused for educational purposes. Visual abstracts are perfect for tweeting and retweeting. The combination of the new article title box, side by side with a visual abstract, makes for an excellent presentation slide.
The peer review process and educational value of articles do not stop at publication. Letters to the Editor from interested readers provide valuable perspectives and often raise questions about research results, request additional information from article authors, and demonstrate that interpretations of results can vary and that the dialogue can be informative. To better establish the continuity and “findability” of Letters to the Editor and author replies, we standardized the format. Formerly, letter authors crafted their own titles, and the article authors’ response was simply titled “In Reply,” so it was not apparent to which article or letter it pertained. This also impaired search capability. For all letters, we now use a shortened version of the original title (abbreviated title), followed by “Comment,” and for the response we use the same shortened title, followed by “Reply.” Thus, the correspondence chain is more tightly tied together and to the original article, and this enables linking and easier discoverability in PubMed.
Anesthesiology is interested in understanding the value of our content and the impact (an overused but descriptive term) on the specialty and science more broadly. Bibliometrics has evolved into its own domain, and there are numerous “metrics” for evaluating journals. One of the oldest metrics of “impact,” and still the most widely tracked and disseminated, is the “impact factor.” The concept of the journal “impact factor” was developed in the 1970s as a tool in journal evaluation and more specifically “value.”4 It is a measure of the frequency with which the “average” article in a journal has been cited in a particular period. Impact factor is calculated from the number of citations to a journal’s articles published in the previous 2 yr divided by the number of “citable” items published. Almost since its inception, and with growing frequency and fervor, the impact factor has been challenged if not derided as a measure of journal value or quality.5,6 Evaluating scientific quality is notoriously difficult,5 but we live in a metric-obsessed world, whose obsession is enabled and facilitated by endless amounts of “data,” and the data do exist with which to easily calculate the impact factor.
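The two-year calculation described above can be made concrete with a short sketch. This is only an illustration of the published definition, not the proprietary formula, and all numbers below are invented:

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Illustrative two-year impact factor: citations received this year
    to articles published in the previous 2 yr, divided by the number of
    'citable' items published in those 2 yr. (The actual formula used by
    the index provider is proprietary and may differ in detail.)"""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical example: 1,800 citations this year to articles published
# in the preceding two years, during which 300 citable items appeared.
print(impact_factor(1800, 300))  # 6.0
```

Note how the denominator counts only “citable” items; which article types qualify is one of the ambiguities the editorial goes on to discuss.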
Although Anesthesiology had the highest impact factor of any anesthesia journal last year and the highest in its history, the impact factor is nonetheless widely recognized as an imperfect measure of journal value or article quality. It is a lagging indicator, not one of currency.5,6 The exact formula for calculation is proprietary, may change, and lacks clarity. In addition, the underlying data can be erroneous. As a number, it can be influenced (or “gamed”) by journals themselves. Perhaps most importantly, the impact factor and citation analysis fall short in two ways. First, the impact factor reflects only a whole-journal “average” over a 2-yr period and does not inform on the quality or value of any specific article. Second, although citation analysis informs on the number of times an article is cited by other articles, it represents only the “votes” of a small group of “voters”—those authors who publish articles. It does not represent the larger community of “customers”—those who read the journal. How else can we measure the value of Anesthesiology to our readers?
The International Committee of Medical Journal Editors (ICMJE) is a small working group of general medical journal editors who formulate recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Consistent with evolving thought, the ICMJE has opined that “The journal impact factor is widely misused as a proxy for research and journal quality and as a measure of the importance of specific research projects or the merits of individual researchers, including their suitability for hiring, promotion, tenure, prizes, or research funding.” The ICMJE recommends that journals reduce the emphasis on impact factor as a single measure and instead provide a range of article and journal metrics relevant to their readers and authors.
Consistent with this concept, and with the desire for Anesthesiology to understand the value of the Journal to our readers, as well as our citers, we evaluate the number of times that an article is read, either online or via download. To this end, we provide the 20 most read articles for 2017 (table 1) and 2018 (table 2). The 2018 data will, of course, be more influenced by when an article was published—one published in January 2018 will have had more time to be viewed than one published in December 2018.
The top-line message is that Anesthesiology content is valued and viewed. All 2017 content was read (viewed online or downloaded as a pdf) 836,000 times in 2018, and all 2018 content was read 856,000 times in 2018. All of the most-read articles for 2017 and 2018 are clinical, mostly original investigations and some reviews, predominantly in perioperative medicine with some in critical care and pain medicine. The most widely read 2018 article and the third most widely read 2017 article were both ASA Practice Guidelines. These data illustrate just how important ASA Practice Guidelines are to ASA members, the specialty, and medicine in general, and how vital Anesthesiology is in their availability and dissemination. The clinical focus of the most-read articles is not surprising, as most of our readers are clinical practitioners, but it is valuable to know that Anesthesiology is an important source of high-quality content to inform our practice and to provide trusted evidence.
Looking to the future, we will continue to focus on the transparency of reporting in original research articles. One initiative will be on sex as a biologic variable. More on that in a future editorial.
Another initiative is on data analysis plans in observational research. Anesthesiology previously began requiring authors of submitted manuscripts to transparently report whether a predefined statistical analysis plan had been established, along with the essential elements of that plan, and strongly encouraged reporting the date of a documented submission of an analytical plan to a peer review or registration forum.7 It did not, at that time, require registration of observational studies in the same way that clinical trials are registered. Since then, authors’ adherence to these requirements, and our enforcement of them, have been inconsistent. More importantly, the field of observational research continues to evolve and grow, and so too should our readers’ expectations of transparency in the methods and reporting of observational studies.
Members of the Editorial Board met recently to discuss these expectations and our need to evolve with them and the field. Our goals were to increase the number of high-quality observational studies submitted to the journal, improve and systematize our evaluation process, and increase the quality and impact of our publications. Although some Editors advocated for a common standard for clinical trials and observational studies—that is, registration—not all concurred, and Anesthesiology will not presently require registration of observational studies, although that may be inevitable.
Anesthesiology does, however, clearly value transparency of data reporting and hereby updates its expectations and requirements. Authors of observational studies should consult the guidelines published by the STROBE group (Strengthening the Reporting of Observational Studies in Epidemiology) and report accordingly.7 Anesthesiology will require an explicit statement in manuscripts of whether a data analysis and statistical plan was defined before accessing the research data. Authors will be required to include one of the following sentences in the Methods section of the manuscript that describes this process: A data analysis and statistical plan was (1) written and posted on a publicly accessible server (ClinicalTrials.gov, or other) before data were accessed; (2) written and filed with a private entity (institutional review board or other) before data were accessed; (3) written, date-stamped (permanent dated electronic signature), and recorded in the investigators’ files before data were accessed; or (4) written after the data were accessed. If there was an a priori data analysis and statistical plan (numbers 1 to 3 above), authors are requested and strongly encouraged to include the plan as supplemental digital content at the time of initial manuscript submission.
The above transparency requirement does not aim or intend to discourage the reporting and publication of exploratory analyses, when appropriate. As in clinical trials, where inevitably not all results and outcomes can be predicted or anticipated and some post hoc analyses are added, the same can be true of observational studies. Such analyses outside the a priori analysis plan need to be interpreted appropriately and can occasionally lead to important discoveries. However, authors need to transparently explain and justify this approach in the manuscript. Readers expect this for clinical trials and will come to expect it for observational studies. We view it as indispensable to the publication of trusted evidence. We look forward to helping advance and lead the specialty in observational research.
And, as always, we will go where science takes us.
Dr. Kharasch is the Editor-in-Chief of Anesthesiology and his institution receives salary support from the American Society of Anesthesiologists (Schaumburg, Illinois) for this position.