
Many people are saying...


...the scientific publishing process has been tarnished during the COVID-19 pandemic.
This piece is courtesy of the FLARE team.
The FLARE Four:
  1. A critical need for therapies and preventative strategies for COVID-19 has led major journals to adopt expedited review processes and publish preliminary and observational reports that may not have received wide attention prior to the pandemic.
  2. Rapidly drafted and reviewed papers have contained a number of errors in experimental methods, analysis, and presentation, resulting in misunderstandings, corrections, and retractions. These errors are especially problematic at a time when clinicians and policymakers may change clinical practice and policy based on early findings.
  3. Publication of preliminary findings has arguably not resulted in more rapid development of new therapies: the sole authorized therapeutic approach (remdesivir) received FDA authorization only after a randomized controlled trial.
  4. In future public health emergencies, the scientific press should consider the balance between expediency and careful peer review to ensure the integrity of findings prior to widespread dissemination.
An unprecedented volume of academic literature has been produced since the start of the COVID-19 pandemic. This has taken the form of editorials, observational case series and studies (see May 8th FLARE), as well as rapidly designed and implemented trials. Much of this literature has been made available to the public at record speed on pre-print servers or through emergency expedited review processes at major journals (Rubin et al., 2020). FLARE itself (which stands for Fast Literature Assessment and REview) was born of this same sense of urgency to evaluate and disseminate the data.

The desire for rapid dissemination of important results has, however, resulted in a great deal of attention given to small case series, preliminary and uncontrolled reports, and hastily implemented clinical trials. This attention has been paid not only in academic circles but also in the lay press, which has amplified even the most preliminary, observational, and flawed data, often with unintended consequences.

Moreover, it is not clear that this focus on expediency has improved the care we provide patients or resulted in speedier development of novel therapies. Many early observations have since been overturned, and the care of COVID-19 patients still consists largely of long-established practices. Remdesivir, the sole exception, was authorized after an RCT was published close on the heels of observational reports, suggesting the observational reports themselves did not speed its adoption. In tonight’s FLARE, we explore some examples of the missteps in, and unintended consequences of, the expedited review process.
Expedited Publication
Ironically, a non-peer-reviewed pre-print examined the duration of 14 academic journals’ scholarly review processes before and during the COVID-19 pandemic (Horbach, 2020). It found an average 49% reduction in turnaround time from submission to publication. Of note, several of the most influential medical journals, including JAMA, The Lancet, and the New England Journal of Medicine (NEJM), were left out of this analysis due to lack of adequate data but have implemented expedited review processes during the pandemic (Oxford Academic, 2020; Rubin et al., 2020). Lessons may be learned from the expedited publication approach. Perhaps the normal review process is too slow, and a new model for expedited review will be established. Alternatively, if retractions and lingering questions about data quality characterize this period, there may be renewed enthusiasm for the normal, deliberate (albeit slow) review process.
Figure 1. Timing of submission and publication as related to the COVID-19 pandemic (Kwon, 2020).
High Profile Observations
In addition to expedited review, high-profile journals have been eager to publish the sort of preliminary, hypothesis-generating data that they might not even have considered in earlier times. A large number of studies published on COVID-19 to date are observational in nature (Auld et al., 2020; Huet et al., 2020; Mitra et al., 2020; Richardson et al., 2020; Schenck et al., 2020; Ziehr et al., 2020). Many of these reports represent single-center experiences with limited follow-up. Notwithstanding these limitations, many such observational reports have been published in high-profile journals. Even when the limitations of the data are appropriately discussed, the mere appearance in a high-impact journal can cause misunderstanding. This was the case with the widely misreported mortality data from the Northwell Health system in New York (Richardson et al., 2020) (April 23 FLARE). At the time of publication, the authors had definitive outcome data on 320 mechanically ventilated patients, out of a total of 1151 patients who had been placed on ventilators. The major reason for the limited data was the extremely short follow-up period (length of stay for those 320 patients was ~4 days), because the authors excluded patients who were ventilated, alive, and still hospitalized. The authors calculated a mortality rate of 88%, which was widely reported in the popular press. The journal later issued a correction, noting that living patients should not have been excluded from the mortality calculation and that the true mortality at the time of publication was ~24%, though with many patients yet to reach a definitive outcome. Despite the correction, “9 out of 10 mortality” had already become a theme of many media reports and online discussions about “alternatives to ventilators,” and this erroneous number remains widely quoted.
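A quick back-of-the-envelope check shows how the choice of denominator produces the discrepancy. In the Python sketch below, the counts (320 definitive outcomes, 1151 ventilated patients, 88% reported mortality) come from the text; the implied number of deaths is an estimate derived from those figures, not patient-level data.

```python
# Illustrative recomputation of the Northwell ventilated-patient mortality
# figures using the aggregate counts reported above (not patient-level data).
total_ventilated = 1151      # all patients who received mechanical ventilation
definitive_outcomes = 320    # died or were discharged at time of publication
deaths = round(0.88 * definitive_outcomes)  # ~282 deaths implied by the 88% figure

# Flawed calculation: excludes patients who were still alive in hospital.
flawed_mortality = deaths / definitive_outcomes

# Corrected calculation: the denominator is everyone ventilated; patients
# still hospitalized have simply not reached a definitive outcome yet.
corrected_mortality = deaths / total_ventilated

print(f"flawed:    {flawed_mortality:.1%}")
print(f"corrected: {corrected_mortality:.1%}")
```

With the published counts, the flawed denominator yields ~88% while the corrected one yields ~24%, matching the journal's correction.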

Other observational reports have contained long-understood methodological flaws that were overlooked in the interest of rapid dissemination or, alternatively, missed in peer review entirely. This was the case with a letter published in the Journal of the American College of Cardiology that reported better outcomes for mechanically ventilated patients receiving anticoagulation compared to those who did not (Paranjpe et al., 2020). We reviewed this extensively in our May 8th FLARE, but briefly, this was an example of immortal time bias: patients had to remain alive until the time they received anticoagulation to be counted in the anticoagulation group, whereas those who did not receive anticoagulation were counted from hospital day 1. The treatment group outcome therefore benefits from including only patients who survived long enough to receive anticoagulation. Despite receiving little discussion in the lay press, immortal time bias such as this likely invalidates the results of a number of COVID-19 reports.
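Immortal time bias can be demonstrated with a small simulation. The sketch below uses entirely made-up exponential survival and treatment times (a toy model, not the JACC cohort): the treatment has no effect on survival whatsoever, yet grouping patients by whether they were ever treated makes it look protective, because a patient must survive long enough to receive the drug.

```python
import random

random.seed(0)

# Toy simulation of immortal time bias. All times are invented; the
# "treatment" does nothing to survival.
n = 10_000
follow_up = 14  # observe everyone for 14 days

died_treated = n_treated = 0
died_untreated = n_untreated = 0

for _ in range(n):
    survival_day = random.expovariate(1 / 10)   # day of death, mean 10 days
    treatment_day = random.expovariate(1 / 5)   # day drug would start, mean 5 days
    # A patient is "treated" only if still alive (and in follow-up) on the
    # day treatment begins -- the source of the immortal time.
    treated = treatment_day < min(survival_day, follow_up)
    died = survival_day < follow_up
    if treated:
        n_treated += 1
        died_treated += died
    else:
        n_untreated += 1
        died_untreated += died

print(f"treated mortality:   {died_treated / n_treated:.0%}")
print(f"untreated mortality: {died_untreated / n_untreated:.0%}")
```

Even with zero true effect, the treated group shows markedly lower mortality, because everyone who died early lands in the untreated group by construction.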

Other reports have small cohorts with poorly controlled sources of confounding. A recent cohort study of anakinra, published in The Lancet Rheumatology (Huet et al., 2020), compared its use to historical controls. To start, it is striking that patients were selected for this study of “severe forms of COVID-19” based on “either laboratory-confirmed SARS-CoV-2 or 'typical' lung infiltrates on a lung CT scan.” Therefore, not all patients in the study actually had a confirmed diagnosis of COVID-19: only 97% in the historical control group and 88% in the anakinra group had positive RT-PCR for SARS-CoV-2. The groups were small and had notable differences in baseline characteristics. The Kaplan-Meier curves separate markedly on day 0 and continue to diverge in an implausible manner (Figure 2). The authors nonetheless concluded, in inappropriately causal language, that “anakinra significantly reduced both need for invasive mechanical ventilation in the ICU and mortality among patients with severe COVID-19, without serious side-effects.”
Figure 2. Kaplan-Meier survival curves separate on day 0, indicating there were meaningful differences between the groups at initiation of therapy (Huet et al., 2020).
A third high-profile observational report, which has since been corrected, appeared in the NEJM and was also covered in a prior FLARE (April 16). In mid-April, a compassionate use report of remdesivir sponsored by Gilead Sciences sent stock prices soaring when it reported early positive results (Grein et al., 2020). First and foremost, this report lacked not only randomization but also any comparison group. Treatment was started at various stages of illness, and the measure of improvement was subjective. Further, since access to remdesivir was through compassionate use protocols only, providers had to apply to Gilead before being allowed to give the drug to a patient, essentially allowing Gilead to select treatment cases. In addition to these methodologic issues (declared by the authors), the study turned out to contain a flawed statistical analysis. As pointed out in a subsequent letter to the journal, the authors reported that 84% of the patients on remdesivir improved. However, they inappropriately censored patients who died in their Kaplan-Meier analysis, rather than using the standard approach of counting them as not improved for the entire period of observation. This is, in effect, the complement of the problem with the Northwell paper discussed above (where the authors excluded living patients from a mortality calculation); here the authors excluded dead patients from the calculation of the percentage of patients who improved.
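The effect of censoring deaths in a time-to-improvement analysis can be shown with a toy Kaplan-Meier calculation. The cohort below is entirely invented (ten hypothetical patients, not the Gilead data); the point is only that removing deaths from the risk set, as if they were merely lost to follow-up, inflates the apparent fraction of patients who improved.

```python
# Hypothetical toy cohort: 6 patients improve, 3 die, 1 is still
# hospitalized at the data cutoff (administratively censored).
patients = (
    [("improved", d) for d in (5, 7, 8, 10, 12, 14)]
    + [("died", d) for d in (4, 6, 9)]
    + [("in_hospital", 15)]
)

def fraction_improved(patients, censor_deaths):
    """Kaplan-Meier estimate of the cumulative fraction improved.

    censor_deaths=True mimics the flawed analysis: deaths leave the risk
    set as if lost to follow-up. censor_deaths=False keeps deaths in the
    risk set, i.e. counts them as 'never improved'.
    """
    at_risk = len(patients)
    not_improved = 1.0  # KM "survival" for the event 'improvement'
    for status, day in sorted(patients, key=lambda p: p[1]):
        if status == "improved":
            not_improved *= (at_risk - 1) / at_risk
            at_risk -= 1
        elif status == "died" and censor_deaths:
            at_risk -= 1  # wrongly removed from the denominator
        elif status == "in_hospital":
            at_risk -= 1  # true administrative censoring at end of data
        # a death with censor_deaths=False stays in the risk set forever
    return 1 - not_improved

print(f"deaths censored (flawed):       {fraction_improved(patients, True):.0%}")
print(f"deaths never improve (correct): {fraction_improved(patients, False):.0%}")
```

With these invented numbers, only 6 of 10 patients (60%) improved, yet censoring the deaths pushes the Kaplan-Meier estimate up to roughly 84%.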
Bad Data
In addition to faulty analysis, much attention has recently been given to high-profile papers that contain erroneous or potentially fraudulent data. The Lancet and NEJM both recently issued retractions of papers (Mehra et al., 2020a, 2020b) that made use of a highly suspect proprietary database of electronic health records administered by Surgisphere, a Chicago-based company. The first paper, published on May 1st, 2020 in NEJM, claimed to reconfirm prior observational data about the link between cardiovascular disease and mortality in COVID-19, and refuted a link between ACE inhibitors and adverse outcomes in this context (Mehra et al., 2020a). The second, published on May 22nd, 2020 in The Lancet, reported no benefit of hydroxychloroquine or chloroquine on outcomes for hospitalized patients with COVID-19, and in fact claimed decreased in-hospital survival with higher rates of arrhythmias when these drugs were used (Mehra et al., 2020b). The article stood for less than two weeks before retraction, but much of the damage was already done. Several hydroxychloroquine trials around the world paused enrollment due to fears provoked by the study results, including one led by the World Health Organization. While some trials are resuming enrollment, confusion abounds.

Of note, a third study using the Surgisphere database with several of the same authors claiming a large reduction in mortality in COVID-19 patients treated with ivermectin mysteriously disappeared from the SSRN social sciences pre-print server (Ledford and Van Noorden, 2020). Despite the paper’s removal from the server, ivermectin has seen a surge in popularity in several Latin American countries, including its addition to the national COVID-19 guidelines of Peru (ISGlobal, 2020). 

The literature on hydroxychloroquine (HCQ) is particularly dicey. One of the earliest reports on HCQ, responsible for much of the original interest in the drug, was a very small and very problematic French study (Gautret et al., 2020). The issues with this study were described in prior FLAREs (March 22 and April 29). This was a study of 26 patients treated with HCQ and 16 control patients, followed for clearance of SARS-CoV-2 by PCR. However, no quality control was done on the PCR, treatment decisions were not standardized in any way, and 5 deteriorating HCQ patients were excluded from the final analysis. These significant flaws led to unusual post-publication peer review of the paper.
Perhaps one of the most discussed papers on COVID-19 was, perversely, among those with the least actual data. This letter by Gattinoni and colleagues appeared in the American Journal of Respiratory and Critical Care Medicine (Gattinoni et al., 2020) and informally described the authors’ observations on COVID-19 patients in Italy: “The primary characteristic we are observing (and has been confirmed by colleagues in other hospitals) is a dissociation between their relatively well-preserved lung mechanics and the severity of hypoxemia”. In support of this assertion, the authors offered the following histograms, which they say are drawn from observations on 16 patients:
Figure 3. Respiratory system compliance and shunt fraction from 16 patients with COVID-19 (Gattinoni et al., 2020).
A glance at the histograms reveals 67 measurements of compliance and 44 measurements of shunt fraction. There are basic issues with these measurements; for example, the FiO2 at the time of measurement is not provided, so the “shunt fraction” may include significant contributions from low V/Q. Even more fundamentally, all of these observations were made on only 16 patients, and it is not clear how the presented data, with multiple observations per patient and a mismatched number of observations, could either establish or disprove an association between compliance and shunt. Although significant attention has been paid to this letter, and some (including the authors) even argued it established the need for alternative ventilator strategies, no data published thus far establish this purported disconnect. In fact, multiple reports have since established that there is nothing unusual about the compliance and gas exchange of COVID-19 patients (Auld et al., 2020; Bos et al., 2020; Huet et al., 2020; Mitra et al., 2020; Richardson et al., 2020; Schenck et al., 2020; Ziehr et al., 2020). It is not unusual for published data to conflict, but in this case the early publication and popularization of a hypothesis based on very limited observational data has been stubbornly persistent in the face of significant data to the contrary.
Can a rushed, faulty paper be helpful?
Another paper that has received a great deal of criticism potentially illustrates at least one circumstance in which flawed, rapidly published data might be helpful. On January 30, NEJM published a report of apparent asymptomatic spread of SARS-CoV-2 (Rothe et al., 2020). The short report detailed a cluster of cases in Germany that resulted from exposure to a traveler from China who was apparently asymptomatic during her visit. The authors were unable to contact the traveler prior to publication, but the people she met with described her as well and without symptoms. This was obviously a dramatic report, but it subsequently came under criticism when the woman was contacted and reported that she had, in fact, been experiencing malaise and taking antipyretics during her stay in Germany. Nevertheless, subsequent reports (Arons et al., 2020) confirmed that patients may be pre-symptomatic at the time of a positive PCR. This is based in part on a study at a nursing home in Seattle, in which 24 residents swabbed as part of an outbreak investigation had no symptoms at the time of a positive nasopharyngeal swab but subsequently became symptomatic. In this case, the initial report turned out to be wrong: that particular instance was not one of asymptomatic spread. However, the conclusion, that asymptomatic or pre-symptomatic transmission is possible, was correct. One could argue that this flawed report, by providing early warning of the possibility of asymptomatic transmission, had a significant and positive impact on personal behavior and public health policies that reduced potential transmission. However, it is also by pure chance that this erroneous report anticipated future, better-validated findings. It is therefore difficult to attribute significant benefit to flawed data.
What have we gained?

What was the purpose of these rapid publications and expedited reviews? With the arrival of the COVID-19 pandemic, healthcare providers were desperate for information that could help them treat these patients. Researchers and journals reacted to this dearth of knowledge by attempting to collect and release data as fast as possible. This rapid sharing of information has galvanized the scientific profession to put its collective mind to this global problem, and the massive effort has led to impressive ideation, study design, and implementation. Much of this output consists of high-quality observations and well-designed trials that will better inform our ability to care for these patients. However, this type of careful science takes time. A particularly notable feature of scientific publication in the COVID-19 era has been an eagerness to publish very preliminary data even for therapies for which RCTs are already up and running. For example, the NEJM paper on compassionate use of remdesivir appeared only weeks before the first data from the NIH remdesivir clinical trial. Similarly, dozens of trials of anticoagulation in COVID-19 are ongoing, yet observational data (White et al., 2020) of questionable generalizability (Wichmann et al., 2020) on thromboembolism continue to be published. RCTs are also underway for a number of immunomodulation strategies, yet potentially confounded observational reports (Luo et al., 2020; Xu et al., 2020), and even letters based on expert opinion (Mehta et al., 2020), continue to appear. In this environment, with RCTs soon to be available, is it likely that expedited publication of single-center or poorly controlled observational studies will lead to a better or more rapid understanding of potential therapies?

While there are some situations in which quick dissemination of results may help advance care in a time of crisis, the same crisis places a greater burden on authors and publishers to get things right. The worst outcome may not be to be late, but rather to be wrong, at a time when the eyes of the media, politicians, and the public are intensely focused on the academic literature. An author may not be responsible for all possible interpretations of their data, but they are responsible for providing the right data. Disagreements about data interpretation, with replies to correspondence and letters to the editor, are a normal part of the academic process. However, in a time when results are snapped up and taken as truth, extra care is warranted in getting what we publish right. Similarly, the responsibility of reviewers remains the exacting, detailed, and critical appraisal of submitted manuscripts.

As the COVID-19 pandemic evolves, and in future public health emergencies, the academic press should reconsider the balance between expediency and careful peer review to protect the integrity of findings disseminated to the academic community and the public at large.
FLARE is a collaborative effort within the Pulmonary and Critical Care Division and the Department of Medicine at Massachusetts General Hospital. Its mission is to appraise the rapidly evolving literature on SARS-CoV-2 with a focus on critical care issues.

All prior FLAREs are available here:
Link to: Disclaimer.
Thank you for everything you are doing!
Did you receive this as a forward? Sign-up here to receive our daily FLAREs!
Interested in hearing more about a specific SARS-CoV-2 topic? Let us know via email.
  • Arons, M.M., Hatfield, K.M., Reddy, S.C., Kimball, A., James, A., Jacobs, J.R., Taylor, J., Spicer, K., Bardossy, A.C., Oakley, L.P., et al. (2020). Presymptomatic SARS-CoV-2 Infections and Transmission in a Skilled Nursing Facility. N. Engl. J. Med. 382, 2081–2090.
  • Auld, S.C., Caridi-Scheible, M., Blum, J.M., Robichaux, C., Kraft, C., Jacob, J.T., Jabaley, C.S., Carpenter, D., Kaplow, R., Hernandez-Romieu, A.C., et al. (2020). ICU and Ventilator Mortality Among Critically Ill Adults With Coronavirus Disease 2019. Crit. Care Med.
  • Bos, L.D.J., Paulus, F., Vlaar, A.P.J., Beenen, L.F.M., and Schultz, M.J. (2020). Subphenotyping ARDS in COVID-19 Patients: Consequences for Ventilator Management. Annals of the American Thoracic Society.
  • Gattinoni, L., Coppola, S., Cressoni, M., Busana, M., Rossi, S., and Chiumello, D. (2020). COVID-19 Does Not Lead to a “Typical” Acute Respiratory Distress Syndrome. Am. J. Respir. Crit. Care Med. 201, 1299–1300.
  • Gautret, P., Lagier, J.-C., Parola, P., Hoang, V.T., Meddeb, L., Mailhe, M., Doudier, B., Courjon, J., Giordanengo, V., Vieira, V.E., et al. (2020). Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. Int. J. Antimicrob. Agents 105949.
  • Grein, J., Ohmagari, N., Shin, D., Diaz, G., Asperges, E., Castagna, A., Feldt, T., Green, G., Green, M.L., Lescure, F.-X., et al. (2020). Compassionate Use of Remdesivir for Patients with Severe Covid-19. N. Engl. J. Med.
  • Horbach, S.P.J. (2020). Pandemic Publishing: Medical journals drastically speed up their publication process for Covid-19.
  • Huet, T., Beaussier, H., Voisin, O., Jouveshomme, S., Dauriat, G., Lazareth, I., Sacco, E., Naccache, J.-M., Bézie, Y., Laplanche, S., et al. (2020). Anakinra for severe forms of COVID-19: a cohort study. The Lancet Rheumatology.
  • ISGlobal (2020). Ivermectin and COVID-19: How a Flawed Database Shaped the Pandemic Response of Several Latin-American Countries - Blog.
  • Kwon, D. (2020). How swamped preprint servers are blocking bad coronavirus research. Nature 581, 130–131.
  • Ledford, H., and Van Noorden, R. (2020). High-profile coronavirus retractions raise concerns about data oversight. Nature.
  • Luo, P., Liu, Y., Qiu, L., Liu, X., Liu, D., and Li, J. (2020). Tocilizumab treatment in COVID-19: A single center experience. J. Med. Virol. 92, 814–818.
  • Mehra, M.R., Desai, S.S., Kuy, S., Henry, T.D., and Patel, A.N. (2020a). Cardiovascular Disease, Drug Therapy, and Mortality in Covid-19. N. Engl. J. Med.
  • Mehra, M.R., Desai, S.S., Ruschitzka, F., and Patel, A.N. (2020b). RETRACTED: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet.
  • Mehta, P., McAuley, D.F., Brown, M., Sanchez, E., Tattersall, R.S., Manson, J.J., and HLH Across Speciality Collaboration, UK (2020). COVID-19: consider cytokine storm syndromes and immunosuppression. Lancet 395, 1033–1034.
  • Mitra, A.R., Fergusson, N.A., Lloyd-Smith, E., Wormsbecker, A., Foster, D., Karpov, A., Crowe, S., Haljan, G., Chittock, D.R., Kanji, H.D., et al. (2020). Baseline characteristics and outcomes of patients with COVID-19 admitted to intensive care units in Vancouver, Canada: a case series. CMAJ.
  • Oxford Academic (2020). Access to OUP resources on COVID-19, other coronaviruses, and related topics.
  • Paranjpe, I., Fuster, V., Lala, A., Russak, A., Glicksberg, B.S., Levin, M.A., Charney, A.W., Narula, J., Fayad, Z.A., Bagiella, E., et al. (2020). Association of Treatment Dose Anticoagulation with In-Hospital Survival Among Hospitalized Patients with COVID-19. J. Am. Coll. Cardiol.
  • Richardson, S., Hirsch, J.S., Narasimhan, M., Crawford, J.M., McGinn, T., Davidson, K.W., and the Northwell COVID-19 Research Consortium, Barnaby, D.P., Becker, L.B., Chelico, J.D., et al. (2020). Presenting Characteristics, Comorbidities, and Outcomes Among 5700 Patients Hospitalized With COVID-19 in the New York City Area. JAMA.
  • Rothe, C., Schunk, M., Sothmann, P., Bretzel, G., Froeschl, G., Wallrauch, C., Zimmer, T., Thiel, V., Janke, C., Guggemos, W., et al. (2020). Transmission of 2019-nCoV Infection from an Asymptomatic Contact in Germany. N. Engl. J. Med. 382, 970–971.
  • Rubin, E.J., Baden, L.R., Morrissey, S., and Campion, E.W. (2020). Medical Journals and the 2019-nCoV Outbreak. N. Engl. J. Med. 382, 866.
  • Schenck, E.J., Hoffman, K., Goyal, P., Choi, J., Torres, L., Rajwani, K., Tam, C.W., Ivascu, N., Martinez, F.J., and Berlin, D.A. (2020). Respiratory Mechanics and Gas Exchange in COVID-19 Associated Respiratory Failure. Ann. Am. Thorac. Soc.
  • White, D., MacDonald, S., Bull, T., Hayman, M., de Monteverde-Robb, R., Sapsford, D., Lavinio, A., Varley, J., Johnston, A., Besser, M., et al. (2020). Heparin resistance in COVID-19 patients in the intensive care unit. J. Thromb. Thrombolysis.
  • Wichmann, D., Sperhake, J.-P., Lütgehetmann, M., Steurer, S., Edler, C., Heinemann, A., Heinrich, F., Mushumba, H., Kniep, I., Schröder, A.S., et al. (2020). Autopsy Findings and Venous Thromboembolism in Patients With COVID-19. Ann. Intern. Med.
  • Xu, X., Han, M., Li, T., Sun, W., Wang, D., Fu, B., Zhou, Y., Zheng, X., Yang, Y., Li, X., et al. (2020). Effective treatment of severe COVID-19 patients with tocilizumab. Proc. Natl. Acad. Sci. U. S. A. 117, 10970–10975.
  • Ziehr, D.R., Alladina, J., Petri, C.R., Maley, J.H., Moskowitz, A., Medoff, B.D., Hibbert, K.A., Thompson, B.T., and Hardin, C.C. (2020). Respiratory Pathophysiology of Mechanically Ventilated Patients with COVID-19: A Cohort Study. Am. J. Respir. Crit. Care Med.
Copyright (c) MGH FLARE team, 2020. All rights reserved.

MGH FLARE · 55 Fruit St · Bullfinch 148 · Boston, MA 02114-2621 · USA