Bad science is a lot like a virus. It starts small, but if it’s shared enough times, it can cause global disruption.
You may remember a kerfuffle last month over whether it is safe to take ibuprofen to treat coronavirus symptoms. It is a prime example of how a series of unfortunate errors can lead to bad health policy. It began harmlessly, when the Lancet Respiratory Medicine, a respected journal, published a 400-word letter from a group of European researchers that raised some safety concerns about the drug. It was an opinion piece that was, according to at least one of the authors, a hypothesis, not a medical recommendation. But much of the world treated the mere suggestion as if it were derived from the results of a clinical trial. Within a week, the French health minister, followed by a spokesperson from the World Health Organization, recommended that people with COVID-19 not take ibuprofen. A day later, after pushback from doctors and scientists, the WHO backtracked, saying it did “not recommend against” the use of ibuprofen. The move sparked widespread confusion, and for an institution that we’re all relying on for solid information, it was not a great look.
Good science requires time. Peer review. Replication. But in the past few months, the scientific process for all things related to COVID-19 has been fast-tracked. While that is, of course, understandable on some level—thousands are dying worldwide every day, after all—it’s not necessarily safe. What was once a marathon has been compressed to a 400-meter dash: Researchers race to deliver results, academic journals race to publish, and the media races to bring new information to a scared and eager public. Meanwhile, unverified opinions from so-called experts circulate widely on social media and TV, making the situation all the harder to understand.
Bad science—or at the very least incomplete science—is simply slipping through the cracks.
Conducting and communicating research swiftly during a pandemic is, on its own, a positive thing. But “there’s speed, and then there’s rushing too much. As we always say, ‘Fast but wrong is just fast,’” says Dr. Irving Steinberg, an associate professor of clinical pharmacy and pediatrics at the University of Southern California School of Pharmacy and Keck School of Medicine. (Earlier this month, Steinberg wrote an article outlining the ibuprofen disaster in the Conversation, a nonprofit newsroom that publishes articles written by academics.)
To be clear, what I’m talking about here is more than over-the-counter painkillers, more than trolls on social media spreading conspiracy theories, and even much more than a president and some Fox News personalities touting unproven malaria drugs or bleach injections. It’s a fundamental threat to trusted information outlets—academic journals, the media, and global health institutions—and, in turn, our own well-being.
Around the same time as the ibuprofen case, for instance, some bad information apparently circulated among President Trump’s advisers. During a press conference on March 17, a reporter asked why the United States didn’t import tests from the WHO at the start of the pandemic. When the White House coronavirus response coordinator, Dr. Deborah Birx, stepped in to answer, she cited a study published in a Chinese journal about the high false-positive rate of a Chinese diagnostic test: “It doesn’t help to put out a test where 50 percent or 47 percent are false positives,” she said. But, as NPR first reported, the study Birx cited was retracted just days after it was published on March 5. On top of that, there is no evidence that the WHO tests were faulty, and NPR found that the 47 percent figure, which appears in the abstract of the paper Birx mentioned, doesn’t even refer to the overall accuracy of the test; it instead refers to the false-positive rate among asymptomatic individuals who had close contact with COVID-19 patients. (The study appears to have been removed from its original URL, though the abstract still exists on the US National Library of Medicine’s PubMed database, where the paper is now marked as “withdrawn.”)
It’d be foolish to base any major health policy on a single scientific study, and it’s unclear whether this one played a role in the country’s fiasco over testing—widely regarded as a major failure of the administration’s COVID-19 response—but it’s nonetheless alarming that it was repeated as fact by the very people we’re trusting to lead our country through the pandemic. That said, the mix-up isn’t entirely Birx’s fault; after all, the study was published in a journal after peer review, and it wasn’t marked on PubMed as withdrawn until weeks after the retraction occurred. The real problem is that a retracted study gained such prominence in the first place.
As the co-founders of Retraction Watch, a blog that tracks academic retractions, wrote in a recent article for Wired, the case involving Birx “is a particularly dismaying and consequential example of what happens when no one bothers to engage in scientific fact-checking.”
“But,” they cautioned, “it will not be the last time that something we thought we knew about the coronavirus because it was in a published paper will turn out to be wrong.”
Part of the problem, experts tell me, is that the pandemic has brought on an absolute flood of manuscripts for review by academic journals, which editors are working around the clock to process. That, mixed with the public’s hunger for new information and surging existential dread, is something like the perfect storm.
Normally, it can take anywhere from several weeks to several months for a study to go through the peer-review process. There isn’t hard data on how long it takes now, but we can assume it’s significantly shorter. According to Primer, a San Francisco–based machine intelligence company that is tracking COVID-19 research, more than 8,500 COVID-related papers have been published over the course of the outbreak. A little over half of those studies were posted to databases known as preprint servers, meaning they haven’t yet been formally reviewed; still, the media has picked up many of them before thorough vetting.
“Without a doubt,” according to Dr. Howard Bauchner, editor-in-chief of the Journal of the American Medical Association, the “most significant day-to-day challenge” brought on by the coronavirus is the sheer number of manuscripts the journal receives. When we spoke last week, Bauchner said the journal had received 235 COVID-19-related manuscripts for consideration just the day before. “We’ll receive another hundred today,” he said. Since January 1, JAMA has received about 2,000 research, opinion, and clinical papers. It has published about 100.
“I’m watching my email fill—I can let you hear every ping,” he said over the phone. “Every ping means another manuscript for me to look at.”
The Science editorial team, representing Science and the five other journals published by the American Association for the Advancement of Science, told me in an email that “many of the COVID-19 submissions coming to the Science family have been rushed and don’t meet our standards for publication and broad dissemination.” Since January 1, 2020, it said, Science has received more than 320 COVID-19-related research submissions, “with a steady rise in submissions most every week since that date.” As of April 17, it had published 14 of these papers.
Both journals, along with Springer Nature, which publishes Nature and thousands of other journals, assured me that although the process has been sped up, they’re maintaining their typical peer-review standards. But it’s too soon to tell how successful the peer-review process will be in keeping the worst science out, both in these publications—among the most influential science journals in the world—and others. On average, according to Retraction Watch co-founder Dr. Ivan Oransky, roughly three years pass between a paper’s publication and its retraction. As of Tuesday, at least six COVID-19 studies or letters have been retracted, according to Retraction Watch’s database.
Retractions aside, the situation raises broad concerns about the rigor of published research itself. “What [the pandemic] has done is just made everyone rush to publication and rush to judgment, frankly,” says Oransky, a non-practicing medical doctor who is also the vice president of editorial at medical news and reference site Medscape and teaches medical journalism at New York University. “You’re seeing papers published in the world’s leading medical journals that probably shouldn’t have even been accepted in the world’s worst medical journals.”
Earlier this month, for instance, the New England Journal of Medicine published a 61-person study of an antiviral therapy called remdesivir as a treatment for COVID-19. Of the 53 patients whose data could be analyzed, 36 saw improvement after 10 days of treatment. There was no control group, and Gilead Sciences, the company that developed remdesivir, funded and conducted the study and helped write the first draft of the manuscript. The study was by no means fraudulent, but it presented a clear conflict of interest. “I think it’s a good thing that it’s all out there and people are able to look at it,” Oransky says, “but it’s so inconsistent with what the New England Journal of Medicine claims that it’s always about.”
When I asked the New England Journal of Medicine about this study and if the pandemic has affected its peer-review process, it told me in an email, “For Covid-19, everything is expedited tremendously and a process that can take weeks has been condensed to 48 hours or less in many cases.” It added, “Our goal is always to help provide physicians with the information they need to care for patients. In many cases, that means publishing early phase studies that can help caregivers understand what is on the horizon.”
I myself almost got caught up in this perfect messy-science storm earlier this month. I came across a press release for a study from a researcher at the University of Ottawa in Canada, which suggested that the coronavirus may have spread to humans via stray dogs. I had a few days to review the paper before it was to be published in the peer-reviewed journal Molecular Biology and Evolution. I contacted the author and we conducted an interview by email. The results seemed fascinating, and I generally trust the peer-review process to weed out problematic papers. But when I reached out to independent experts about the study, all three I connected with told me they had significant concerns about the research. The study’s conclusions were unwarranted, David Robertson, a virologist at the University of Glasgow in the UK, told me. “I do not know how this has passed peer review particularly for such a reputable journal as MBE,” he said in an email.
The responses surprised me; I slacked my editor, “I’ve never had experts refute a study like this before.” We decided against covering the study, but other media outlets wrote it up. CNN, for its part, ran a headline that cut right to the controversy—“Stray dogs and coronavirus: Just a hypothetical theory with no proof”—and quoted a researcher who shared concerns similar to those of the experts I’d spoken with: “I do not see anything in this paper to support this supposition,” James Wood, head of the department of veterinary medicine at the University of Cambridge in the UK, told CNN, “and am concerned that this paper has been published in this journal.”
The author of the paper, Xuhua Xia, responded by arguing that the media misreported his study. He told me in an email, “The paper was reviewed by five referees who are mostly excited about the paper. MBE is a flagship journal of our society and does not accept papers lightly. If Mr. James Wood or whoever has objections to the paper, he is welcome to submit a critique of the paper to MBE or other scientific journals, and I would certainly respond.” He also suggested that Wood commented on his research in a bid for attention.
Some sources tell me press coverage in general—even if it includes criticism—can be a win for academics, especially those seeking tenure. But for nonacademic readers, this bothsidesism, as it is called, can create a mess when a story lacks a clear headline and framing, masking legitimate critique and leaving them confused about where the truth lies. For instance, Maciej Boni, an associate biology professor who specializes in mathematical modeling and epidemiology at Penn State University, says, “Sometimes an article by a science writer can pit the ideas of an evolutionary biologist against an anti-evolutionist, making it seem like both sides have equal merit.”
“I trust the peer review system,” Boni adds. “I think it works pretty well. And it’s in times like this that there is some strain put on the peer review system and you see studies sneaking through that shouldn’t.”
Prominent science journalist Ed Yong, who has written about the coronavirus pandemic for the Atlantic, notes that bad science has always slipped through: “Peer review is clearly important to the scientific process, but it is nowhere near a perfect shield against bad work being published.” But the pandemic, he says, has just sped everything up—and the stakes are higher.
“Under normal circumstances, science is always like this,” Yong says. “Bad studies get published. People talk about them in formal settings and informal ones, like Twitter, and we sort of gradually approach a better sense of what is actually going on. We’re seeing the same thing now, it’s just happening a lot faster.”
Making matters muddier, nonspecialists, often armed with bad information, are adding their own voices to the noise. This, of course, is not exactly a new phenomenon. But during the age of the coronavirus, there seems to be a rise both in Homeroom Chads widely offering their baseless opinions on your social media feeds and in academics or specialists in unrelated fields positioning themselves as experts on the spread and treatment of COVID-19. For this latter group, their otherwise solid credentials can make it difficult for the lay public to know whom to trust.
The most prominent case might be right inside the White House. Earlier this month, in a private meeting, trade adviser Peter Navarro, who has no medical credentials, reportedly clashed with the nation’s top infectious disease expert, Dr. Anthony Fauci, over the antimalarial drug hydroxychloroquine. While Fauci has cautioned the public not to jump ahead of the science on the drug, Navarro reportedly disagreed. When CNN anchor John Berman later asked Navarro what qualifications he has to weigh in on the matter, he said, “I’m a social scientist. I have a PhD and I understand how to read statistical studies.” Since then, the FDA has issued a warning against the use of hydroxychloroquine as a COVID-19 treatment outside of a hospital or clinical trial.
“When Peter Navarro, standing on a White House platform, says, ‘Well, I understand this stuff, because I’m a social scientist,’ he’s doing a disservice,” says Andrew Rosenberg, director of the Center for Science and Democracy at the Union of Concerned Scientists. “If he wants to say, ‘Here’s my source of information, here’s the experts who work in this field, and here’s how I understand what they’re telling me,’ that’s a different matter. But that’s not what he’s doing.”
The trend isn’t limited to the administration. “In a normal news cycle, science gets like less than 1 percent of the coverage,” Boni says. “But in today’s news cycle, the only thing in the news is science. So everybody, every economist, every physicist, every tech guru, everybody wants a piece of the attention if they think they can link themselves or link their own pastime, their background, to something that’ll be covered.”
For Boni, economists are particularly unrelenting offenders: “If all the country’s bridges fell down, the economists would get right behind the civil engineers and tell them exactly what to do. I mean, it’s absolutely amazing that they think they have a role in this discussion.” As an example, Boni points to an episode of the podcast Deep Background, hosted by Harvard Law professor Noah Feldman, featuring Nobel Prize–winning economist Paul Romer as a guest. Romer argued that epidemiologists are refusing to “do the math” and that the US isn’t doing nearly enough testing. The criticism, coming from an economist, didn’t sit well with Boni. “I could tell he had the right intentions,” he says. “But seeking him out as an expert, that’s 35 minutes of podcast time that Noah Feldman didn’t talk to another real expert.” He argues that epidemiologists should be the ones making public health recommendations, not economists. (Romer pushed back on this in an email to me, arguing that the policy response to the coronavirus is an inherently interdisciplinary problem, and that it is “appropriate” for economists to weigh in on the subject.)
Other experts I spoke with agreed that nonspecialists are getting unwarranted time in the spotlight, though not all felt comfortable naming individuals on the record. As JAMA‘s Bauchner puts it, “What I have been surprised by—and I will not name names—is the prominence that some of the major media outlets have given people whom I think may be speaking in areas in which they are not noted experts.”
There’s a long-running joke in journalism that if you read the news, one week coffee is good for you; the next week, it causes cancer. “At the end of the day, I actually think that’s all terrible journalism, but it probably won’t kill anybody,” Oransky says. But the same isn’t true of all COVID-19 coverage. “If you do the same kind of whiplash with people, which you do see journalism in science doing, you could kill people. One week, this drug is the perfect thing. The next week, it’s going to kill you.”
This is essentially what happened in the case of ibuprofen: A vague idea prompted a recommendation from arguably the world’s leading health authority, followed by a reversal of that recommendation the following day. It was the coffee joke, but on a very real and high-impact scale. So who do you blame? The scientists for writing the letter? The journal for publishing it? Journalists for covering what seemed to be a newsworthy event? The health minister and the WHO for doing what they thought would save lives? None of the experts I spoke with had a good answer.
I realize I too am not a scientist, and I probably shouldn’t share my nonexpert takes either, but bear with me here anyway; you’ve read this far. The way I see it, at every step of the process there is a tradeoff between speed and accuracy. It’s like a sliding scale: On one end, you are totally certain and slow. On the other, you are totally uncertain and fast. From a public health perspective, neither is optimal during a pandemic. And whether you’re fast or slow, any miscalculation can cost lives. As Oransky puts it, “If we get this wrong, more people die—and that’s true of social-distancing measures, it’s true of treatments, it’s true of vaccine development. And when I say ‘we’ I don’t mean journalists or scientists. I mean we as a people, as a society.”