This week, results from a study of the antiviral drug remdesivir showed that the drug reduced recovery time in Covid-19 patients. And today (May 1), President Trump announced that remdesivir has been authorized for emergency use in coronavirus patients by the Food and Drug Administration. But scientists are still waiting to review the details of the study that supports this decision.
Here’s what we know: On April 29, the US company that makes remdesivir, Gilead, announced in a press release that it was “aware of positive data emerging” from a remdesivir study being run by the US National Institute of Allergy and Infectious Diseases (NIAID). A couple of hours later, NIAID’s director Anthony Fauci said the 1,063-person study found that patients given remdesivir recovered in 11 days on average, compared with 15 days for those given placebo. Based on these results, Fauci said he expects remdesivir will become the “standard of care” for coronavirus patients.
The announcement was complicated by a smaller peer-reviewed study, published on the same day in the Lancet, showing that patients who received remdesivir in China did not recover faster than those on placebo. And Gilead said its own study showed that five days' worth of remdesivir worked just as well as 10 days—but, crucially, that study didn't compare the drug to a placebo. Without that comparison, there's no way to tell how well remdesivir works.
This cluster of announcements is far from the norm. Scientific research—especially medical research—is typically conducted, evaluated, and published on the scale of years, not weeks. During the Covid-19 pandemic, the communication and interpretation of these results carries particularly high stakes. And so scientists are posing an important question: How can they disseminate research in a manner that reduces harm and leads to the best health outcomes?
The unorthodox press release sharing the NIAID results reflects a desire to quickly disseminate results to the public. It’s the same urgency that has driven researchers to upload much of their coronavirus research to open-access servers as “preprints.”
Traditionally, preprints allow researchers to get informal feedback from other scientists before submitting a paper for peer review. “There’s definitely space to announce things ahead of peer review,” says James Heathers, a research scientist at Northeastern University who frequently calls out questionable research methods. “It’s designed to go into a forum where the people reading it are interested in that academic topic,” he says.
For coronavirus, though, it’s hard to release preprints quietly: “Everyone’s interested in the academic topic,” says Heathers.
This scrutiny only increases the importance of being fully transparent about a study’s methods and results. “If you have data, you may want to act on it quickly—but only if that data is reliable,” says Holly Fernandez Lynch, a medical ethics professor at the University of Pennsylvania. Although peer review is the gold standard, she says researchers can conduct a form of peer review in real time on social media, as long as they have access to the data—which, in the case of the NIAID trial, they didn’t.
Releasing the full data is essential to allow scientists to understand the study. For instance, Heathers points out, the researchers changed the primary outcome that the study claimed to evaluate. Originally, the study was designed to rate participants’ health on day 15 and day 29 of treatment. Now, it’s focused on how many days it takes patients to recover. There can be good reasons for changing this variable—NIAID provided an explanation on Friday, after the drug was authorized—but, without the full dataset, outside scientists can’t make sense of the change.
“As much as people respect and admire and trust Dr. Fauci, they’d prefer not to take his word for it that this is promising, but to see the data themselves,” says Lynch. “Even Tony Fauci needs peer review.”
Alex John London, who has written on the importance of peer review for coronavirus research, warns that it’s difficult for scientists to correct public perception after “extraordinary claims” have been widely reported. “When those really striking claims turn out not to be warranted, the toothpaste is already out of the tube, the striking claim has already been reported,” says London, the director of the Center for Ethics and Policy at Carnegie Mellon University. “So science corrects itself, but the public perception has already been formed.”
The publicity around coronavirus research also carries significant weight, as public perception can influence clinicians’ ability to conduct research. Enthusiastic statements about remdesivir—in particular, Fauci’s claim that the drug will become the new standard of care—have ramifications for ongoing trials.
If the standard of care switches to remdesivir, then all future trials will have to test against the drug, which could create delays in studying which drugs prevent death. And ongoing studies evaluating drugs against the current standard—of supportive care but no drug treatment—may struggle to recruit patients if they think remdesivir will help. “It heightens the pressure on having those conversations with patients,” says Lynch. “It’s going to make it challenging to run clinical trials about other products and ask other questions about remdesivir.”
If remdesivir is effective, patients should receive it as quickly as possible. “Now we have data from a well-controlled study that says there’s reason to prefer remdesivir to standard supportive care,” says Lynch. But rushing out the first treatment makes it more difficult to advance the second and third drug studies. “One study is better than no studies, but not as good as two studies,” says Lynch.
That’s especially true because the NIAID study didn’t show that remdesivir reduced deaths. “The end point that we really care about for Covid is mortality,” says Lynch. “This product [remdesivir] has not demonstrated an impact on mortality.”
Given the delicate ethical balance of creating effective treatments as quickly as possible, Lynch says it’s important to ensure that public statements around study results are balanced. “We live in a culture of soundbites. The soundbites are important because people face information overload,” she adds. And so, even when announcing promising remdesivir results, she says it’s important to highlight the next step of research and questions that are still open.
Meanwhile, Heathers says that if Gilead or NIAID (or any other researchers) are planning more trials, they should make sure the conditions are in place for the data to be inspected. Researchers should ensure they have permission from all participants to release the data; failing to do so is a “classic dodge” that keeps data secret, he says. Companies also often claim that this valuable data is proprietary information that can’t be released. One solution is to release the data to scientists who sign non-disclosure agreements. “You can have something scrutinized without it being publicly released,” he says.
Setting up the necessary conditions may take a little extra time, but it’s essential to accurately inform research and treatment. “The actual value of any study is in the data,” says Heathers. “It’s not the press release, or preprint, or even the study. It’s the data.”