Alison Avenell spent years collecting evidence that Yoshihiro Sato, a now-deceased nutritional researcher in Japan, was among the most prolific fraudsters known to science. After journals investigated the findings by Avenell, a clinical nutritionist at the University of Aberdeen, and her colleagues, they retracted more than two dozen papers Sato had co-authored. Many had reported findings from clinical trials that could have led physicians to incorrectly treat patients suffering from osteoporosis and other disorders.
But the retractions, which began in 2015, didn’t mean the papers were gone for good, or that their influence waned.
Avenell noticed that many journal articles citing one or more of the 27 retracted papers did not warn readers that they referenced tainted work. Worse, she and her colleagues report in a recently published study, 88 of the articles that cited the retracted papers were systematic reviews and clinical guidelines—potentially influential publications that often help guide medical treatments. Avenell wondered: Would the authors and editors of these papers take action if alerted to the retractions of Sato’s work?
For the most part, she found, the answer was no.
Her team contacted the authors of 86 of the citing papers—and sometimes the editors, too. After a year, however, journals had posted notices or letters informing readers of the retracted citations for just eight of those papers, the researchers reported in late May in Accountability in Research. In five of those cases the announcement wasn’t linked to the paper, leaving readers in the dark. (A ninth review was itself retracted.)
The saga provides an unusually methodical case study of what some call “zombie papers.” Even after they are retracted—publishing’s death sentence—these papers live on thanks to citations. And that could have real-world consequences, the study suggests. It found 39 of the 88 citing papers had drawn conclusions that, if the retracted papers were left out of the analysis, were likely to be substantially weaker. Journals flagged just four of the weakened studies for citing retracted papers.
The study’s findings are “unfortunately very consistent” with others going back to the 1990s, says Ivan Oransky, co-editor of Retraction Watch, which reports on retracted papers and tracks them in a public database. A 1998 investigation in JAMA, for example, found that 94% of 299 citations to retracted articles still listed in the MEDLINE database did not note the work had been retracted. And “most editors do not seem to make correcting the record a priority,” Oransky says.
Avenell took a rigorous approach to documenting the problem, Oransky says. For example, her team emailed queries to authors and journals using a randomized, controlled trial design. For some papers, the researchers contacted only the corresponding authors of the evidence syntheses; for others, they also contacted two co-authors and sometimes the journal’s editor-in-chief.
For half of the 86 papers, they got no response. (Looping in the editor didn’t increase the response rate.) Some authors who did respond said they didn’t plan to amend their papers because, for example, the publication was too old, or they didn’t have time to do a reanalysis. Some asserted that the elimination of a single, retracted study likely would not have changed their overall findings. There is some evidence for that position. A 2021 study in Accountability in Research led by Daniele Fanelli of the London School of Economics and Political Science examined 50 meta-analyses of clinical treatments. The conclusions of those that cited retracted work and those that didn’t were statistically similar.
The studies examined by Avenell’s team that were weakened by retracted work could have put patients at risk. One of those reviews, showing vitamin K helps prevent fractures, was the basis of 2011 and 2015 Japanese guidelines that recommend the supplement for people at risk. Omitting Sato’s studies made the reported benefit statistically nonsignificant. The guidelines’ sponsor, the Japan Osteoporosis Foundation, was among those that did not respond to the team’s queries.
Even if a retracted citation doesn’t change the bottom line, Avenell argues, journals and authors have an obligation to say so publicly. “You need to reassure your readers” about a paper’s validity, she says.
Avenell is scheduled to discuss the study—co-authored by Mark Bolland, Greg Gamble, and Andrew Grey of the University of Auckland—in September at the International Congress on Peer Review and Scientific Publication. The work, she says, was spurred by “my frustration with the slow process of correcting the literature that has integrity issues and to demonstrate the potential adverse consequences if it is not corrected promptly.”
There are signs the research community is beginning to take such concerns more seriously. Several bibliographic databases—including EndNote, LibKey, Papers, and Zotero—now note papers that are included in Retraction Watch’s database of retractions, which debuted publicly in 2018. (The popular Google Scholar search engine does not flag retractions.) The International Committee of Medical Journal Editors recommends journal editors routinely check to see whether submitted manuscripts cite retracted papers. And in 2021, Cochrane, a nonprofit international network that promotes evidence-based medicine, began to attach a warning to any of its systematic reviews that cite retracted studies. Cochrane asks the authors of flagged reviews to reconsider their work; then the organization decides whether to withdraw the analysis or publish an updated version with revised findings.
Source: Science Mag