Press "Enter" to skip to content

Quality shines when scientists use publishing tactic known as registered reports, study finds

Registered reports, in which methods and analyses are peer reviewed before results are known, measure up on quality and creativity.


In 2013, the journals Cortex, Social Psychology, and Perspectives on Psychological Science launched a groundbreaking publishing format—called a registered report—that they hoped would solve several problems worsened by conventional publishing practices. One issue was that many journals declined to publish important negative results, judging them not sufficiently novel. In addition, many authors analyzed their data in multiple ways but only reported the most interesting results.

The trio of journals thought registered reports offered a better way. The approach turns the normal publishing timeline on its head: Authors write manuscripts laying out only their hypotheses, research methods, and analysis plans, and referees decide whether to accept them before anyone knows the study’s results. Because acceptance comes before the results exist, publication is guaranteed even for the most mundane findings. Unlike standard papers, “the decision [to publish] … is based on the importance of the question, and the quality of the methodology you’re applying,” says Brian Nosek, a psychologist at the University of Virginia and an advocate of registered reports.

But until recently, concrete data to support the benefits of this publishing model have been thin. Today, Nosek and his colleagues published a paper in Nature Human Behaviour reporting that reviewers rate registered reports as more rigorous, and their methods as higher in quality, than similar papers published in the standard format. And despite concerns that the approach could stifle research creativity, the reviewers considered registered reports to be as creative and novel as the comparison papers. The findings join the first small wave of studies exploring whether the publishing format—now offered by at least 295 journals—lives up to its promise.

To compare the two formats, Nosek and colleagues recruited 353 reviewers from psychology faculties in the United States and Europe. The team matched them with published registered reports by subdiscipline. Each reviewer was asked to evaluate a report and a matched standard-format paper from the same journal or authors. The reports were scrubbed of references to their format, and the team excluded any that reported replications, which are much less common in standard publications. That left a small sample of just 29 psychology and neuroscience registered reports published between 2014 and 2018, and 57 matched comparison papers.

On measures of quality, reviewers gave glowing ratings to the reports. They considered their methods and analyses more rigorous, the research questions higher quality, and the discoveries more important. And they considered the two kinds of papers equivalent on measures of creativity and novelty.

That’s a “surprising” result, says David Peterson, a sociologist of science at the University of California, Los Angeles, because “one of the common critiques of preregistration is it leads to duller studies.”

That concern—that preregistration could stifle the creative exploration of data that leads to more robust hypothesis testing—recently led the National Institutes of Health to steer clear of requiring preregistration in NIH-funded animal research.

The new analysis is a thoughtful piece of research, says Tom Hardwicke, a meta-researcher at the University of Amsterdam who has studied registered reports. But it’s “difficult to draw strong conclusions” from its results, he says. Despite the researchers’ efforts, the reviewers could not be properly blinded to each paper’s format—registered reports just have too many differences from standard papers. “It’s good to see that advocates of [registered reports] are attempting to empirically evaluate the approach,” says Aba Szollosi, a psychologist at the University of Edinburgh. But the blinding issues “undermine the main conclusion the authors draw,” he says.

Nosek’s paper adds to a small group of other studies that have found differences between the two types of paper. For example, an article published on 16 April in Advances in Methods and Practices in Psychological Science found that only 44% of a sample of registered reports in psychology confirmed their hypotheses, versus 96% in a sample of the wider psychology literature. The higher success rate in the wider literature suggests it is rife with publication bias and selective reporting, says lead author Anne Scheel, a Ph.D. student at the Eindhoven University of Technology. Even if researchers somehow set out to test only true hypotheses, she says, it’s unlikely that their methods and samples would be perfect enough to nearly always find positive results. And the lower hypothesis confirmation rate in registered reports is an indication that they work as intended, Scheel says: They allow for a range of results, positive and negative, to see the light of day.

But Scheel also suggests caution in interpreting studies of registered reports, including her own. Their reliability is limited by small sample sizes; too few papers have been published in that format to allow robust analyses, she says. And the results may not be representative because authors and editors who have been early adopters are likely highly motivated to improve rigor.

Extrapolating from psychology and neuroscience to other disciplines is also difficult, researchers say. Scheel believes registered reports are likely to have their greatest impact in improving quality in fields that have had extensive problems with replication, such as psychology. But, “There are reasons to doubt that registered reports will have a similar effect in fields where evidence of a replication crisis is weak,” Peterson says.

For now, the promise of registered reports has to rest on small studies, Nosek says. He plans to conduct more robust studies of the effects of registered reports, for example through a large, randomized, controlled trial—but he needs to secure funding. “We just want to know if it works,” he says. The point of the reform is not to fixate on registered reports, he adds: It’s to make research more effective. “And if the solutions we’re trying aren’t working, we want to change them.”


Source: Science Mag