‘Journalologists’ use scientific methods to study academic publishing. Is their work improving science?

By Jennifer Couzin-Frankel

They came to Chicago from across medicine and around the world, converging on a dingy downtown hotel to witness the birth of a new field. It was a chilly May week in 1989. Guests muttered about clogged bathtubs and taps that ran cold, while a bushy-bearded Drummond Rennie, a deputy editor of The Journal of the American Medical Association (JAMA), hurried the crowd away from morning bagels and coffee and into the meeting hall.

A British nephrologist who scaled mountain peaks in his spare time, Rennie had moved to a Chicago hospital from London in 1967. But while studying how the thin oxygen of high altitudes affects the kidneys, he became interested in the world of scientific publishing. Curious about how scientists report their work and how editors make their decisions, Rennie took a job at The New England Journal of Medicine in 1977, and later switched to JAMA.

He wasn’t impressed by what he found. JAMA “was in an utter shambles. It was a laughing stock,” he recalls. “Papers were ridiculous. They were anecdotes following anecdotes.” And he felt it wasn’t just JAMA—it was much of medical publishing. Some trials read like drug company ads. Patients who dropped out or suffered side effects from a drug were not always counted. Other papers contained blatant errors. Yet, all of these studies had passed peer review, Rennie says. “Every one had been blessed by the journal.”

Which is why Rennie, with support from his editor-in-chief, George Lundberg, dreamed up the inaugural Peer Review Congress. He wanted to turn the lens of science onto the journals themselves. There were plenty of issues in publishing that could be studied by using scientific methods, Rennie reasoned. Are positive outcomes more likely to be published than negative ones? A study that artificially changed the results of a clinical trial but left the rest of the paper intact could offer insight. Did peer review improve a paper’s quality? A rigorous comparison of submitted papers and the published versions might provide an answer.

The Chicago, Illinois, meeting marked the birth of what is now sometimes called journalology, a term coined by Stephen Lock, a former editor of The British Medical Journal (The BMJ). Its goal: improving the quality of at least a slice of the scientific record, in part by creating an evidence-based protocol for the path from the design of a study to its publication. That medical journals took a leading role isn’t surprising. A sloppy paper on quantum dots has never killed anyone, but a clinical trial on a new cancer drug can mean the difference between life and death.

The field has grown steadily and has spurred important changes in publication practices. Today, for example, authors register a clinical trial in advance if they want it considered for publication in a major medical journal, so it doesn’t vanish if the results aren’t as hoped. And authors and journal editors often pledge to include in their papers details important for assessing and replicating a study. But almost 30 years on, plenty of questions remain, says clinical epidemiologist David Moher of The Ottawa Hospital Research Institute, a self-described journalologist. Moher—who once thought his dyslexia explained why he couldn’t understand so much published research—wants to know whether the reporting standards that journals now embrace actually make papers better, for instance, and whether training for peer reviewers and editors is effective.

Finding the answers isn’t easy. Journalology still hovers on the edge of respectable science, in part because it’s often competing with medicine for dollars and attention. Journals are also tough to study and sometimes secretive, and old habits die hard. “It’s hard,” Moher says, “to be a disruptor in this area.”

The early detective work

Early on, Rennie wasn’t alone in his qualms about scientific publishing. Kay Dickersin, an epidemiologist now at the Johns Hopkins University Bloomberg School of Public Health in Baltimore, Maryland, had been appalled by the quality of research published in the journals her doctor father left lying around the house. On the other side of the Atlantic Ocean was Iain Chalmers, a health services researcher at the University of Oxford in the United Kingdom, who had trained as an obstetrician and pediatrician. After 2 years at a clinic in the Gaza Strip in 1969 and 1970, he’d developed nagging fears that the practices taught in medical training weren’t always evidence-based.

Dickersin and Chalmers were both worried that many important clinical trials were never published. To test that suspicion, they first tried to dig up unpublished trials in obstetrics and neonatology. With their colleagues, the pair asked 40,000 clinicians, obstetricians, midwives, and neonatologists in 18 countries whether they knew of other unpublished trials. Only a couple hundred people responded, and they identified 18 trials. “We knew that there were more,” Chalmers says; smaller surveys had suggested at least 20% of trials are never published.

There was another big concern: Were the studies that were quietly filed away more likely to show that a treatment had failed—a phenomenon known as publication bias that could skew the scientific literature? A survey by Dickersin and colleagues of 318 authors of published trials validated these fears. The 156 respondents acknowledged 271 unpublished trials; only 14% of those favored the treatment being tested, whereas 55% of the published studies did, a pattern confirmed by other studies. “The decision as to what to include in a publication and whether to publish is largely personal, although dictated by the fashion of the times to a certain extent,” Dickersin wrote in 1990 in JAMA. Not publishing, Chalmers charged in the same issue, was akin to scientific misconduct.

The two papers electrified many medical researchers, engendering debate and prompting more study. That both appeared in JAMA was no accident: From his perch as an editor, Rennie thrived on punching holes in the veneer of scientific publishing. Not particularly diplomatic by nature, he declined to take on the prestigious post of editor-in-chief at JAMA or anywhere else, in part to allow him to “fight battles that go on underground,” he says.

Some welcomed the scrutiny. “There are some dark corners in the way journals work that need to have some light shone on them,” says Richard Horton, editor-in-chief of The Lancet in London since 1995. But not everybody agreed. “Some people … wished we’d shut up,” Chalmers says. One colleague accused him of being part of “an obstetric Baader-Meinhof Gang,” a reference to the anarchists behind a string of bombings in West Germany. “You point out problems with the science which don’t always make you popular,” agrees Lisa Bero, a pharmacologist and health policy specialist now at The University of Sydney in Australia, who has spent decades studying issues such as evidence-based medicine and bias.

Pushing for reforms

But unpopularity didn’t stop them. To fight reporting bias, journalologists began to urge journals to require that clinical trials be publicly registered at inception, making it impossible to keep them secret. Journals declined to take action, until scandal forced a sea change. In 2004, the New York attorney general’s office sued drug giant GlaxoSmithKline, alleging that four unpublished trials showed that the antidepressant Paxil increased the risk of suicidal tendencies in young people. (The case was later settled.) That same year, the International Committee of Medical Journal Editors began to require trial registration. Since then, more than 280,000 trials have been posted on ClinicalTrials.gov and elsewhere. “When journals act together, they can really change behavior,” says Peter Doshi, a BMJ associate editor and health services researcher at the University of Maryland School of Pharmacy in Baltimore.

Journalology drove other reforms. Several studies reported that published clinical trials often left out key details, prompting journal editors and others to release the Consolidated Standards of Reporting Trials (CONSORT). It included a 21-point checklist of such basics as how the sample size was chosen, any changes made to the trial design after the trial began, and the effects of the treatment on a patient’s health. To date, nearly 600 journals have pledged to require authors to follow the checklist, which has been periodically updated. In 2010, a similar set of guidelines for animal studies, ARRIVE, was released.

Embracing such standards is one thing. Adhering to them is another. A study published in PLOS ONE in May reported that many journals that had pledged to follow ARRIVE’s guidelines did not comply. Even requiring researchers to fill out and submit the checklist after filing their manuscript did little to improve adherence, a trial by researchers at The University of Edinburgh showed.

Timeline: The field of journalology has highlighted important problems in scientific literature and triggered reforms in academic publishing. (J. YOU/SCIENCE)

Even when new publication rules are faithfully followed, it’s hard to measure their impact. “Someone once asked me at a meeting how many lives CONSORT had saved,” says Douglas Altman, a medical statistician at Oxford and pioneer in the field. (Altman died in June at 69 years old, a couple of months after speaking with Science.) “I was tempted to say something incendiary. Something that leads to a small raising of awareness and methodological rigor is a good thing. Maybe some people’s lives have been saved—but who knows.”

It’s equally unclear whether studies of peer review have improved the process. Researchers have examined, for instance, whether offering reviewers extra training, publishing reviewers’ names, or adding statements detailing the authors’ conflicts of interest improves the quality and honesty of reviews. These changes have passionate advocates. “Do we want a secretive culture where it’s OK to write an anonymous review that trashes your colleague, or do we want a system where everyone has to be accountable?” asks Virginia Barbour, who was an editor at The Lancet, helped found PLOS Medicine, and now works at Queensland University of Technology in Brisbane, Australia. “I feel very strongly that it should be accountable.” But ethics aside, there is no evidence that the changes tested so far have improved the quality of papers.

Steven Goodman, a clinical epidemiologist at Stanford University in Palo Alto, California, and a senior statistical editor at the Annals of Internal Medicine, agrees it’s hard to rigorously measure progress. “It’s not that it’s not a science,” he says of journalology, but “it’s very hard to say what the effect of any particular intervention is on the overall knowledge base.” Moreover, even if one journal betters itself, the research it rejects may end up somewhere else. “You’re squeezing a balloon,” Goodman says, and shifting air—or in this case lousy research—from one place to another.

A changed publishing landscape

Today, the world of publishing is changing rapidly. Predatory journals that release articles with little or no peer review have surged. Papers are posted as “preprints” at the same time as they are submitted to journals or even earlier, allowing others to comment. On the F1000Research publishing platform, authors submit their papers and the articles get posted after some basic checks; only then are they peer reviewed, publicly.

Publication science is struggling to keep up. “Research in this area is not fast-moving,” says Sara Schroter, a senior researcher at The BMJ. In a recent Nature opinion piece, Rennie called for rigorous studies to demonstrate the pros and cons of many new developments, including open peer review and preprints. In JAMA, he and Executive Managing Editor Annette Flanagin lamented that few people are studying “important issues and threats to the scientific enterprise, such as reproducibility, fake peer review, and predatory journals.”

One big factor holding back the field: money. “If you approach government funders, foundations, often they’re asking what diseases you’re curing,” says An-Wen Chan, a skin cancer surgeon and scientist at Women’s College Research Institute in Toronto, Canada. “It is hard to convince [them] that this is directly impacting patients.” (Chan believes it is: His research has shown that clinical trials in top journals often withhold or massage important information, which he says can lead doctors to prescribe the wrong treatment.)

Some grant reviewers don’t even believe journalology is a science, says Larissa Shamseer, a postdoc at the University of Maryland who works with Doshi. Her proposal to study whether papers in predatory journals are more often negative than positive was part of a fellowship application that was recently rejected. “One of the peer reviewers said, ‘This looks like a nice set of activities but where’s the research in it?’” Shamseer says. “I think it’s really hard for people to wrap their heads around using scientific methods to study publishing.”

The dearth of funding is a problem for journals as well. “We’re sometimes asked to take part in research studies, but there are no resources offered to help us dedicate time and staff to that work,” Horton says. Other times, journal policies to protect reviewers’ identities or shield communications with authors hamper research. Or journals may simply decline to offer up information to outsiders. When Peter Gøtzsche, who directs the Nordic Cochrane Centre in Copenhagen, was studying whether abstracts in journals with more industry ads were more likely to put a positive spin on the results, several journals declined to cooperate. There “is a level of secrecy that stinks,” Gøtzsche says.

Still, journals are sometimes happy to help. The BMJ is now pairing up with researchers at Maastricht University in the Netherlands, where the journal’s editor-in-chief has an honorary professorship, to set up a new Ph.D. program on responsible conduct in scientific publishing. The BMJ will share data from its roster of 60 journals for a range of studies on such topics as peer review, preprints, and patient and public involvement in the review process.

As godfather of the field, Rennie can reel off its limitations—but he also delights in how far it’s come. Last fall, Chicago was home to the eighth edition of his meeting; it drew 600 attendees, roughly twice as many as the first. The number of abstracts had quintupled to 260, covering everything from journal data-sharing policies to gender bias in peer review. More than half the presenters were women, compared with 13% 29 years ago.

Now 82 years old and semiretired in rural Oregon, Rennie says it was his last Congress. Younger colleagues organized a session there to pay him tribute, ending in three standing ovations. With funding scarce and journals under pressure, “it has required a colossal push over the years” to get people to do this work, he says. Was it all worth it? Absolutely. “It’s been exhausting, and exhilarating,” he says, and now it’s up to others to carry the torch.
