Got data? Survey of 2017 March for Science doesn’t make the grade

Tens of thousands braved the rain last year in Washington, D.C., for a March for Science that will be repeated this weekend.

B. Douthitt/Science

By Jeffrey Mervis

A group of researchers has released the first results of a large survey of those who participated in and supported last year’s March for Science. Some social scientists say the analysis is fundamentally flawed and reflects poorly on an organization that champions scientific rigor. March organizers acknowledge the survey’s limitations but say it has provided them with important insights into what motivates their supporters.

The volunteer organizers of the 22 April 2017 march, an ambitious experiment in global science advocacy, were eager to learn all they could about the more than 1 million people who had participated. So, 6 weeks after the event, they notified their more than 200,000 supporters that a survey developed by researchers at George Mason University (GMU) in Fairfax, Virginia, was available online. The 72-question survey asked for demographic information, as well as why respondents had marched and what they thought about government policies and public attitudes toward science.

Last week, days before the second annual march on 14 April, the GMU researchers posted the results. A solid majority of the 20,000 respondents said they thought the country was headed in the wrong direction, a situation almost all blamed on the policies of President Donald Trump and the Republican-led Congress. Their biggest fears were that those government officials would disregard scientific evidence and cut research funding, although only about half thought the march would forestall either action.

The results probably won’t surprise march supporters (including AAAS, which publishes ScienceInsider) and those who have followed the effort. But social scientists who do surveys for a living say the data don’t pass the smell test.

“They are flawed at the most basic level,” says Michael Heaney, a political scientist at the University of Michigan in Ann Arbor. “If a student in an introductory statistics class had asked me if they could do this, my answer would have been ‘no.’”

Coming up short

The results suffer from two fundamental problems, says Heaney, who deployed a team to conduct a randomized survey of participants at last year’s flagship march in Washington, D.C., and plans to do it again this weekend. The first is that the respondents were self-selected and, thus, not likely to be representative of the organization’s entire email list. As a result, says Heaney, any characterization of the demographics, attitudes, and activities of supporters would be unfounded.

Nor are the respondents necessarily representative of those who marched. For example, Heaney’s data, drawn from a truly randomized sample attending the Washington, D.C., march, found that only about 15% of the crowd had been mobilized by the March for Science movement. The vast majority said they attended because of an affiliation with other organizations, or through any number of other routes. That suggests the people on March for Science’s mailing list are not a good proxy for those who took to the streets.

Given those fundamental errors, Heaney says, the GMU survey’s 10% response rate—reasonable by survey industry standards—is a secondary issue. Even a 100% response rate, he notes, would have meant only that all the movement’s supporters had weighed in, not that everyone who marched had been counted.
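The sampling problem Heaney describes can be illustrated with a toy simulation. The numbers below are invented for illustration only (they are not from either survey): it assumes a crowd of 1,000 marchers of whom 15% are on the organizers’ mailing list, and that list members are, hypothetically, more likely to have donated to a political cause. Surveying only the list then overstates how politically active the full crowd is, no matter how high the response rate.

```python
import random

random.seed(0)  # make the illustration reproducible

# Hypothetical crowd: 15% reachable via the mailing list (per Heaney's
# estimate of the crowd's composition); the donation probabilities
# (0.9 vs. 0.5) are invented assumptions for this sketch.
population = (
    [{"on_list": True,  "donated": random.random() < 0.9} for _ in range(150)]
    + [{"on_list": False, "donated": random.random() < 0.5} for _ in range(850)]
)

def donation_rate(people):
    """Share of a group that has donated to a political organization."""
    return sum(p["donated"] for p in people) / len(people)

everyone = donation_rate(population)
list_only = donation_rate([p for p in population if p["on_list"]])

print(f"all marchers: {everyone:.0%}, mailing-list sample: {list_only:.0%}")
```

Under these assumptions the mailing-list sample reports a noticeably higher donation rate than the crowd as a whole, which is the gap between the GMU figures and the randomized on-site surveys that the article describes below.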

“What they have produced is unbridled advocacy,” says Heaney, who studies the nature of political protest movements. “They aren’t alone—it’s something that thousands of interest groups do every day. It’s cheap and easy. But it’s not science.”

Fueling the movement

John Cook, a research assistant professor at GMU and co-author of the survey, readily admits that the study is not suitable for a peer-reviewed journal because of how the data were collected. But he believes that “asking people what motivates them to march and what they hope to achieve” is worthwhile to the March for Science movement.

Heaney also sees value in that. “Any polling that an organization does of its followers is useful,” he says. “Most advocacy groups don’t bother to do that, so good for them. And don’t get me wrong, I wholeheartedly support the goals of the March for Science. I just don’t want to see their results labeled as research.”

Caroline Weinberg, interim executive director of March for Science, agrees that the GMU survey provides “valuable information about our supporters” and says that it was never intended to pass muster with peer reviewers. “The fact that the data have limitations doesn’t mean that you disregard the data,” says Weinberg, who is based in New York City. She agrees that it would be “bad science to say the survey represents all of our supporters and everyone who marched. But it’s also bad science to simply ignore the data.”

In preparing for last year’s march, organizers had posted a 42-question survey that supporters could fill out at the same time they RSVPed for a march. But the steady rain during the flagship march in Washington, D.C., prevented organizers from collecting any information from those who actually showed up.

Weinberg says there were some differences between the GMU survey and her organization’s RSVP survey. “About 80% of those who planned to march told us it would be their first march,” she notes, whereas the GMU survey found that 77% of respondents were veterans of previous marches. In addition, she says the RSVP survey respondents were much less likely to identify themselves as scientists than were those who answered the GMU survey.

March for Science organizers will be circulating questionnaires this weekend at both the Washington, D.C., event and many of the satellite marches. But once again the goal is not a scientifically valid survey, says Kristen Gunther, a March for Science staff member based in Lander, Wyoming. “We want to package the information and take it to elected officials with the message, ‘Here are the things our supporters are most concerned about,’” she explains. Participants will also be asked to rank how well they think those officials are dealing with the issues on the list.

“We understand that we will be getting [responses from] the most motivated people,” Gunther says. “But we want to hear from these people. If our goal is to influence policymakers, they are the ones who are willing to write letters, show up at town halls, and engage in other advocacy efforts.”

Gunther’s assumption is borne out in a comparison of the GMU findings with those drawn from scientifically rigorous surveys at the 2017 march by Heaney and Dana Fisher, a sociologist at the University of Maryland in College Park. For example, GMU found that 94% of respondents had donated to a political organization, compared with 78% of Heaney’s sample. GMU also found 70% of the respondents had interacted with the media—e.g., writing a letter to the editor, being interviewed, or calling into a radio talk show. In contrast, Fisher found that just 15% of marchers reported similar media exposure. Fisher, who is studying the nature of large-scale marches since Trump’s inauguration, has posted her preliminary findings on a blog, American Resistance.

Treading carefully

One issue that dogged the first March for Science was the extent to which it would be seen as a polarizing protest against the policies of the Trump administration and the Republican Congress rather than as a rally in support of science. Organizers were so concerned about being labeled partisan that they urged the GMU researchers not to ask about political ideology or affiliation. The scientists complied, leaving a hole in what is otherwise a deep dive into the level of political activism among supporters.

In contrast, Heaney was under no constraints. In a paper published last week in the social science journal Contexts, Heaney compares the politics of participants in 10 marches held last year in Washington, D.C. The March for Science was clearly on the liberal end of the political spectrum, he finds, nearly matching the partisan flavor of the Women’s March and the Tax March and exceeding the liberal leanings of those in the People’s Climate March.

That’s hardly surprising, Heaney says. “There is little doubt that protesters are highly involved in other forms of partisan politics,” he writes in the paper, archly titled “Making protest great again.” Heaney notes that “Trump has taken a period that would have seen protests regardless of the winner of the 2016 presidential election and helped to turn it into a time of nearly continuous grassroots resistance.”

Source: Science Mag