Press "Enter" to skip to content

As U.S. election nears, researchers are following the trail of fake news

Illustration: Sébastien Thibault

It started with a tweet from a conservative media personality, accompanied by photos, claiming that more than 1000 mail-in ballots had been discovered in a dumpster in Sonoma County, California. Within hours on the morning of 25 September, a popular far-right news website ran the photos with an “exclusive” story suggesting thousands of uncounted ballots had been dumped by the county and that workers had tried to cover it up.

In fact, according to Sonoma County officials, the photos showed empty envelopes from the 2018 election that had been gathered for recycling. Ballots for this year’s general election had not yet been mailed. Even so, within a single day, more than 25,000 Twitter users had shared a version of the false ballot-dumping story, including Donald Trump Jr., who has 5.7 million followers.

This election season, understanding how misinformation—and intentionally propagated disinformation—spreads has become a major goal of some social scientists. They are using a variety of approaches, including ethnographic research and quantitative analyses of internet-based social networks, to investigate where election disinformation originates, who spreads it, and how many people see it. Some are helping media firms figure out ways to block it, while others are probing how it might influence voting patterns.

The stakes are high, researchers say. “This narrative that you’re not going to be able to trust the election results is really problematic,” says Kate Starbird, a crisis informatics researcher at the University of Washington’s Center for an Informed Public. “If you can’t trust your elections, then I’m not sure democracy can work.”

In 2016, Russian operatives played a major role in spreading disinformation on social media in an attempt to sow discord and influence the U.S. presidential election. Foreign actors continue to interfere. But researchers say the bulk of disinformation about this year’s election has originated with right-wing domestic groups, attempting to create doubt about the integrity of the election in general, and about mail-in voting in particular. An analysis by the Election Integrity Partnership (EIP), a multi-institution collaboration, showed that the false story about the Sonoma ballots was spread largely by U.S.-based websites and individuals with large, densely interconnected social media networks. “They’re just sort of wired to spread these misleading narratives,” says Starbird, who is an EIP collaborator.
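
(A minimal sketch of the kind of network measure behind a phrase like “densely interconnected”: the toy retweet records, account names, and use of the networkx library below are illustrative assumptions, not EIP’s actual data or pipeline.)

import networkx as nx

# Hypothetical retweet records: (retweeting_account, original_author)
retweets = [
    ("acct_a", "origin"), ("acct_b", "origin"), ("acct_c", "origin"),
    ("acct_b", "acct_a"), ("acct_c", "acct_a"), ("acct_c", "acct_b"),
    ("acct_d", "acct_b"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# Density is the fraction of possible directed links that actually exist;
# clustering measures how often an account's neighbors also link to each
# other. Tightly wired amplification clusters score high on both.
print(f"accounts: {G.number_of_nodes()}, retweet links: {G.number_of_edges()}")
print(f"density: {nx.density(G):.2f}")
print(f"average clustering: {nx.average_clustering(G.to_undirected()):.2f}")

High values on both measures indicate the same accounts repeatedly amplifying one another, the wiring Starbird describes.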

Much of the election disinformation EIP has tracked so far originates in conspiratorial corners of the right-wing media ecosystem. “What we’re seeing right now are essentially seeds being planted, dozens of seeds each day, of false stories,” says Emerson Brooking, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab, which is part of EIP. “They’re all being planted such that they could be cited and reactivated … after the election” by groups attempting to delegitimize the result by claiming the vote was unfair or manipulated.

So far, most of the disinformation EIP has documented focuses on election integrity. But as Election Day draws near, Starbird and Brooking expect to see more attempts to create confusion about voting procedures and to suppress turnout—by raising fears about violence at polling places, for example.

Election deception can take various forms on social media. Joan Donovan, research director of Harvard University’s Shorenstein Center on Media, Politics and Public Policy, has been doing digital detective work on Facebook groups that target Latinos with pro–President Donald Trump messages and appear to be run by non-Latinos who have assumed fake identities. These groups coordinate their campaigns and recruit participants on public message boards or chat apps, allowing researchers to observe their operations; the postings also provide clues the researchers can follow to investigate who the members are and what motivates them.

Purveyors of disinformation have become expert at exploiting the dynamic between social and mainstream media, researchers say. Right-wing conspiracy groups like QAnon—which promotes a false narrative that a cabal of cannibalistic, Satan-worshiping pedophiles is trying to bring down Trump—have learned how to create content and “trade up the chain” of social media users and hyperpartisan websites with increasingly large followings, Donovan says. When the falsehoods start to gain traction, mainstream media outlets often feel compelled to debunk them, which can end up further extending their reach. Several stories that had been circulating in QAnon networks got mainstream coverage around the time of the first presidential debate, for example, including unfounded claims that former Vice President Joe Biden might take performance-enhancing drugs or cheat by wearing an earpiece during the debate. “What we’re seeing is that the ways in which news media traditionally operate is now being turned into a vulnerability,” Donovan says.

Not all election disinformation is coming from the bottom up, however. Yochai Benkler, co-director of the Berkman Klein Center for Internet and Society at Harvard, and colleagues recently examined how claims of potential fraud associated with mail-in ballots entered public discourse. The researchers analyzed more than 55,000 online news stories, 5 million tweets, and 75,000 posts on public Facebook pages between March and August. They found that most spikes in media coverage and social media activity on the topic were driven by Trump himself—through his hyperactive Twitter account, press briefings, or appearances on the Fox TV network. “Donald Trump has perfected the art of harnessing mass media to disseminate and reinforce his disinformation campaign,” the researchers write in a preprint posted earlier this month.
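
(To illustrate what a coverage “spike” means in an analysis like this, here is a minimal sketch assuming hypothetical daily tweet counts and a simple rolling-baseline rule; the Benkler team’s actual methods were more involved.)

import statistics

# Hypothetical daily counts of tweets about mail-in ballot fraud
daily_counts = [310, 295, 330, 340, 2900, 3100, 410, 360, 350, 5200, 480]

WINDOW = 4        # trailing days used as the baseline
THRESHOLD = 3.0   # flag a day exceeding the baseline mean by this factor

for day in range(WINDOW, len(daily_counts)):
    baseline = statistics.mean(daily_counts[day - WINDOW:day])
    if daily_counts[day] > THRESHOLD * baseline:
        # A flagged day is a candidate spike; the analytic step that follows
        # is asking what happened that day (a tweet, a briefing, a TV hit).
        print(f"day {day}: {daily_counts[day]} tweets vs. baseline {baseline:.0f} -> spike")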

EIP is working with social media companies to help them refine and clarify their policies so they can react more quickly to disinformation. Several companies have recently taken steps to flag or remove content, or to make it harder to share—steps experts say are welcome, if long overdue. (Some platforms are also trying to nudge users toward better habits, as with Twitter’s recent experiment with prompts that appear when someone tries to share a link to an article they haven’t opened, encouraging them to read it first before sharing.)

The impact of disinformation on the election won’t be easy to measure. Some clues, however, might come from a research collaboration with Facebook aimed at studying the platform’s impact on this year’s election. The company has given 17 academic researchers access to data on the Facebook activity of a large number of users who’ve consented to be involved. (Facebook expects between 200,000 and 400,000 users to volunteer.) Participants agree to answer surveys and, in some cases, go off Facebook for a period before the election to help researchers investigate the effects of Facebook use on political attitudes and behavior.

Among other things, the Facebook users will be asked at different times to rate their confidence in government, the police, large corporations, and the scientific community. “We’re able to look at things like changes in attitudes and whether people participated in the election and link it to their experiences on Facebook and Instagram,” including exposure to election disinformation, says Joshua Tucker, one of the project’s coordinators and a professor of politics and co-director of New York University’s Center for Social Media and Politics.

Some evidence suggests the impacts might not be as great as feared, says Deen Freelon, a political communication researcher at the University of North Carolina, Chapel Hill. There’s a long history of research, for example, showing that political ads have only marginal influence on voters. And more recent studies have suggested misinformation did not have a major effect on the 2016 election. A study published in Science in 2019 found that 80% of exposure to fake news was concentrated among just 1% of Twitter users. A survey study reported in the Proceedings of the National Academy of Sciences (PNAS) found no evidence that people who engaged with Russian troll accounts on Twitter exhibited any substantial changes in political attitudes or behavior.
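
(The Science result is a statement about concentration of exposure. A toy computation like the one below shows how a “share seen by the top 1% of users” figure is derived; the per-user exposure counts are simulated from a heavy-tailed distribution, not real data.)

import random

random.seed(0)

# Simulated per-user exposure counts drawn from a heavy-tailed (Pareto)
# distribution: most users see almost no fake news, a few see a great deal.
# The exact share printed will differ from the study's 80% figure.
exposures = [random.paretovariate(1.2) for _ in range(10_000)]

exposures.sort(reverse=True)
top_1_percent = exposures[: len(exposures) // 100]
share = sum(top_1_percent) / sum(exposures)
print(f"share of all exposure accounted for by the top 1% of users: {share:.1%}")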

Freelon, who was a co-author on the PNAS paper and is also a member of the Facebook collaboration, says he’s more worried about “second order effects” of disinformation on our culture, such as the general sense of paranoia and distrust it creates. “When people look at social media and can’t figure out what’s true and what’s not, it degrades the overall informational quality of our political conversations,” he says. “It inserts doubt into a process that really shouldn’t have any.”

Source: Science Mag