New research suggests Twitter's strategy for fighting misinformation is ineffective

by Laura Hazard Owen
Oct 17 in Fact-checking and verification

Enough with the “whack-a-mole” claims that as soon as you ban one fake news site, another one pops up: A report released by Knight on Oct. 4 finds that most of the accounts spreading fake news on Twitter during the election are still active today — and that “these top fake and conspiracy news outlets on Twitter are largely stable,” because Twitter has not banned them.

The findings are consistent with recent research by Matthew Gentzkow, Hunt Allcott, and Chuan Yu that found that, while engagements with fake news on Facebook have decreased, shares of fake news on Twitter have increased since the election.

Knight worked with a firm called Graphika to analyze more than 10 million tweets from 700,000 Twitter accounts that linked to more than 600 fake and conspiracy news outlets, both during and after the 2016 U.S. presidential election. (They worked with the list of news sites at OpenSources, which is maintained by Merrimack College’s Melissa Zimdars. The sets of outlets they looked at were classified as either “fake” or “conspiracy”; OpenSources also has a number of other categories, like “satire” and “extreme bias,” that weren’t included.) They found that “more than 80 percent of the disinformation accounts in our election maps are still active as this report goes to press.” Also:

Sixty-five percent of fake and conspiracy news links during the election period went to just the 10 largest sites, a statistic unchanged six months later. The top 50 fake news sites received 89 percent of links during the election and (coincidentally) 89 percent in the 30-day period five months later. Critically — and contrary to some previous reports — these top fake and conspiracy news outlets on Twitter are largely stable. Nine of the top 10 fake news sites during the month before the election were still in or near the top 10 six months later.

Twitter, Knight says, isn’t taking these accounts down.

Twitter has claimed repeatedly that it has cracked down on automated accounts that spread fake news and engage in “spammy behavior.” Yet of the 100 accounts that were most active in spreading fake news before the election — the large majority clearly engaged in “spammy behavior” that violates Twitter’s rules — more than 90 were still active as of spring 2018. Overall, 89 percent of accounts in our fake and conspiracy news map remained active as of mid-April 2018. The persistence of so many easily identified abusive accounts is difficult to square with any effective crackdown.

One other thing Knight’s report suggests: Banning works. Conspiracy site The Real Strategy, which was a proponent of Pizzagate among other hoaxes, was “the second-most linked fake news site on our election map.” Twitter (and Reddit) banned it, however, and “links to The Real Strategy largely disappeared in the postelection data” — after the ban, links to it fell by 99.8 percent. “The case of The Real Strategy suggests that concerted action can indeed be effective in drastically reducing links to fake and conspiracy news,” the researchers write, “providing that platforms like Twitter and Reddit are willing to act decisively.”

Del Harvey, Twitter’s Global VP of trust and safety, disputed Knight’s findings. “This study was built using our public API and therefore does not take into account any of the actions we take to remove automated or spammy content and accounts from being viewed by people on Twitter. We do this proactively and at scale, every single day,” she said in a statement. “Secondly, as a uniquely open service, Twitter is a vital source of real-time antidote to day-to-day falsehoods. We are proud of this use case and work diligently to ensure we are showing people context and a diverse range of perspectives as they engage in civic debate on our service.”

Twitter banned millions of suspected fake accounts over the summer — after the report’s data was collected — so it’s possible that some of the accounts Knight found still active in the spring of 2018 are gone now. Twitter also announced this week that it’s “updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines. We now may remove fake accounts engaged in a variety of emergent, malicious behaviors.”

The full report, which includes cool maps and graphic representations of the various groups spreading fake news on Twitter, is here.

This post originally appeared on NiemanLab. It was republished on IJNet with permission.

Main image CC-licensed by Unsplash via Marten Bjork.