Fact-checking at Meta: Chronicle of a death foretold

By Ana Prieto
Jan 16, 2025, in Combating Mis- and Disinformation

With his January 7 announcement ending Meta’s fact-checking efforts in the U.S. and his stated intention to “get rid” of fact-checkers, Meta CEO Mark Zuckerberg subjected journalists to the same treatment they typically receive from disinformers.

Omitting the fact that it was the company he leads that launched the Third-Party Fact-Checking Program to save its reputation following the Cambridge Analytica scandal and allegations of Russian interference through Facebook in the 2016 U.S. election, a carefully concerned Zuckerberg described those who have worked to clean up his platforms over the past nine years as engaging in censorship, and he blamed them for destroying “more trust than they’ve created.”

For those of us who have worked as fact-checkers under partnerships with Meta, the move is not entirely surprising. In August 2024, Zuckerberg expressed regret in a letter to Rep. Jim Jordan, chairman of the U.S. House Judiciary Committee, over having given in to pressure from the Biden Administration during the pandemic to “censor” viral content on Facebook and Instagram. He did not specify what content he was referring to, but those of us who immersed ourselves daily in the most toxic corners of his social networks know that our fact checks were limited to curbing the spread of videos that promoted the use of bleach to cure COVID-19, or that discouraged vaccination by claiming the vaccine could cause infertility or genetic mutations.

There were more signs, too. During the uncertain, rumor-filled months of 2020, fact-checkers had no trouble finding deceitful and potentially harmful content designed to proliferate. The company provided us with effective internal search tools that allowed us to track the evolution of posts created to mislead and manipulate. We verified selected content using at least two expert sources and a series of guidelines established by the International Fact-Checking Network.

By 2022, these tools were only a shell of what they once were. What in the past had helped us track down disinformation — often laced with hate speech — now turned up little more than promotions for hair loss products and dietary supplements. The platform appeared to be deliberately weakening the tools that helped us perform the fact-checking service it had commissioned, and paid for. 

However, we still had CrowdTangle: an outstanding external monitoring tool used not only by fact-checkers but also by media outlets, activists, and researchers who relied on it for social listening. Facebook, which acquired the tool in 2016, decided to shut it down in August 2024.

The end of intentions

What Zuckerberg described as “censorship” are the efforts of Meta’s verification partners (such as PolitiFact, Chequeado, AFP, Aos Fatos, Reuters, Correctiv, Maldita.es, Africa Check, AP and many others) to explain that HIV does exist, that climate change is a fact, and that the photo of Vladimir Putin kneeling before Xi Jinping was created with artificial intelligence.

These partners do not remove content from Meta's platforms; instead, they add warnings to posts indicating that they contain false or misleading information, an action that immediately limits their ability to go viral. What happens next is up to the tech giant: it can suspend the misinformer's posting privileges for a few days, permanently delete their account, or take no action at all.

The Third-Party Fact-Checking Program is not perfect. False information is often shared innocently. Some of it is harmless, and not all of it deserves the same treatment as false content deliberately designed to mislead, manipulate, and generate clicks and revenue. Additionally, Meta's appeal system, which allows users to challenge a “false” or “partially false” rating, is complicated and unintuitive, often leaving those who wish to file a claim feeling powerless.

And then there’s the issue with politicians: Meta does not allow the direct verification of statements made by political figures, meaning their posts on Facebook, Instagram, or Threads — which often gain massive traction — can’t receive warning labels, even if they make ludicrous and polarizing false claims, such as that the minimum wage in Argentina is $1,100/month (it’s actually under $300/month), or that certain countries “empty their jails and mental institutions” into the U.S. That’s to say nothing of more blatant permissions, such as the carte blanche Facebook gave former Philippine President Rodrigo Duterte to justify his deadly anti-drug campaign with a torrent of lies.

Despite its shortcomings and limitations, Meta's fact-checking program was the only barrier against disinformation on its platforms. It showed some intention on the company's part to acknowledge and address the problem. 

Now, instead, Meta seems to acknowledge disinformation as part of its business model, on the doorstep of a U.S. administration that appears to view it as a (fundamental) part of its political strategy. The company is following in the footsteps of Elon Musk's X in promoting community notes as a mechanism supposedly to “preserve” free speech. The move complements another Zuckerberg announcement aimed at appeasing more radical conservatives: the removal of “restrictions on topics like immigration and gender,” which is likely to fuel an intensification of the hate speech that already thrives on its platforms.

Meta’s new measures currently focus on fact-checking operations in the U.S., but there is little doubt they will extend further. This scenario, in which journalism is once again the scapegoat for the alleged restriction of public freedoms, presents an opportunity for the fact-checking profession to re-evaluate its dependence on Big Tech, which increasingly uses the argument of free expression to serve its own interests, and will do anything to build on its already immense political power.


Photo by Julio Lopez via Pexels.