Fact-checkers propose solutions to mis- and disinformation on YouTube

Jan 20, 2022 in Combating Mis- and Disinformation

False and purposely misleading content runs rampant on the internet. From conspiracy theories to rumors presented as fact, mis- and disinformation seem to be everywhere — especially on social media. 

The problem has been especially evident during the pandemic. Social media users are more likely than non-users to say they have been exposed to misinformation about COVID-19, according to the latest Digital News Report.

At the center of this controversy, Facebook, Twitter and YouTube have taken some steps to address the issue, such as labeling or removing certain content. Yet mis- and disinformation continue to spread, and critics say that more needs to be done. 

In a recent open letter to YouTube CEO Susan Wojcicki, the International Fact-Checking Network proposed a series of solutions to address the issue. “As an international network of fact-checking organizations, we monitor how lies spread online — and every day, we see that YouTube is one of the major conduits of online disinformation and misinformation worldwide,” reads the letter signed by more than 80 organizations from different countries, adding that YouTube is “allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves.”

Hate speech in Brazil, unsubstantiated accusations of election fraud in the U.S. and Taiwan, and content that falsely denies human rights abuses in the Philippines during its period of martial law are some of the examples the letter cites. It further notes: “From the eve of the U.S. presidential election to the day after, YouTube videos supporting the ‘fraud’ narrative were watched more than 33 million times.” 

Last year alone, “millions of other users were watching videos in Greek and Arabic that encouraged them to boycott vaccinations or treat their COVID-19 infections with bogus cures,” the fact-checkers’ letter explains.

“[Fact-checking organizations] want, first and foremost, to open up a transparent dialogue and effective collaboration with YouTube,” Natália Leal, CEO of Brazil’s Agência Lupa, told IJNet. “We suggest that the platform be more transparent about its operations, that it talk to fact-checkers and support their work, that it punish repeat misinformers and give us an insight into it.”

In response to the letter, YouTube spokesperson Ivy Choi said in a statement that the company collaborates with “hundreds of publishers around the world,” and that it has launched fact-check panels in countries like the U.S., India, Brazil, Germany, Indonesia and the U.K. 

The company’s hate and harassment policies prohibit “conspiracy theory content that is used to justify real-world violence.” “Hate speech is not allowed on YouTube, and we remove content promoting violence or hatred against individuals based on protected attributes,” said Choi.

YouTube also noted the low consumption of “recommended borderline misinformation” on the platform. “Only about 0.21% of all views are of violative content,” which it later removes, according to Choi. 

This hasn’t been enough to rein in mis- and disinformation on the platform, however. “Dis/misinformation is part of what YouTube calls ‘borderline information,’ but this concept is broader,” said Leal. “That's why we advocate for programs and actions focused on dis/misinformation with open dialogue with fact-checkers and researchers, and providing context rather than a systematic removal of content.”

Noting that current measures are “proving insufficient,” fact-checkers included a list of proposed solutions that they believe would help reduce the spread of mis- and disinformation on YouTube.

Here are their suggestions:

  • “A commitment to meaningful transparency about disinformation on the platform: YouTube should support independent research about the origins of the different misinformation campaigns, their reach and impact, and the most effective ways to debunk false information. It should also publish its full moderation policy regarding disinformation and misinformation, including the use of artificial intelligence and which data powers it.”

  • “Beyond removing content for legal compliance, YouTube’s focus should be on providing context and offering debunks, clearly superimposed on videos or as additional video content. That only can come from entering into a meaningful and structured collaboration taking the responsibility and systematically investing in independent fact-checking efforts around the world that are working to solve these issues.”

  • “Acting against repeat offenders that produce content that is constantly flagged as disinformation and misinformation, particularly those monetizing that content on and outside the platform, notably by preventing its recommendation algorithms from promoting content from such sources of misinformation.”

  • “Extend current and future efforts against disinformation and misinformation in languages different from English, and provide country- and language-specific data, as well as transcription services that work in any language.”

YouTube also said that it appreciates this feedback and believes that there’s more nuance to take into account. The company agrees, for instance, that connecting viewers to high-quality information through fact-checking, information panels and ranking authoritative content “is the best approach.”

There is some cause for optimism, according to Leal. “We believe that soon we will be able to talk about actions to be taken that, in fact, would be effective and contribute to the fight against dis/misinformation on YouTube,” she said.


Photo by Christian Wiediger on Unsplash.

Fabiana Santos contributed to this article.