What journalists should know about AI-generated disinformation

Oct 25, 2023

Fears of AI-generated disinformation are rampant. Journalists and experts warn that it can be used to deceive audiences, sway public opinion and incite violence.

Although these concerns are widely voiced, the precise impact of AI-generated disinformation remains unclear. What is evident, however, is that we can expect a surge in disinformation as AI makes misleading content easier to produce.

AI and disinformation today

AI technologies, such as voice cloning software, large language models and text-to-image generators, have made it easier to create misleading content. On social media, in particular, it is not uncommon to come across false AI-generated videos, pictures and audio clips. 

In one recent example, clips of U.S. President Joe Biden were combined to create a derisive video portraying a fictional day in his life. The video’s voiceover sounds disconcertingly like Biden himself, though the content of the speech is false. Users online have also circulated fabricated images of former U.S. President Donald Trump being arrested, and edited pictures depicting Indian Prime Minister Narendra Modi and Italian Prime Minister Giorgia Meloni getting married.

Despite this increase in volume, a peer-reviewed paper published this month in the Harvard Kennedy School’s Misinformation Review argues that, with respect to false and misleading content, the “current concerns about the effects of generative AI are overblown.”

Even if the quantity of misleading generative AI content increases, the paper argues, it would have “very little room to operate.” Although a significant amount of misinformation circulates online, it is consumed by only a small fraction – approximately 5% – of American and European news consumers.

“It's the kind of thing that people can buy into, and it's an easy story to tell ourselves that automatically generated content is going to be a problem,” said Andrew Dudfield, head of product at Full Fact, a U.K.-based fact-checking organization. 

These arguments stem in part from the fact that existing modes of disinformation are already effective without AI. They also emphasize that predictions that AI-generated content will be more consequential because of its higher quality lack concrete evidence and remain speculative.

“I don’t think we’re there yet,” said Aimee Rinehart, senior product manager of AI strategy at the Associated Press. “It stands to reason that we are going to have some problems. But I don't know if the internet is caving in on information that's problematic yet.”

Why we fall for disinformation in the first place

Although media organizations are worried about realistic AI-generated content, “people tend to believe just a photoshopped image of a member or a politician with a fake phrase she never said,” said David Fernández Sancho, the CTO of Maldita.es, an independent fact-checking organization based in Spain.  

Arguably, the effectiveness of disinformation campaigns depends not on the quality of the content, but on people’s pre-existing belief that something may be true.

A study by researchers at New York University, which examined how partisanship affects belief in and dissemination of misinformation, found that “people were more willing to believe and share news consistent with their political identity.” Another paper, titled “Why We Fall for Fake News,” found that acceptance of false information is a result of cognitive biases rather than gullibility. Partisanship and political orientation, it adds, shape what people believe or reject in the news.

People visit information platforms for various reasons, explained Chris Wiggins, associate professor of applied mathematics and systems biology at Columbia University. Among them: people want to reaffirm their pre-existing beliefs, and to connect with like-minded individuals in search of social inclusion.

“There is the need to be included and to feel this social connection to other people,” he said. “You see somebody [...] with whom you identify because you've read their content before – it's really like this feeling of inclusion, like I'm part of the set of people who believes this thing.”

Using AI to combat disinformation

Journalists, scholars and policymakers can also harness AI to analyze and help combat the formidable volume of disinformation they confront, whether it’s AI-generated or not.

However, Dudfield cautioned against fully automated fact-checking at this stage, noting that the human mind is better equipped to find “context and caveats and nuances” in online content. “AI is very good at doing something else, which is bringing order to chaos,” he said. It excels at organizing unstructured information, spotting patterns and grouping similar data, which streamlines fact-checking. By creating a more manageable list of content that can then be verified by human fact-checkers, it can save considerable time compared to manual monitoring.
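To make that “order to chaos” idea concrete, here is a minimal sketch – not any organization’s actual pipeline – of how near-duplicate claims might be grouped so that human fact-checkers review each distinct claim only once. It uses only Python’s standard library; the similarity measure, the 0.75 threshold and the greedy grouping are illustrative assumptions, and production systems typically rely on semantic embeddings rather than raw string matching.

```python
# Illustrative sketch: group near-duplicate claims so that human
# fact-checkers only need to verify each distinct claim once.
# Assumptions: claims arrive as plain strings; a crude string-similarity
# ratio stands in for the semantic matching a real system would use.
from difflib import SequenceMatcher

def similarity(a, b):
    """Rough textual similarity between two claims, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def group_claims(claims, threshold=0.75):
    """Greedily assign each claim to the first group it resembles."""
    groups = []
    for claim in claims:
        for group in groups:
            if similarity(claim, group[0]) >= threshold:
                group.append(claim)
                break
        else:  # no existing group matched: start a new one
            groups.append([claim])
    return groups

claims = [
    "Crime fell by 40% last year, the minister said.",
    "Crime fell by 40 percent last year, the minister said.",
    "A new law will ban petrol cars from 2030.",
]
for group in group_claims(claims):
    print(len(group), "similar claim(s), e.g.:", group[0])
```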

Full Fact, for instance, has developed software called Full Fact AI, which helps fact-checkers identify content that can be fact-checked – for example, statements, as opposed to opinions or presumptions – and determine who made each statement, where it was said and what topic it covers. The tool also groups similar statements together, enabling fact-checkers to better identify what is most important to verify.
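Full Fact’s actual models are trained classifiers, so the following is purely hypothetical; it only illustrates the statements-versus-opinions distinction the tool draws, using a naive pattern filter. The marker words and the regular expression are invented for the example.

```python
# Hypothetical illustration of "checkworthiness" filtering: keep
# verifiable statements, drop obvious opinions. Real systems such as
# Full Fact AI use trained models, not keyword lists like this one.
import re

OPINION_MARKERS = ("i think", "i believe", "in my opinion", "best", "worst")

def looks_checkable(sentence):
    s = sentence.lower()
    if any(marker in s for marker in OPINION_MARKERS):
        return False  # hedged or evaluative language: likely opinion
    # Numbers, percentages and years often signal a verifiable claim.
    return bool(re.search(r"\d+%?|\b(19|20)\d{2}\b", s))

for s in ["Unemployment fell to 3.8% in June.",
          "I think this is the worst policy in years."]:
    print(looks_checkable(s), "-", s)
```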

Spain’s Maldita.es uses AI to identify and categorize common narratives across different news sources, making it easier to trace the sources of misleading content. “If we see, for example, a lot of content in a short space of time that share(s) the same narrative, chances are it’s something organized and it’s not just organic content that gets created,” said Fernández Sancho, adding that this can help identify coordinated disinformation campaigns.
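The pattern Fernández Sancho describes – many items pushing the same narrative within a short window – can be approximated with a simple sliding-window count, sketched below. The one-hour window and the threshold of 20 posts are illustrative assumptions, not Maldita.es’ actual parameters.

```python
# Illustrative sketch: flag narratives that spike in volume within a
# short time window, a possible sign of coordinated (non-organic) pushes.
from collections import defaultdict
from datetime import datetime, timedelta

def flag_bursts(posts, window=timedelta(hours=1), threshold=20):
    """posts: iterable of (timestamp, narrative_id) pairs."""
    by_narrative = defaultdict(list)
    for ts, narrative in posts:
        by_narrative[narrative].append(ts)
    flagged = set()
    for narrative, times in by_narrative.items():
        times.sort()
        start = 0
        for end, ts in enumerate(times):
            while ts - times[start] > window:
                start += 1  # slide the window forward
            if end - start + 1 >= threshold:  # many posts in one window
                flagged.add(narrative)
                break
    return flagged

# Example: 25 posts pushing one narrative within 25 minutes get flagged.
base = datetime(2023, 10, 25, 12, 0)
posts = [(base + timedelta(minutes=i), "wedding-hoax") for i in range(25)]
print(flag_bursts(posts))  # {'wedding-hoax'}
```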

Maldita.es is also part of a consortium under the EU’s AI4TRUST project, which will develop a hybrid system “where machines cooperate with humans,” enabling real-time monitoring of social platforms to flag potentially misleading content in various formats and languages for experts to review.

Opening the space of capabilities

It may still be too early to predict the exact implications of AI in the disinformation ecosystem. It is evident, however, that in addition to the cautionary tales, AI can be beneficial to journalists and fact-checkers combating the increase in false content. 

Kranzberg’s first law of technology – “technology is neither good nor bad; nor is it neutral” – rings true here. “Every technology opens up the space of capabilities for people,” said Wiggins. “Some people will use it for rights, justice and other people will use it for oppression and harm.”


Photo by Michael Dziedzic on Unsplash.