It’s undeniable that AI will be used to produce disinformation ahead of upcoming elections around the world. What’s unknown is what impact this AI-generated disinformation will have. Among other concerns, will candidates and their teams be transparent with voters about using AI?
Our team at Factchequeado, for example, recently debunked a campaign video, produced by the team of Ron DeSantis, the Florida governor running for president in the U.S., that used AI-generated images. The video shows a collage of six images of former president Donald Trump – against whom DeSantis is competing for the Republican nomination – with former National Institute of Allergy and Infectious Diseases Director Anthony Fauci. Three of the images were generated with AI; they show Trump hugging and kissing Fauci, an unpopular figure among Republican voters.
The video does not make clear to voters that AI was used for these images, which are mixed in with other authentic visuals.
Do we all agree that this is deceptive to the public? Would anyone argue otherwise?
In April 2023, the Republican Party (GOP) used AI to create a video attacking President Joe Biden after he announced he would seek re-election. In this case, the GOP account included a written clarification that the content had been created with AI.
In contrast, the DeSantis campaign video lacked transparency. Nowhere does it warn viewers that AI was used to create some of the images in the video. Our team at Factchequeado was unable to determine what AI software was used to create the images.
In this article we published at Factchequeado, we explain how you can determine whether an image was created with AI. In general, it helps to look for the source of the image in question and find out whether it was published previously. It's also important to pay close attention to small details – a person's hands or eyes, for instance – in search of imperfections or other traces of AI.
Automatic detection tools can also be used to determine if an image was created using AI. However, like many tools, they aren’t perfect and there’s room for error.
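For newsrooms that want to automate part of the first check described above – whether a suspect image matches a previously published original – one common approach is a perceptual hash comparison. The sketch below is illustrative only and is not a tool Factchequeado describes; it assumes Python with the open-source Pillow and ImageHash libraries, and the file names and distance threshold are hypothetical placeholders.

```python
# Illustrative sketch only – not a tool mentioned in the article.
# Assumes Python 3 with the open-source Pillow and ImageHash libraries:
#   pip install Pillow ImageHash
from PIL import Image
import imagehash


def matches_known_original(original_path: str, suspect_path: str, max_distance: int = 8) -> bool:
    """Compare a suspect image against a known, previously published original.

    Perceptual hashes stay similar when an image is merely resized or
    recompressed, so a small Hamming distance suggests the suspect image is
    a copy of the original rather than a new, possibly AI-generated picture.
    """
    original_hash = imagehash.phash(Image.open(original_path))
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    return (original_hash - suspect_hash) <= max_distance


# Hypothetical usage: the file names are placeholders, not real case files.
if __name__ == "__main__":
    print(matches_known_original("known_original.jpg", "image_from_campaign_video.jpg"))
```

A small hash distance suggests the suspect image is essentially a copy of a known original; a large distance means it was altered or created separately, which is a cue to keep investigating manually – on its own, it is not proof that an image was generated with AI.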
It's nothing new that the formats used to spread disinformation are dynamic, and that disinformers evolve and improve their techniques more quickly than those who counteract them. This has happened in every recent election: we have seen manipulated photos, slowed-down videos, "cheapfakes" built from crude image edits (so called because they don't require costly deepfake technology), and falsified WhatsApp audio messages.
Election after election, disinformers surprise journalists, community members and election officials with novel techniques. The “best case scenario,” which still isn’t ideal, is to prepare people to respond to disinformation using methods that were effective during previous election cycles.
More than a year ahead of the 2024 U.S. presidential election, experts are warning about the ways in which AI could threaten or undermine its legitimacy. Some have also offered recommendations about how to protect elections.
The Brennan Center for Justice published an analysis titled, “How AI Puts Elections at Risk — And the Needed Safeguards” in which Mekela Panditharatne, a disinformation expert at the Brennan Center, and Noah Giansiracusa, associate professor of mathematics and data science at Bentley University, describe the dangers posed by AI-generated election-related disinformation.
They call on the federal government, AI developers, social media companies and other agencies to step up their efforts to protect democracy.
They suggest the following actions be taken:
- Refine AI filters to eliminate election falsehoods and impose limitations that make large-scale disinformation campaigns more difficult.
- Have the federal government create tools to counter both deepfakes and phishing, and share these tools with state and local election authorities to help them identify AI-generated content.
- Designate a lead agency in the executive branch to coordinate the governance of AI issues in elections.
If you are a reporter, editor or owner of a media outlet that serves Latinos in the U.S., contact Factchequeado and join our community.
Main image courtesy of Factchequeado.
This article was originally published by IJNet in Spanish. It was translated into English by journalist Natalie Van Hoozer.