Since ChatGPT was released to the public in November 2022, generative artificial intelligence (AI) has dominated the public’s attention. The rapidly evolving technology is already impacting industries such as education, travel and government. The media won’t be spared.
Generative AI changes the way content is both made and consumed, leading to new challenges in the fight against disinformation.
In a recent ICFJ Disarming Disinformation investigative master class, Aimee Rinehart, senior product manager for AI strategy at The Associated Press (AP), shared insights on generative AI and how it affects disinformation.
Here is what journalists need to know today:
Rinehart suggested three ways journalists can determine whether an image may have been generated using AI.
1) Look for peculiar details on people, like distorted hands, an extra leg, or mismatched jewelry.
2) Check for distortions or blurriness in the backgrounds of images, and examine shadows or lack thereof for legitimacy.
3) Find other camera angles or pictures to confirm details in an image.
Rinehart also shared the SIFT Method for fact-checking: Stop, Investigate the Source, Find Better Coverage, Trace Claims, Quotes, and Media to their Original Context. The method was developed by Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public.
“We’re entering an era of new technology, but it is really a good idea to get back to basics so that you can identify [the source of content],” said Rinehart.
Before sharing a questionable or controversial image with others, think carefully about its validity. Looking for strange details, or checking for corroborating angles and images can help determine legitimacy and prevent the spread of false information.
Public harm and impact
This year, major social media platforms rolled back their efforts to combat disinformation. As a result, there may be less oversight of protective measures online going forward, which could leave the public more vulnerable to misinformation.
In the U.S., Republican political leaders are also seeking to curb federal spending on research into false information online. The cutbacks in funding and resources will likely impact our understanding of how the public is affected by the spread of mis- and disinformation.
“It’s caused a real freezing effect of funds toward mis- and disinformation research. We will likely know less about [the 2024] election because of that and we won’t really understand where, if any, harms came in,” said Rinehart.
Still, it’s not a foregone conclusion that generative AI will intensify public consumption of misinformation. In one recent study, researchers found that a greater volume of AI-generated false content does not necessarily mean that people consume more of it.
“If people want problematic content, they will get it. It’s not that they are going to be flooded and have no way else to look at nothing, no alternative but to look at the problematic content and share it,” Rinehart said.
Upgrades in AI technology are enhancing the quality of false content, for instance with more lifelike details.
“DALL-E3 and probably ChatGPT4 and other advances are going to take better guesses at what you meant to say in your prompt,” Rinehart said.
Researchers and software developers alike have shown that modifying generative AI tools is both affordable and feasible. An experiment covered by WIRED showed how developers were able to manipulate software to generate anti-Russian content for just $400. A data scientist from New Zealand, meanwhile, built a right-leaning chatbot for less than $300.
“You can expect to see likely other people using this to their own advantage, whether they’re running for office or running a business and have a lot to gain by shifting the narrative,” Rinehart said.
Some companies are pioneering efforts to trace AI-generated images, though with varying success to date. Watermarks, for instance, can be easily evaded or manipulated. OpenAI, meanwhile, is developing a machine learning tool to identify images created by DALL-E, which the company says it expects to be 99% accurate.
Implications for elections
Generative AI will be influential as many countries hold elections in 2024. In the U.S., generative AI is already being used in election attack ads, audio, videos and images.
The campaign team for Republican presidential candidate Ron DeSantis generated false images of former president Donald Trump embracing Anthony Fauci, the former chief White House medical adviser. In a separate ad, a PAC supporting DeSantis used AI to generate Trump’s voice in a video attacking Iowa’s Republican governor, Kim Reynolds.
In countries where people speak many languages, generative AI can help political campaigns reach diverse audiences they otherwise couldn’t. In India, for instance, generative AI was used to produce a humorous viral song about Prime Minister Narendra Modi in several different languages.
“[ChatGPT is] on people’s radar. I think if you’re a fact-checker, you’ve already seen the ramifications, especially in the U.S.,” said Rinehart. “As a fact-checker in the U.S., you’re probably already aware that your job is going to be harder in the next two to three years. If ever there was a time to share resources, tactics, and techniques, now is the time.”
Disarming Disinformation is run by ICFJ with lead funding from the Scripps Howard Foundation, an affiliated organization with the Scripps Howard Fund, which supports The E.W. Scripps Company’s charitable efforts. The three-year project will empower journalists and journalism students to fight disinformation in the news media.