How to report on AI in elections

By Amarah Ennis, Sep 17, 2024, in Combating Mis- and Disinformation
[Image: a sticker reading "I vote"]

From image generators to chatbots, artificial intelligence (AI) is everywhere today; some AI services boast hundreds of millions of users. With one pivotal election just behind us in Argentina, and many more approaching all around the world in 2024, people are worried about AI’s ability to spread electoral misinformation.

While some of these concerns may be exaggerated, it’s still vital that journalists understand how the growing field of AI could affect the political sphere. 

In a recent ICFJ Disarming Disinformation: Investigative masterclass, AP global investigative journalist Garance Burke discussed how to report effectively on AI’s role in politics.

Get back to basics

Learning basic terms is a good place to start when researching AI and its impacts. Here are some must-know definitions to remember:

  • Artificial intelligence, or AI, is the catch-all phrase for computer systems, software or processes that mimic aspects of human cognition. No AI is on par with a real human brain – yet.
  • An algorithm is a sequence of instructions for solving a problem or performing a task. Algorithms are the basis for speech and facial recognition software, as well as predictive tools, which analyze past data to forecast future events or patterns.
  • Large language models, or LLMs, are AI systems that use statistics to uncover patterns in written data, which they then use to generate responses to queries. ChatGPT is an example of an LLM; essentially, these systems examine vast troves of data from the internet to predict how words can be strung together in a sentence, and then imitate what they find. (A toy illustration of this idea follows this list.)
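To make that last definition concrete, here is a minimal sketch in Python of the statistical idea behind language models: count which words tend to follow which in a body of text, then use those counts to predict a likely next word. The tiny corpus and the predict_next helper are invented for illustration; real LLMs train neural networks on vastly larger datasets, but the underlying "find patterns, then imitate them" loop is the same.

    from collections import Counter, defaultdict

    # A toy "training corpus" standing in for the vast web text a real LLM ingests.
    corpus = (
        "voters cast ballots early . voters cast ballots by mail . "
        "campaigns target voters online ."
    ).split()

    # Count how often each word follows each other word (a simple bigram model).
    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word):
        """Return the word that most often follows `word` in the toy corpus."""
        candidates = followers.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("voters"))  # "cast" - the pattern the model picked up
    print(predict_next("cast"))    # "ballots"

Note that the model has no idea what a ballot is; it only reproduces the statistical patterns in its training text. That is why, as Burke notes later, a chatbot fed circulating falsehoods can repeat them as confidently as facts.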

When interviewing sources about AI systems, it’s important to ask basic questions, too. How do these systems work? Where are they being used? What do they do, and how well are they doing it?

“It’s important to demystify the ways in which these tools work and help bring your audience in so that they can understand as well,” said Burke.

During elections, retain this mindset of simplicity when speaking to election monitors and campaign managers. Ask yourself, said Burke: “If I were a campaign manager, what would I want to predict? What data would be useful to me?”

Research resources

In a tech-heavy subject like AI, outside resources can help journalists understand and communicate the true impact of these tools on elections. The AP Stylebook (which can be accessed through a subscription) recently added an entire chapter on AI reporting. The Aspen Institute also has AI primers for journalists.

If you live in a country where AI technology is developing rapidly, it may be easy to find tech gurus and AI experts. If you don’t, Burke recommended speaking to academics who study AI, as well as NGOs concerned with privacy, digital rights, and tracking AI models. Some examples include Privacy International, Human Rights Watch, Amnesty International, and the Electronic Frontier Foundation.

Regardless of where you live, once you start investigating AI, it’s vital to seek out experts who can provide explanations and insights. “[AI is] one of those realms where engineers feel like journalists never really understand,” said Burke. “You’ll always get plenty of people who will feel the need to tell you how this works from the beginning, and that’s a good thing, because that will build your own confidence and understanding.”

Think regionally

In some countries, AI will be used to spread information, both credible and false, among the public during election seasons. In others, AI will not be nearly as important to the outcome of an election. Consider where you’re reporting, how likely AI is to be used in that location, and how effective it would be among voters there.

“I followed a campaign in the Isthmus of Tehuantepec [in Mexico], and that election was won by handing out t-shirts and bags of beans to people who really needed basic economic support,” said Burke. “So I don’t think every election is going to turn to the use of chatbots to spread disinformation.”

Even if you live and work in a region where AI plays a minor or nonexistent role in spreading election misinformation, it’s still worth researching what kind of data is available to campaigns and interest groups. Learn how that data could be used to influence voters. 

For example, the AP found that in countries such as China, Israel, Australia and India, mass surveillance data originally collected to trace COVID-19 cases was being used in conjunction with AI tools to clamp down on protests, threaten civilians and harass marginalized communities.

Understand AI’s limits

AI can loom as a confusing, even scary, monolith for journalists and the public. 

“Get grounded in what these models can do or can’t do in order to think about how they can be used to spread electoral disinformation or deepen threats to the public trust,” said Burke.

It’s unlikely, for instance, that AI will spontaneously create and spread a new false narrative surrounding elections, à la the ballot drop box or zombie voter conspiracies that have spread in the U.S. However, a person could use AI tools to locate and target individuals who may be susceptible to misinformation. Once false content begins to circulate, an AI chatbot with data aggregated from the internet could regurgitate this information as fact.

“Please keep in mind these are tools built by humans,” said Burke. “Do not fall for the myth that computers can think on their own.”

Put people first

Centering communities and people, instead of code and processes, can make a story resonate better with readers. Burke used the AP’s coverage of a child welfare screening tool in Allegheny County as an example. The newsroom published two different stories on the tool, the first in April 2022 and the second in January 2023.

Though the investigative team at the AP had plenty of information about the algorithms behind the tool, they chose to focus on the people affected by its use. The first story centered on a family law attorney in the county and others working in the child welfare system, whose work and decision-making were affected by the algorithm.

“She said, ‘Even if they designed a perfect tool, it doesn't really matter because it’s designed from very imperfect data systems,’” Burke said, referring to a family therapist who quit in 2020 due to frustrations with the Allegheny system.

The second story focused on a couple with disabilities, whose daughter was taken into foster care. They believed that their disabilities caused the child welfare screening tool to label them as a risk to their daughter’s safety.

By focusing on the people affected, readers were better able to empathize with them while learning about the technology’s impact. In this case, that was absolutely vital: according to the ACLU, 26 states and Washington, D.C. considered using similar AI tools to make child welfare decisions. The article was so effective that the Biden administration contacted the AP’s sources and incorporated their concerns into its Blueprint for an AI Bill of Rights.

At the end of the day, Burke believes, it’s up to the readers to decide which way the double-edged sword of AI will swing. “Our job is not to say whether AI is good or bad, but to help the public understand how these systems work,” she said.


Disarming Disinformation is run by ICFJ with lead funding from the Scripps Howard Foundation, an affiliated organization with the Scripps Howard Fund, which supports The E.W. Scripps Company’s charitable efforts. The three-year project will empower journalists and journalism students to fight disinformation in the news media.

Photo by Mick Haupt on Unsplash.