Understanding deepfakes and how to counter them

Mar 17, 2023 in Combating Mis- and Disinformation

From Snapchat face swap filters to U.S. President Joe Biden singing Baby Shark, manipulated media has proliferated in recent years. Deepfakes and other forms of synthetic and AI-assisted media manipulation are on the rise, and journalists tasked with sorting fact from fiction are forced to keep pace.

In a recent ICFJ Empowering the Truth Global Summit session, ICFJ spoke with shirin anlen, a media technologist at WITNESS. WITNESS, a nonprofit that helps people use video and technology in support of human rights, runs a project called “Prepare, Don’t Panic,” focused on countering the malicious use of deepfakes and synthetic media, among other initiatives.

anlen offered tips for how journalists can better understand the threats posed by manipulated media, and what can be done to counter them. Here are some key takeaways from the session.

Technologies and their threats

Rapidly evolving technologies are allowing users to edit objects and facial features, animate portraits, transfer motion, clone voices, and more. As part of this ecosystem, deepfakes are a type of audio-visual manipulation that allows users to create realistic simulations of people's faces, voices and actions. 

Deepfakes produced today are used in alarming ways. They have promoted gender-based violence, for instance, through the nonconsensual posting of sexual images and videos using a person’s likeness. Falsified videos of public figures have also been disseminated, as have fabricated audio clips. Deepfakes also benefit from the “Liar’s Dividend” – the ability of bad actors to dismiss even authentic media as fake – which places an extra burden on journalists and fact-checkers to prove a piece of media’s authenticity, or lack thereof.

They are the most widely discussed form of manipulated media, anlen noted: “Deepfake itself is part of a larger generating landscape that we keep seeing more and more in the news.”

Although deepfakes are becoming more prevalent, they aren’t as popular as one might be led to believe. They require a significant amount of skill and knowledge in order to execute properly, making them difficult for the average person to create. A lot of manipulated media, as a result, doesn’t quite rise to the level of a true deepfake. 

Filters that change a person's hair, eye color or voice, for instance, are lesser manipulations called “shallow fakes” that people may come across on a daily basis, especially on social media. AI-generated voice clips of made-up quotes from public figures are another example of shallow fakes.

“It's not really been used on a large scale,” anlen said about deepfakes. “Most of what we are still seeing in the media misinformation and disinformation landscape [are] shallow fakes, which are mostly contextual recycled materials.”


Every new technology has an Achilles’ heel, and deepfakes are no exception. Users can detect errors in appearance, for instance: generated images may have static in the background or teeth out of alignment, or an AI-generated person in a video may speak without the words properly matching their mouth movements.

The technology adapts rapidly, however. “Research came out and said, ‘deepfakes don't blink, so it will be really easy to detect because it just doesn't blink,’ and then two weeks later, deepfakes started to blink,” said anlen. 

Detection efforts struggle to keep up in a cat-and-mouse game with manipulated media that is becoming higher quality and easier to access.

“The first generation of fake faces all had eyes in the center. They were always in the center, so that's what the detection was looking for,” said anlen. “But now we have so many different variations of people that are being generated with different lighting, different expressions – and the eyes are not in the center anymore.”

There are also gaps in who has access to quality detection tools, anlen explained. While there are websites that anyone can access, these tools tend to be less effective. The most accurate detection tools remain available only to a select few deepfake experts.


To spot deepfakes, journalists can review video content for glitches and distortions, apply existing verification and forensic techniques, and use AI-based detection approaches when available.

An increase in media literacy tools and more training on manipulated media for journalists are also essential. 

“We need to prepare for it and we need to see it,” said anlen. “We need to understand the landscape in order to really shape the technology, shape how it's supposed to be built, how it's supposed to be regulated and be part of it – and not just be affected by it.”

Disarming Disinformation is run by ICFJ with lead funding from the Scripps Howard Foundation, an affiliated organization with the Scripps Howard Fund, which supports The E.W. Scripps Company’s charitable efforts. The three-year project will empower journalists and journalism students to fight disinformation in the news media.

Photo by rishi on Unsplash.