Tackling deepfakes in journalism

Jun 7, 2022 in Combating Mis- and Disinformation

As manipulated media becomes more prevalent, journalists will need to be able to identify its many different forms, and educate their audiences about it.

Deepfakes are among the more prominent examples of manipulated media today, said Sam Gregory, program director for WITNESS, an organization that uses video and technology to defend human rights, during a recent two-part ICFJ Global Crisis Reporting Forum webinar.

“Most of our work is helping people to create trustworthy information. When we look at deepfakes, part of the solution is how we reinforce an ecosystem of trustworthiness,” explained Gregory.

In the face of a constant struggle between credible reporting and media intended to deceive, journalists and human rights defenders must be equipped with the most effective tools and techniques to combat the latest in misinformation tactics. 

Here’s what they need to know:

The technology

There are many types of deepfakes to be concerned about if you’re a journalist. Examples include removing items from or altering backgrounds of images and videos, manipulating facial expressions or people’s movements, and creating new faces altogether.

Deepfakes are a serious threat, even if the extent of the technology and its current impact are overblown at times. While some in the media have predicted, for example, that deepfakes could sway political processes, they haven’t yet significantly impacted recent elections.

“There is this rhetoric around deepfakes, which is this idea that they are going to destabilize all possible truth,” said Gregory. This doesn’t take into account how technology to detect such efforts is improving, even if some deepfakes remain difficult to spot. “Although we think about deepfakes as very hyper-realistic face swaps, it still is the hardest part of this spectrum to do, and requires the most resources,” he added.

Still, as technology improves the ability to create deepfakes easily and cheaply, especially on mobile devices, journalists need to remain vigilant of future threats.

Future threats

Advances in technology are making it simpler to produce deepfakes, with less training data and technical skill required. Perpetrators are better able to manipulate audio clips, photos and videos, as well as combinations of multimedia content.

This is especially concerning for women journalists, as the most common form of manipulated media is falsified sexual images used to silence them, noted Gregory. It’s also an area where countermeasures fall short today. “Detection is inadequate,” he said. “It’s problematic and there aren’t a great range of solutions in the area.”

Deepfakes also force journalists to spend time and money to prove an image is not manipulated. As deepfakes become simpler to produce, this can become a significant burden, especially for journalists with less access to resources. “We need to look at how this contributes to existing challenges for journalists who are under-resourced,” he said.

Today, shallowfakes — mis-contextualized or repurposed media, and simple edits intended to deceive — remain more prevalent than deepfakes. Fortunately, these too still represent a relatively low level of threat.


Combating deepfakes

The same tools and techniques used to create synthetic media can also be leveraged to detect it. Journalists should look for glitches in videos, and apply existing verification and forensics techniques to spot manipulated media. They can also utilize encryption and emerging AI-based tactics like infrared detection.
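As one concrete illustration of the kind of forensics technique mentioned above, below is a minimal sketch of error level analysis (ELA), a long-standing image-forensics check, written in Python with the Pillow library. It is not a method described by Gregory or WITNESS, and the file names are placeholders: ELA re-saves a JPEG at a known quality and compares it to the original, since regions edited after the original save tend to compress differently.

    # Minimal error level analysis (ELA) sketch. Assumes the Pillow
    # library is installed and a placeholder input file, "suspect.jpg".
    from PIL import Image, ImageChops, ImageEnhance

    original = Image.open("suspect.jpg").convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=90)
    resaved = Image.open("resaved.jpg")

    # Per-pixel difference between the original and the re-saved copy;
    # regions edited after the original save tend to stand out.
    diff = ImageChops.difference(original, resaved)

    # Rescale the difference so the brightest regions are easy to see.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    ImageEnhance.Brightness(diff).enhance(255.0 / max_diff).save("ela_result.png")

A check like this proves little on its own; it is most useful alongside the verification techniques journalists already use.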

As they engage in efforts to combat deepfakes, journalists should keep in mind ethical considerations, as well. Deepfakes can be used for satire or to protect identities, for instance, leading to some hesitation around efforts to stem their use.

In addition, journalists should ask themselves: 

  • How can we teach people to spot deepfakes? 
  • Do tools for detection exist, and who has access? 
  • How do we build on existing journalistic skills and coordination? 
  • Do tools for authentication exist, and who, again, may lack access?

Existing tools and strategies have their limitations, however. “These tools are just starting to be available, [and] they have the hardest time dealing with content [journalists] encounter,” Gregory said, adding that one challenge today is that organizations place more priority on profit than on solutions.

When they publish, journalists should include evidence that shows readers their content is not false. They should also incorporate information about synthetic media into media literacy efforts for their audiences, Gregory recommended. One helpful approach to share is media literacy expert Mike Caulfield’s SIFT method: “Stop, Investigate the source, Find better coverage, and Trace claims, quotes and media to the original context.”

Meanwhile, organizations like Google, Adobe and The New York Times are developing tools to help journalists identify and counter deepfakes, and to ensure their work carries evidence of its legitimacy.

“It is important to center journalists as one of the key groups who need to really identify what they need in this landscape,” said Gregory. 


Photo by Mediamodifier on Unsplash.