AI and media freedom: A double-edged sword

May 5, 2025 in Press Freedom
A robot hand and a human hand trying to touch each other with "AI" in the middle

Artificial intelligence is reshaping multiple industries, including journalism. In the week of World Press Freedom Day, it’s worth considering how the technology can affect media freedom. The picture is complex: on the one hand, AI, including generative AI, can be a powerful tool to support newsrooms; on the other, it can be weaponized against them.

Globally, the state of press freedom was classified as a “difficult situation” in the latest RSF World Press Freedom Index, the first time this label has been used to categorize the overall state of media freedom worldwide. 

 

Map of the RSF World Press Freedom Index. Image courtesy of Reporters Without Borders.

 

Against this backdrop, the U.N. notes that “AI brings new risks.” It “can be used to spread false or misleading information,” the organization observes, “increase online hate speech, and support new types of censorship.”

At the same time, AI can be used to filter the viewpoints that audiences are exposed to, as well as monitor journalists and citizens. Furthermore, “there are growing worries that AI may make global media too similar,” the U.N. reflects, as AI-driven content may homogenize reporting, and potentially “push out smaller media outlets.”

Navigating these challenges is crucial for newsrooms, journalists, and media advocates worldwide, especially at a time when press freedom is under increasing threat. Encouragingly, AI can also be used to push back against these challenges. Here’s a walkthrough of both sides of this dynamic.

Four ways AI threatens media freedom

(1) Surveillance and targeting

AI technologies are increasingly being used to monitor and intimidate journalists. As the Journal of Democracy notes, “AI-driven bots and algorithms bombard activists, journalists, and opposition figures with harassment, trolling, and false information.”

Governments and other actors deploy facial recognition systems, predictive analytics, and other AI surveillance tools to track reporters' movements, monitor online activities, and suppress dissent and critical reporting. 

These tactics are not new. As far back as 2021, at least 180 journalists around the world were known to have been targeted by Pegasus, a form of spyware. But each wave of AI enables this anti-journalistic aggression to intensify. 

The rise of AI-powered surveillance increases the risks faced by reporters and their sources, especially those pursuing investigative work. This can have a chilling effect on coverage of subjects such as corruption, human rights abuses, and organized crime, which journalists should be able to pursue without reprisal.

(2) Deepfakes and disinformation

The spread of misinformation through AI-generated deepfakes is another longstanding area of concern. These anxieties have only heightened as the technology for manipulating or fabricating video, images, and audio has improved.

Although some early deepfakes were quickly debunked, they demonstrated how AI can be used to spread misinformation. The speed with which the technology is evolving, incorporating memes, fake audio, AI-faked celebrity endorsements and more, makes it harder than ever for journalists — and their audiences — to discern fact from fiction.

For journalists, verifying these fakes is a growing challenge. Spreading misinformation, however inadvertently, can undermine trust in journalism. Moreover, journalists can themselves be deepfaked and impersonated, making the need to safeguard their identities and reputations more important than ever.

(3) Automated censorship

Authoritarian regimes are increasingly deploying AI-driven moderation tools to suppress independent voices online. Algorithms can quickly detect and remove politically sensitive content. China’s “Great Firewall” is the most prominent example of this, but other nations are also following suit.

Freedom House observes that AI can enable this type of activity to happen “at a speed and scale that would be impossible for human censors or less sophisticated technical methods.” Moreover, this can be done in a manner that can be hard to detect, “minimizing public backlash and reducing the political cost to those in power.”

This can undermine independent journalism, and public discourse, making it increasingly difficult to share content that governments do not want to see online. 

(4) Economic pressures and job losses

Beyond direct action against journalists and newsrooms, AI is indirectly threatening media freedom by reshaping the economics of the news industry. Newsrooms, especially those already under financial strain, are increasingly using AI tools to automate or speed up tasks such as content generation, editing, and distribution. In tandem, the growing capabilities of AI may lead to job losses and the hollowing out of original reporting.

Some audiences appear to be alive to this threat. A 2024 survey by the Pew Research Center found that 59% of Americans say AI will lead to fewer jobs for journalists in the next two decades, with views currently mixed on what this means for the type of news that is produced and people can access. 

If AI accelerates the shift away from original reporting toward mass-produced content, a long-term consequence will not just be fewer newsroom jobs, but a diminished public square, both of which are issues we should all be concerned about.

How AI can help protect and promote media freedom

Despite these very real threats, it’s not all doom and gloom. AI also holds promise as a tool that can potentially strengthen journalism and help defend media freedom. 

(1) Identifying and combating disinformation

Although journalists should be wary of an overreliance on AI tools, the technology can support efforts to detect deepfakes and misinformation, as well as improve wider fact-checking efforts.

These tools may not yet be as effective for less widely spoken languages, but that may change as the technology evolves. Moreover, as AI’s influence in shaping, creating and disseminating disinformation grows, AI will increasingly need to become part of the journalist’s toolkit in order to counter it.

Products like NewsGuard’s AI Safety Suite, “aided by proprietary machine-learning processes, identify false claims spreading online in real time, globally.” We can expect these types of AI services to become increasingly integrated into newsroom workflows. As the Indian journalist Karen Rebelo told Nieman Lab last year, given the growing complexity of our information ecosystems, “you can no longer solely rely on your human skills as a fact checker.”

(2) Protecting journalists' safety and identity

AI is also being used by some news outlets to enhance digital security. Last year in Venezuela, independent media outlets protected the identities of their journalists by creating a show using AI-generated anchors.

 

Image courtesy of Connectas.

 

Meanwhile, at the International Journalism Festival in Perugia last month, an initiative called JESS (Journalist Expert Safety Support) was announced. The AI-powered tool will source security tips from different media organizations and NGOs and distribute them to newsrooms that may not otherwise have access to this level of guidance. The tool is expected to roll out globally next year, with availability in multiple languages.

These quick examples demonstrate how AI, when used creatively and responsibly, can offer new forms of protection for journalists operating under threat. We can expect to see more advances in this space as newsrooms and NGOs consider how AI can be a friend, as well as a potential foe. 

(3) Amplifying underreported stories

AI can help journalists discover stories hidden in public documents and data, or by sending alerts when something out of the ordinary has happened. U.K. academic Paul Bradshaw offers a raft of examples of AI in action in this way, on his Online Journalism Blog.

Not only can AI help to identify patterns and anomalies, but it can also help to ensure that reporting on these matters reaches the widest possible audience. The ability of generative AI to create audio summaries, or translate content into different languages, are just two of the ways stories can be disseminated in fresh ways and to those who might otherwise not consume it.

Collectively, these efforts — uncovering important stories and getting as many eyeballs on them as possible — will be fundamental to the continued health of journalism. AI can therefore strengthen both the impact and the distribution of journalism, each of which will be integral to its future.

Looking ahead

Given AI’s potential to both harm and help journalism, it is essential that we continue to have an active dialogue about the pros and cons of this fast-changing technology. While AI offers tools for efficiency and new products, it is not a panacea for issues of news avoidance, low levels of trust, or the economic woes of the journalism industry. 

Similarly, although AI can present challenges for the work that journalists do and how they do it, some of these issues can be mitigated. To do that, it is essential to understand how AI works, how it is evolving, and what it means for journalism. Newsrooms and journalists must have access to the right training, policies, and research to meet these challenges.

The future of media freedom in the AI era will be shaped by choices made by governments, tech companies, civil society — and by journalists and media organizations. Pooling expertise will be essential, as will voices that continue to champion media freedom at a time when AI is shaping, and redefining, this critical issue and the world around us.

I look forward to witnessing, and potentially being part of, that conversation.


Photo by Igor Omilaev on Unsplash.