How media in Taiwan has adapted to combat electoral disinformation

by Rowan Philp
Apr 17, 2024 in Combating Mis- and Disinformation
Taiwan's flag on a city street

As dozens of countries brace for an onslaught of local and foreign digital disinformation campaigns in their elections in 2024, Taiwan’s recent experience offers useful lessons for journalists and democracy defenders elsewhere — as well as some much-needed hope.

Despite its January 13 election being assailed by a deluge of online disinformation — particularly, false voter fraud claims and dire warnings of future war from bad actors in China — new research and independent journalistic accounts reveal that local media, election authorities, and fact checkers in Taiwan were largely successful in repelling assaults, with techniques such as “prebunking,” smart communications regulation, and a deliberate focus on media trust. After the vote, Lai Ching-te — head of the pro-sovereignty Democratic Progressive Party, which is opposed by Communist China’s autocratic government — was elected president.

However, Taiwan’s election revealed disinformation trends for journalists and fact checkers in other countries to flag. These patterns included the use of generative AI in deepfakes, propaganda amplification by popular YouTube influencers, and foreign information operation narratives designed to undermine trust in democracy itself, rather than to promote individual candidates.

In a webinar titled “Disinformation and AI: What We’ve Learned from the 2024 Elections So Far,” organized by the Thomson Foundation, three expert panelists shared insights for journalists on ongoing disinformation threats, and lessons from Taiwan.

The speakers included Professor Chen-Ling Hung, director of the Graduate Institute of Journalism at National Taiwan University; election expert Rasto Kuzel, executive director at MEMO 98; and Jiore Craig, resident senior fellow at the Institute for Strategic Dialogue. The session was moderated by Caro Kriel, chief executive at Thomson Foundation.

Use CrowdTangle’s Link Checker while you still can

In contrast to the positive takeaways from Taiwan’s resilience, one gloomy note was the panelists’ shared distress at Meta’s scheduled termination of its CrowdTangle tool on August 14, 2024. CrowdTangle has been the best digital tool investigative reporters have had in recent years to track and trace online disinformation campaigns around the world.

“My organization benefited from access to CrowdTangle by Meta, and we are very worried about what will happen after August, when Meta is supposed to stop this tool, which has been absolutely crucial for our research,” explained Kuzel. “What CrowdTangle provided was access to public accounts and public groups, and this is something we will continue to need.”

However, Craig encouraged reporters to treat this disappointing cut-off as an investigative deadline, using the dashboard intensively until then to track current or emerging disinformation campaigns in any 2024 election. In particular, she recommended that reporters make use of the final months of CrowdTangle’s powerful Link Checker extension, which can show Facebook and Instagram posts that mention a link you search for, with visibility into source articles, YouTube videos, and other platforms.

“It is an amazing tool, and one of the only tools we had that helped with attribution,” Craig said of the Link Checker extension. “So install that plug-in while you still can, to look at attribution for any online property or any news source that you’re unsure about.”
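For reporters who prefer to pull this data programmatically while access lasts, the same link lookup can be scripted against CrowdTangle’s API. The snippet below is a minimal sketch in Python, assuming you hold a valid CrowdTangle API token; the /links endpoint and its token, link, and count parameters reflect CrowdTangle’s public API documentation, but verify against the current docs before relying on it. The token value and article URL shown are placeholders.

```python
import requests

CT_TOKEN = "YOUR_CROWDTANGLE_TOKEN"  # placeholder: issued via the CrowdTangle dashboard

def posts_sharing_link(url, count=20):
    """Fetch public Facebook/Instagram posts that share the given URL."""
    response = requests.get(
        "https://api.crowdtangle.com/links",
        params={"token": CT_TOKEN, "link": url, "count": count},
        timeout=30,
    )
    response.raise_for_status()
    # CrowdTangle wraps results as {"result": {"posts": [...]}}
    return response.json().get("result", {}).get("posts", [])

# Example: list the accounts amplifying a suspect article (placeholder URL)
for post in posts_sharing_link("https://example.com/suspect-article"):
    account = post.get("account", {})
    print(account.get("name"), post.get("date"), post.get("postUrl"))
```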

Reporters can still find useful data via the Meta Content Library, and disinformation leads from newer, journalist-made tools such as Junkipedia — which offers access to a dozen social media platforms, including fringe sites, as well as other features. However, Kuzel warned that “the pull-back on data from Meta will also affect Junkipedia.”

(For a detailed breakdown of new election disinformation threats — and the tools to unmask them — please see the Political Messaging and Disinformation chapter in GIJN’s Revised Elections Guide for Investigative Reporters. Also, for tools to tackle audio deepfakes, please see GIJN’s comprehensive tipsheet How to Identify and Investigate AI Audio Deepfakes, a Major 2024 Election Threat.)

Taiwanese public media’s unified response to foreign influence operations

Commissioned by the Thomson Foundation, the report on fighting disinformation in Taiwan’s most recent election — co-authored by Chen-Ling Hung and three colleagues — found both coordinated digital attacks and a largely unified response to the propaganda wave.

It noted that “as the electoral period in Taiwan approached… the troll groups’ messaging became increasingly alarmist, focusing on war rhetoric aimed at intimidating the Taiwanese populace.” According to research by Taiwan AI Labs, among troll group messaging that amplified propaganda from Chinese state media, “the portrayal of an imminent Chinese military threat was most prevalent, accounting for 25% of the echoed content. This was closely followed by narratives suggesting the US was manipulating Taiwan into a precarious military confrontation, which formed 14.3% of the discourse.”

Likely foreign-origin disinformation also included malicious personal smears, including false claims about one candidate’s “illegitimate child,” and the release of a 300-page ebook making false claims about the incumbent president, including alleged sexual misconduct.

The ebook was initially confusing for Taiwanese foreign information ops researchers, because — as DoubleThink Lab’s Tim Niven told Foreign Policy magazine — “This is the social media age. Nobody’s reading a spammy book that someone spammed into their email.”

But researchers quickly saw that the ebook actually served as “a script for generative AI videos” and was used as a supposedly legitimate source for misinformation campaigns. These took the form of dozens of videos on Instagram and TikTok, in which AI-generated “avatar” newscasters and influencers cited the book as an apparently authoritative source and read out short sections of its text.

The election research included interviews with five major news organizations on their strategy for countering disinformation: Taiwan Public Television Service (PTS), Central News Agency (CNA), Radio Taiwan International (RTI), Formosa Television (FTV) and TVBS Media. The authors stated that public media tended to collaborate with fact-check organizations, and had “demonstrated a strong commitment to authentic news, using advanced technology and various strategies [to identify and debunk disinformation].” They noted that some commercial media “struggled more with disinformation [than public media] due to political biases and profit motives,” but that these outlets generally made internal verification efforts and applied traditional journalism questions, particularly to check audio and video claims.

The report cited the head of CNA’s newsroom saying that “posts from TikTok, which is regarded as deeply influenced by the Chinese government, were hardly cited by CNA’s news coverage.”

Taiwanese fact checkers generally responded quickly to suspicious claims — but sometimes not fast enough. “What struck me in the report was one example of an AI deepfake in the election — but it took seven days for a definitive answer to come out,” noted Kriel. “That is really lost time, when conspiracy theories flourish.”

“We’ve also seen deepfakes in other country contexts, from Pakistan to the UK, Slovakia, the US, and elsewhere, and also outside of the election context — in Sudan, to promote the aims of people engaging in the civil war there,” Craig explained. “It’s taking an online threat landscape that was already bad, and making it worse.” (See GIJN’s recent webinar on Investigating Elections: Threat from AI Audio Deepfakes.)

Despite some weak points and limited newsroom resources, Hung said early messaging and a combined effort between sectors had largely protected Taiwan’s election information environment from being overrun with false claims and distorted messaging.

“It needs cross-sector cooperation to fight disinformation — fact checkers, government, traditional media, digital platforms, and civil society,” explained Hung. “Advanced techniques and tools were used to combat disinformation, and some organizations invested in techniques to solve AI disinformation, but it is not enough — we need to invest more.”

Hung said voters were generally primed to be on the lookout for false claims and misleading methods thanks to preliminary warnings by media and civil society in the weeks and months before election day, creating a “gradual improvement of Taiwanese people’s awareness and vigilance against disinformation.”

Why earned media trust is the ultimate defense

“The example from Taiwan is a great demonstration of the power of trusted messengers in responding to AI-generated disinformation threats,” Craig said. “Media or any messenger that earns its audience’s trust has the opportunity to make impactful choices when a disinformation attack presents.”

She added: “To me, earning trust means both disclosures and transparency, as well as prioritizing the voter audience — over only a peer audience, for example — to meet voters where they are receiving information in 2024. For example, shorter-form over only longer-form, and radio, podcasts, etc.”

The speakers noted that consistent accuracy in other election reporting is crucial to earn the credibility needed to debunk “flashpoint” disinformation efforts on the eve of voter registration deadlines and election days, when the threat of deepfakes and inflammatory claims is most acute.

Rather than attempting to create political converts, Kuzel said the goal of most election disinformation campaigns in 2024 was general political disengagement “and to kill activism.”

Craig agreed. “As bad actors aim to break down our trust, that makes us insecure, and that makes us emotional — and then we’re tired,” she explained. “And when we’re tired, we’re easier to control. It’s pushing us into a disengaged place.”

Kuzel said anticipating, identifying, and prebunking damaging election falsehoods is feasible because bad actors often tip their hand months before the election cycle begins in earnest. Recognizing and formulating responses to misleading messaging is key, because early misinformation themes can otherwise take root.

“What we saw in the 2020 US elections was that manipulative narratives started circulating one year before the elections, with [Donald] Trump saying there may be manipulations of the postal vote — and then amplifying these claims afterward, when he did not recognize the results,” Kuzel noted. “We need to understand the threats, and inoculate the public against these coming campaigns.”


Photo by Lisanto 李奕良 on Unsplash.

This article was originally published on GIJN and republished on IJNet with permission.