While bots are one of the major trends in digital communication, many users don’t know much about what they are — or their power to influence public opinion through spreading disinformation and propaganda.
According to Giovanni Luca Ciampaglia, a research scientist at Indiana University, a bot is a "simple computer program that operates one or multiple social media accounts to perform some action on social media." Although many of these programs are used for marketing, the misuse of bots is challenging journalists in new ways.
Types of bots
One common type of bot is a chatbot, which is programmed to respond to users’ messages. Chatbots have been used to improve companies’ customer service for many years, saving them money and creating faster, more effective service.
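At its simplest, a customer-service chatbot of this kind matches keywords in an incoming message against a table of canned replies. The sketch below is a minimal, hypothetical illustration of that idea; the keywords and replies are invented, and real customer-service bots layer natural-language understanding and human hand-off on top of this pattern.

```python
# Minimal sketch of a rule-based chatbot: match a keyword in the
# user's message and return a canned reply. Keywords and replies
# here are invented for illustration.

RULES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "human": "Connecting you with a support agent now.",
}

def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand. Try asking about hours or refunds."

print(reply("What are your HOURS on Monday?"))  # -> "We're open 9am-5pm, Monday through Friday."
```

Because every reply is scripted, bots like this are cheap to run around the clock, which is the cost saving companies are after.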
Media outlets like CNN have created their own chatbots to deliver news in a one-on-one messaging format. Bots may send users a morning briefing, answer questions about a specific matter or even recommend recipes.
Chatbots usually inform users that they’re a computer; however, some bots are programmed to conceal this fact in order to appear real. This is common among social bots, a type of chatbot that automatically produces content on social media, usually in support of campaigns, brands, politicians and more.
“It usually tries to act like a human,” said Ciampaglia. “Some of [these bots] are used to follow certain people on Twitter, and others are used to spread information [that] supports a group of people.”
As a result, influencers and politicians often have millions of fake followers. Although these bots are commonly used to increase users’ social media followings, they are also an effective tool to spread false information.
Other kinds of accounts that spread fake news exist, but they aren’t computer programs.
“Accounts that are created by humans and used for malicious purposes are called cyborgs,” said Ciampaglia. These are real people who get paid to create dozens of fake accounts that are used to influence and manipulate public opinion on social media, especially on Facebook and Twitter.
Many marketing agencies and governments make money with cyborgs by manipulating information about politicians, brands or other powerful groups. The Russia-based Internet Research Agency and Victory Lab in Mexico both create and spread disinformation by using cyborgs to manipulate conversations on social media.
These groups hire young people to create fake accounts, paying them for ambiguous or falsified services to keep the operation covert. Although using cyborgs to manipulate public opinion is neither ethical nor legal, many young people take the work, mostly for the money rather than out of political conviction.
Alberto Escorcia is a Mexican internet activist who has fought these groups of cyborgs, which he calls “decks,” since 2010. As a result of his work against fake users, he received death threats to the point that he was almost forced to leave the country.
“Bots fill social networks with messages either supporting or fighting a cause,” said Escorcia. “It’s a brute force task, inundating social media, creating trending topics and making it seem real.”
These decks work in a very organized way, collaborating between fake accounts in order to fool users on social networks. “The simulation they create is massive,” said Escorcia. “Many times you just need thousands of bots commenting on or sharing the same content, and someone will think it’s legit, even if it doesn’t make sense.”
Bots: the secret of fake news
Malicious actors use bots to manipulate the conversation during political events and elections. Bots may have influenced the outcomes of the 2016 U.S. presidential election and the Brexit referendum, and during Mexico's 2018 elections, millions of bots spreading false information about the candidates were detected.
However, Escorcia doesn’t see bots as the only cause behind fake news. “Bots are the last link of the chain,” he said. “The major responsibility falls on Twitter, Google and Facebook, because they have the resources to detect fake accounts.”
But because bots bring millions of dollars in ad revenue to tech companies and social networks, these sites have little incentive to take action against them, Escorcia said.
“Their main income is ads,” said Escorcia. “It doesn’t matter if they are promoting fake news.”
“The use of bots is not decreasing; on the contrary, it is increasing,” said Ciampaglia. “Not only because of the marketing opportunities, but also to manipulate the public opinion.” These bots are also getting smarter, he said.
Escorcia is concerned that bots, when combined with artificial intelligence, will be used as information weapons, putting entire countries at risk of manipulation. He believes that in 10 years, it will be even harder to distinguish between reality and lies.
Media outlets are facing a potential crisis. Reporters need to be ready by educating their readers about fake news, learning more about technology and forming verification groups during big events such as elections. One example is Verificado 2018, a fact-checking initiative led by several newsrooms in Mexico. The team used platforms like WhatsApp to debunk fake news during the national elections.
“Investigative journalism should take this issue seriously. It may look like science fiction, but it’s a direct threat against the industry,” said Escorcia. “Journalists don’t have to give in to an industry that is willing to pay money to distort reality.”
The only way to prepare for this scenario is to team up with technologists like Escorcia and Ciampaglia, developing tools and techniques to fight bot armies by combining investigative and coding skills.
Fortunately, that work is already underway. Ciampaglia and a team of researchers at Indiana University’s Observatory on Social Media have developed tools to recognize fake accounts across social networks. Their tools, Hoaxy and Botometer, give journalists and everyday users an easy way to flag low-credibility websites and detect automated accounts.
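To give a feel for how automated-account detection works, the sketch below scores an account on a few behavioral signals: posting rate, follower-to-following ratio and profile completeness. This is a hypothetical heuristic for illustration only, not Botometer's actual method, which uses supervised machine learning over a far larger set of features.

```python
# Illustrative heuristic for "bot-likeness" scoring. The signals and
# thresholds below are invented for this sketch; real detectors like
# Botometer train machine-learning models on hundreds of features.

def bot_likeness(tweets_per_day: float, followers: int,
                 following: int, has_profile_photo: bool) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    score = 0.0
    if tweets_per_day > 72:
        # Sustained high-volume posting is hard for a human to keep up.
        score += 0.4
    if following > 0 and followers / following < 0.1:
        # Follows far more accounts than follow it back.
        score += 0.3
    if not has_profile_photo:
        # Default or empty profiles are common among bulk-created accounts.
        score += 0.3
    return min(score, 1.0)

print(bot_likeness(tweets_per_day=150, followers=12,
                   following=800, has_profile_photo=False))  # -> 1.0
```

Even this toy version shows why detection is an arms race: each signal is easy for a bot operator to game once it becomes known.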