Ban the bot: the true force behind malicious disinformation online

Rhodri Marsden speaks to university professor Kathleen Carley about what is needed to combat bots on the internet

The Twitter app icon on a mobile phone. Twitter's Birdwatch pilot, unveiled in January 2021, enlists a preselected group of US users to flag and annotate misleading tweets. AP Photo / Matt Rourke

The phenomenon of climate change and its causes are broadly agreed upon by scientists. The same, however, cannot be said for the public. The sheer volume of dissenting voices and polarised opinion on the internet caused one postdoctoral researcher at NYU Abu Dhabi, Thomas Marlow, to investigate one of the suspected culprits – automated bots.

His findings, published last month, are striking: bots were responsible for 25 per cent of all Twitter posts on the topic over a two-month period. That's machines directed by humans to spread disinformation. The effects of this are hard to measure, but Marlow's suspicion is that it may be draining support from policies to address climate change.

Intentionally malicious bot activity is by no means limited to this topic. A 2019 study by the University of Oxford entitled The Global Disinformation Order found that 70 countries were being targeted by political disinformation campaigns, including the US, the UK and the UAE. But by far the biggest swell of disinformation has surrounded the coronavirus pandemic, with bots responsible for somewhere approaching 50 per cent of Twitter posts on the topic.

“During many disasters and many elections throughout the world, you’ll see a few dozen disinformation stories,” says Kathleen Carley, a professor in the School of Computer Science at Carnegie Mellon University in the US.

"But during this pandemic, there's been over 7,000. Without some better structure [to tackle the problem], we're not going to get back to the lower levels we had prior to the pandemic."

What is the aim of these bots? In his book The Hype Machine, author Sinan Aral puts it simply: "Misleading humans … humans who vote, protest, boycott products, and decide whether to vaccinate their kids."

But the reasons for sowing that confusion are many and various, says Carley. "Some people do it literally because they're bored and they think it's funny. In other cases, it's being spread explicitly as a way of undermining or attacking at-risk groups – ethnic, religious or sexual minorities. It might be done to create civil discord or help grow a cult or a group."

And it works. Bots sow the seeds of confusion, and humans run with it, sharing and retweeting posts far and wide. Researchers have repeatedly shown that disinformation spreads more quickly than the truth. It gets more clicks and more likes because of what Aral terms the "novelty hypothesis".

Surprising information attracts our attention, and we share it because we like to be seen as “in the know”. The mechanism is well established, and we continue to facilitate it. We’re not as good at spotting disinformation as we think we are. “From a political standpoint, conservatives are better at spotting disinformation from liberals,” says Carley. “If you’re a liberal, you’re more likely to spot a bot that is pushing conservative rhetoric.” In effect, we pounce upon and propagate disinformation that confirms our existing biases.

Pinpointing the real-world effects of this isn't easy. It undoubtedly creates false consensuses, as fringe theories come to be seen as widely held. Carley says it can also spill over into extreme behaviour, a measure of how far the bots' influence extends.

“When someone commits an egregious act, you can go into their email and their browser history and see that they were paying attention to this serious disinformation,” she says. “But there are other consequences. When your students don’t check the facts they’re getting from these disinformation sites, they get worse grades. A lot of disinformation can affect you emotionally and psychologically. People start mistrusting institutions in general, which makes them more likely to, say, take the law in their own hands.”

A number of tools, such as Hoaxy and Botometer, have been developed by researchers to distinguish bot posts from real ones. Bots tend to be betrayed by the content they post and the frequency with which they do it. But if researchers’ tools can detect them (with up to 95 per cent accuracy, in the case of Botometer), why can’t the social media platforms? Twitter has, in the past, disputed some claims of bot activity, calling it “misleading and inaccurate”.
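
As a rough illustration of the signals such tools lean on – posting volume, the regularity of gaps between posts and repetitive wording – here is a minimal sketch of a scoring heuristic. It is purely hypothetical: the function name, weights and thresholds are invented for illustration and do not reflect how Botometer or Hoaxy actually work.

```python
from statistics import pstdev
from datetime import datetime, timedelta

def naive_bot_score(timestamps, texts):
    """Toy bot-likelihood score in [0, 1].

    Hypothetical heuristic only: combines posting volume, the regularity
    of gaps between posts, and how repetitive the posted text is.
    """
    if len(timestamps) < 2:
        return 0.0
    times = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    span_hours = (times[-1] - times[0]).total_seconds() / 3600 or 1.0
    volume = min(len(times) / span_hours / 10, 1.0)    # more than 10 posts/hour saturates
    regularity = 1.0 / (1.0 + pstdev(gaps))            # near-constant gaps look automated
    repetition = 1.0 - len(set(texts)) / len(texts)    # duplicate wording across posts
    return round(0.4 * volume + 0.3 * regularity + 0.3 * repetition, 2)

# Demo: ten identical posts, five minutes apart, score close to 1
start = datetime(2021, 2, 1, 12, 0)
stamps = [start + timedelta(minutes=5 * i) for i in range(10)]
print(naive_bot_score(stamps, ["Read the truth they hide!"] * 10))  # ~0.97
```

Real classifiers such as Botometer reportedly draw on far richer feature sets, including account metadata, follower networks and language, which is how they reach the accuracy figures researchers cite.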

Carley, however, says that accounts identified as bots in her own study were actually taken down by Twitter. "They're under a lot of pressure to change, and we've seen some movement in that direction because they don't want to be policed by the government," she says. "So they're trying to do their own internal policing, but that means they become the dictators of what is and is not free speech. Which is not a good thing."

When one bot account is removed, another inevitably replaces it, making the problem incredibly hard to combat. Carley believes that it requires a combination of self-regulation on the part of the social media platforms, some formal government regulation, better education on how to spot disinformation and communities banding together to call it out. At the end of last year, Twitter unveiled plans to introduce, at some point this year, a way for users to distinguish bots from real accounts.

But for their part, bots will become more cunning in order to evade detection, developing personas to look more authentic and even gaining the ability to debate issues. Our job, as users, is simply stated, but far from easy: to keep our wits about us.