How to detect Russian bots on Twitter

Through an examination of bot activity related to Russian political discussions, researchers have isolated the characteristics of Russian bots operating on Twitter.

Their findings provide new insights into how Russian accounts influence online exchanges using bots, which are automated social media accounts, and trolls, which aim to provoke or disrupt.

“There is a great deal of interest in understanding how regimes and political actors use bots in order to influence politics,” explains coauthor Joshua Tucker, director of the Jordan Center for the Advanced Study of Russia at New York University.

“Russia has been at the forefront of trying to shape the online conversation using tools like bots and trolls, so a first step to understanding what Russian bots are doing is to be able to identify them,” he adds.

The findings reveal some notable differences between human and automated posts—but also several similarities, which may make bots harder to detect.

“Bots are much more likely to use online platforms while humans frequently use mobile devices,” notes coauthor Denis Stukal, a doctoral candidate in the department of politics. “However, humans and bots are not dramatically different from each other on a number of other features that characterize their tweeting activity—similarities that reveal a relatively high level of bots’ sophistication.”

The researchers focused on two specific periods—February 6, 2014 through October 1, 2014 and January 30, 2015 through December 31, 2015—that were notably consequential in Russian politics. These periods included the Russian annexation of Crimea, the conflict in Eastern Ukraine, and the murder of Russian opposition leader Boris Nemtsov in front of the Kremlin.

Their analysis included approximately 15 million tweets sent from about 230,000 Russian Twitter accounts—including 93,000 that were active during both periods.

Interestingly, of those accounts active in both periods, nearly 63,000 (67 percent) were bots. Moreover, among accounts actively tweeting about Russian politics, on the majority of days the proportion of tweets that bots produced exceeded 50 percent—and this figure increased dramatically around the time of the Russian annexation of Crimea.
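As a point of reference, a daily bot-tweet share of the kind reported here could be tallied along the lines of the short Python sketch below. The tweet records and bot labels are hypothetical placeholders, not the study’s data or method.

    from collections import defaultdict
    from datetime import date

    # Hypothetical (date, is_bot) pairs standing in for real tweet metadata.
    tweets = [
        (date(2014, 3, 1), True), (date(2014, 3, 1), True), (date(2014, 3, 1), False),
        (date(2014, 3, 2), False), (date(2014, 3, 2), True),
    ]

    daily = defaultdict(lambda: [0, 0])  # date -> [bot tweets, total tweets]
    for day, is_bot in tweets:
        daily[day][0] += int(is_bot)
        daily[day][1] += 1

    for day, (bots, total) in sorted(daily.items()):
        print(day, f"bot share of tweets: {bots / total:.0%}")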

Other patterns revealed how bot posts differ from human ones. In addition to distinctions in platform origination (mobile devices for humans vs. the web for bots), which is the single best predictor of whether a tweet comes from a bot, the researchers found the following (illustrated in a brief sketch after the list):

  • Human tweets are more likely to be geo-located.
  • Bots retweet more often than humans do.
  • The most common type of bot is one that tweets news headlines without links to the original source of news.
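To make these signals concrete, here is a minimal Python sketch of a heuristic bot score built from the features named above. It is not the researchers’ classifier: the field names (source, has_geo, is_retweet, has_url), the equal weighting, and the sample data are illustrative assumptions only.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Tweet:
        source: str       # client that posted the tweet, e.g. "web" or "android"
        has_geo: bool     # whether the tweet carries geo-location metadata
        is_retweet: bool  # whether the tweet is a retweet
        has_url: bool     # whether the tweet links to an external source

    def bot_likelihood(tweets: List[Tweet]) -> float:
        """Score an account from 0 (human-like) to 1 (bot-like) using the
        reported signals: web-based posting, missing geo-location, heavy
        retweeting, and link-free headline-style tweets."""
        if not tweets:
            return 0.0
        n = len(tweets)
        web_share = sum(t.source == "web" for t in tweets) / n
        geo_share = sum(t.has_geo for t in tweets) / n
        retweet_share = sum(t.is_retweet for t in tweets) / n
        linkless_share = sum(not t.is_retweet and not t.has_url for t in tweets) / n
        # Equal weights are an arbitrary choice for illustration, not a fitted model.
        return (web_share + (1 - geo_share) + retweet_share + linkless_share) / 4

    # Example: an account posting link-free "headlines" from the web scores high.
    sample = [Tweet("web", False, False, False)] * 8 + [Tweet("web", False, True, True)] * 2
    print(f"bot-likelihood score: {bot_likelihood(sample):.2f}")

In practice the researchers combined many such account- and tweet-level features; this sketch only shows how the individual signals listed above might be turned into a single score.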

“This suggests that an important strategy in the use of bots for the purposes of propaganda might be to promote specific news stories and news media in the rankings of search engines,” says Richard Bonneau, director of NYU’s Center for Data Science, of the latter finding.

However, the findings did not suggest bots are exclusively, or even largely, a tool of the Russian government.

The researchers found that while many bots spread pro-regime information, there may also be anti-regime bots that either disseminate information about opposition activities or criticize and deride the regime.

The researchers report their findings in the journal Big Data.

The data for the project were collected by NYU’s Social Media and Political Participation (SMaPP) laboratory. The INSPIRE program of the National Science Foundation (SES-1248077) supports the SMaPP laboratory.

Source: New York University