Twitter bots favored Trump leading up to election

Two researchers from the University of Southern California's Information Sciences Institute published a study shortly before the 2016 presidential election showing that 15 percent of all political tweets in the preceding month came from roughly 400,000 automated social media bots.

Twitter bots are computer programs, often driven by artificial intelligence, that mimic tweets by human beings. They can retweet and post in response to particular phrases or keywords, and the most sophisticated among them are hard to tell apart from human accounts.
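
The basic mechanics are easy to sketch. The Python fragment below is purely illustrative – the `client` object and its `search` and `retweet` methods are hypothetical stand-ins for whatever API library a real bot would use – but it shows how little code it takes to watch for keywords and automatically amplify whatever matches:

```python
import time

# Phrases the bot watches for; these examples are invented for illustration.
KEYWORDS = ["#Election2016", "Trump", "Clinton"]

def run_bot(client, interval_seconds=60):
    """Poll for matching tweets and retweet them indefinitely.

    `client` is a hypothetical API wrapper with search() and retweet()
    methods; no real library's interface is implied here.
    """
    seen = set()  # remember tweet IDs so nothing is retweeted twice
    while True:
        for keyword in KEYWORDS:
            for tweet in client.search(keyword):
                if tweet.id not in seen:
                    client.retweet(tweet.id)
                    seen.add(tweet.id)
        time.sleep(interval_seconds)  # pause before the next polling pass
```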

Political campaigns have used social media bots before – but never to this extent, and never so extensively in the service of a single candidate's aims as for the GOP nominee, Donald Trump, the researchers wrote. The pro-Trump bot tweets, overwhelming in sheer number, were relentlessly positive about Trump, while the bot tweets on the other side of the political aisle included a significant number of highly negative attacks on the Democratic nominee, Hillary Clinton.

The study – which examined all political tweets over the course of a month in order to separate the automated social media bots from human accounts – made a bit of news at the time, before fading into the background of one of the most stunning presidential elections in American history.
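
The study does not lay out its classifier in this account, and the sketch below is emphatically not the researchers' method – it is a deliberately crude heuristic, with invented signals and thresholds, meant only to illustrate the kinds of features that can separate automated accounts from human ones:

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    retweet_ratio: float       # fraction of posts that are retweets
    account_age_days: int
    has_profile_photo: bool

def bot_score(acct: Account) -> float:
    """Sum a few automation signals; thresholds are invented for illustration."""
    score = 0.0
    if acct.tweets_per_day > 100:     # inhumanly high posting rate
        score += 0.4
    if acct.retweet_ratio > 0.9:      # almost never posts original text
        score += 0.3
    if acct.account_age_days < 30:    # freshly created account
        score += 0.2
    if not acct.has_profile_photo:    # default avatar
        score += 0.1
    return score                      # treat anything above 0.5 as likely automated

# A days-old account retweeting 200 times a day trips every signal.
suspect = Account(tweets_per_day=200, retweet_ratio=0.95,
                  account_age_days=5, has_profile_photo=False)
print(round(bot_score(suspect), 2))  # 1.0 -> flagged as a probable bot
```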

But at least one of the study's important findings is now worth revisiting, in light of the heightened focus on election recounts in three Midwestern states where hacking concerns have been raised – and in light of President-elect Trump's allegations of massive illegal voter fraud in southern states, allegations that have been reported entirely by fake news accounts.

One of the study's findings, overlooked in the initial media coverage just prior to the presidential election, was that a strikingly large share of the automated bot tweets came from just one state in the United States – the state of Georgia.

"Sophisticated bots can make credible accounts by faking profile information, and other metadata, including the geographical provenance, using techniques like GPS spoofing," wrote the USC researchers, Alessandro Bessi and Emilio Ferrara.

"We plotted the U.S. map reporting the volume of tweets generated by each state, respectively for bots and human accounts. The two maps tell significantly different stories: a very strong support from bots is evident in the Midwest and South of the United States, in particular in Georgia," they wrote. "The picture for human-generated tweets' provenance is very different, and it shows that the drivers of the conversation are the most populated states, such as California, Texas, Florida, Illinois, New York state, and Massachusetts."

In fact, according to the USC study, the concentration of automated political social bots originating in Georgia was considerably greater than in any other state in the country.

One has to wonder: why is Georgia suddenly the capital of bot tweets?

The researchers were unable to identify who was responsible for any of the 400,000 artificial intelligence social bots that tweeted night and day about the presidential election, and they had no explanation for why so many of them originated in Georgia. But an incident nearly two years earlier may shed some light on why so many of the AI social bots came from the Peach State.

In June 2015, the New York Times published an extraordinary story by Adrian Chen about how an army of well-paid "trolls" – working from a nondescript office building in St. Petersburg, Russia – had tried to wreak havoc in several targeted American communities and, eventually, across the internet.

One of the Russian troll factory's earliest experiments was an effort to generate fake news about the appearance of the Ebola virus in and around Atlanta, Georgia – and then spread it across the internet through the use of social media, Chen reported.

That fake news effort by the Russian troll factory was followed almost immediately by a second fake news story, also set in Georgia, about an unarmed black woman supposedly shot by police in Atlanta – a story that likewise spread through social media.

Both Georgia stories were fake news that piggybacked on real concerns in America. Both were created by the Russian troll factory, Chen wrote, and spread across social media from there. And both were almost certainly experiments that someone in Russia had decided to run two years out from the presidential election.

"On Dec. 13 (2014), two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread (an earlier hoax) began to post about an outbreak of Ebola in Atlanta," Chen wrote. "The campaign followed the same pattern of fake news reports and videos, this time under the hashtag #EbolaInAtlanta, which briefly trended in Atlanta. (The) attention to detail was remarkable, suggesting a tremendous amount of effort."

Simultaneously, Chen wrote, the Russian troll factory tested a second fake news story in Georgia.

"On the same day as the Ebola hoax, a totally different group of accounts began spreading a rumor that an unarmed black woman had been shot to death by police. They all used the hashtag #shockingmurderinatlanta. Here again, the hoax seemed designed to piggyback on real public anxiety," he wrote.

Chen explained that, in the beginning, he was mystified about who might be behind the two obvious test efforts originating in the state of Georgia. Eventually, though, he pieced it together.

"Who was behind all of this?" Chen asked. "When I stumbled on it last fall...I was already investigating a shadowy organization in St. Petersburg, Russia, that spreads false information on the internet. It has gone by a few names, but I will refer to it by its best known: the Internet Research Agency. The agency had become known for employing hundreds of Russians to post pro-Kremlin propaganda online under fake identities, including on Twitter, in order to create the illusion of a massive army of supporters; it has often been called a 'troll farm.'"

There are several threads here:

  • National security experts have said they believe, with a high level of certainty, that the Russian government was responsible for hacks into at least four different arms of the national Democratic Party prior to the election – hacks that inflicted serious damage on the candidacy of Democratic nominee Hillary Clinton.

  • Adrian Chen's stunning piece in the New York Times in June 2015 about a Russian "troll" factory identified two of its earliest efforts – fake news accounts and social media campaigns tested in Georgia – undertaken to see how well such a coordinated effort might work in America.

  • A day before the 2016 presidential election, the USC researchers published a study reporting that an extraordinarily high concentration of the social media bots helping to coordinate massive retweets of pro-Trump messages in the preceding month had originated from Georgia – the very same state where the early Russian troll factory efforts first began.

It's always dangerous to equate correlation with causation, and to be fair, there may be no connection between the Russian troll factory's early tests of fake news and social media disinformation in Georgia two years before the 2016 presidential election and the onslaught of pro-Trump social media bot activity emanating from the same state in the month before the vote. But it is hard not to link the two sets of activities.

At a minimum, however, I would argue that it is prudent for the national security officials who pay attention to such things to ask just how extensively the Russian government deployed fake news and AI-driven social media bot disinformation in the run-up to the 2016 American presidential election.
