How to avoid another Russian-troll misinformation nightmare in 2020

Russian trolls and online bots work in insidious ways.
Image: AP Photo/Jon Elswick

We typically think of social media platforms—Facebook, Reddit, Twitter, etc.—as distinct from one another.

We tend to visit each one at a time, after all. Still, messages from one platform often reappear on others. And if you’re trying to get attention on multiple social media platforms, chances are you’ll coordinate your messages, or link your accounts across platforms, to create a consistent and identifiable “brand.”

This is true for most users. But it’s also true for the trolls and bots that drive disinformation campaigns.

This phenomenon helps explain what happened when the Russian Internet Research Agency (IRA) targeted US social media during and after the 2016 US presidential election. During that process, Russia’s troll army was—and likely still is—active on multiple platforms, including those three mentioned above.

Time-series analysis, Russian troll edition

According to a study I published this year at the University of Wisconsin-Madison, these Russian trolls are more coordinated than we thought when the news first broke.

My research led to two important findings:

First, from 2015 to 2017, the Russian IRA posted many English-language tweets (nearly 1.9 million).

This may be because Twitter is a popular space for journalists and political actors to glean information, which provides an opportunity to amplify disinformation. The IRA also posted often on Reddit (about 12,600 comments and posts) and paid for many Facebook ads (more than 3,000).

Second, using a statistical technique called time-series analysis, the study shows that Reddit activity preceded Twitter activity. In other words, when Russian trolls posted a lot on Reddit, Russian trolls on Twitter were likely to tweet more the following week.
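To make the lead-lag idea concrete, here is a minimal sketch of the kind of test involved. This is not the study’s actual code or data: the weekly post counts below are hypothetical, and the Granger-style test (via the Python statsmodels library) is just one common way to check whether one activity series helps predict another a week later.

    # A rough illustration of a lead-lag (Granger-style) test between two platforms.
    # The weekly counts below are hypothetical, not data from the study.
    import pandas as pd
    from statsmodels.tsa.stattools import grangercausalitytests

    activity = pd.DataFrame({
        "reddit_posts":  [12, 30, 25, 60, 44, 80, 35, 90, 70, 65, 50, 85],
        "twitter_posts": [200, 240, 510, 480, 900, 700, 1200, 650, 1400, 1100, 980, 1300],
    })

    # Test whether last week's Reddit activity helps predict this week's Twitter activity.
    # statsmodels treats the second column as the candidate predictor of the first.
    grangercausalitytests(activity[["twitter_posts", "reddit_posts"]], maxlag=1)

If the test rejects the null hypothesis at a one-week lag, that is consistent with Reddit activity leading Twitter activity, as the study found.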

One explanation is that Russian IRA trolls used Reddit to test-drive their messages. Subreddits, the topically-focused Reddit communities, make it easy to find and talk to politically like-minded individuals, and therefore provide ideal testing grounds for disinformation.

When a message was successful on Reddit, the trolls would then post a similar (or even the very same) message on Twitter.

For example, on June 2, 2016, the Russian Reddit troll account “Maxwel_Terry” posted a link titled “If Hillary is nominated I’m voting against any Democrat who supported her” to the subreddit “/r/HillaryClintonSucks.” The link directed readers to a left-leaning blogger who stated, as you may have guessed, that he would not be voting for Clinton.

Three days later, the same link was tweeted by the IRA Twitter account @MissouriNewsUs with the same message, plus the hashtag “#NeverHillary.” (The account, like many other known troll accounts, has since been suspended, so the messages I’m referencing are no longer available.)

Though my study focuses on Facebook advertising as a vehicle for disinformation, a more in-depth look at the evidence reveals that we need to understand the complex relationships between non-paid Russian disinformation on Reddit, Twitter, and Facebook to make sense of it all.

In another example, one Reddit troll account posted a Facebook event on the subreddit /r/Bad_Cop_No_Donut with the caption, “Join the protest against police brutality. April 21 Student Day of Action to #StopPoliceTerror.”

It was posted to Twitter four days later by the Russian troll account @dontshootcom along with the hashtag “#stoppoliceterror.”

This Facebook event was shared by Russian troll Twitter and Reddit accounts.
Image: The Wayback Machine Archive/Courtesy of Josephine Lukito

The event was planned by the Stop Mass Incarceration Network. The organization does not appear to be connected to the IRA operation, and likely did not even realize Russian trolls were among its followers.

This pattern was also noticeable among Russian IRA accounts that were explicitly related (e.g., a fake organization with multiple social media accounts). One pertinent example is the Twitter account of someone claiming to be named Jenna Abrams, a persona that also maintained a blog and posed as a young, conservative American woman.

Another example is @BlackToLive. Under this account name, Russian IRA trolls created profiles on Twitter, Reddit, Tumblr, Medium, and YouTube (there is an Instagram account that uses the handle, but it’s unclear whether this account is connected to the Russian IRA). The Twitter account also promoted an active email address, BlackToLive@gmail.com.

The user also cross-promoted its (now-defunct) website, blacktolive.org, on both Reddit and Twitter.

Screenshot of the blacktolive.org “About us” page.

Posts from this user were often shared within a few days of each other. First, BlackToLive would post content from its own website on different subreddits, including /r/uspolitics, /r/politics, /r/Bad_Cop_No_Donut, and /r/HillaryClinton. Then four days later, the account posted the content on Twitter.

Looking ahead to 2020

But what does this research from 2016 mean for us now?

With the 2020 presidential election campaign underway, it is likely that the IRA will target US social media again, if it is not already active.

We need to take what we learned from past years to make smarter decisions for our next election. And we can create new communication norms that encourage fruitful online political discourse and discourage troll-ish behavior.

For one, social media organizations need to be more transparent about their process for dealing with false information, especially when it comes to state-sponsored troll accounts. Twitter, for example, regularly releases tweets from state-sponsored information operations, but there is little public knowledge about how those accounts are identified and verified.

Though Facebook has made efforts to be forthcoming about its advertising data, its organic content is far harder to study and therefore harder to authenticate.

Perhaps if we openly recognize that political disinformation cannot be completely eradicated from social media, we can find a better way to measure and account for it.