What’s happening on Facebook this midterm election is scarier than ever

Another call to fix Facebook.
Image: Reuters/Aaron P. Bernstein

Facebook said on the eve of the 2018 midterms that it removed another batch of accounts engaged in misinformation tactics.

Such announcements are meant to give users and observers some confidence in the platform, after a steady stream of revelations of political manipulation dating back to 2016.

But a deep dive by Columbia University professor Jonathan Albright into tens of thousands of posts and ads provides a grim picture of the state of the threat. “Many of the dangers that were pointed out years ago have seemed to grow exponentially on Facebook, not unlike the other large social media ‘platforms,’” Albright writes in a three-part post on Medium.

These issues, he writes, “involve patterns that have been on the radar of the company’s leadership and American politicians since the last election.” Facebook was not immediately available for comment.

Facebook Groups

Perhaps the most unsettling aspect of Albright’s findings is what’s going on in Facebook groups, or what Albright calls “shadow organizing.” Whereas a lot of the election-related manipulation that we learned about in 2016 played out on pages, he says that groups have become the preferred method of spreading misinformation on the platform (see, for example, the case of QAnon conspiracy theories). Albright writes that he “repeatedly encountered examples of extreme content and hate speech that easily violates Facebook’s terms of service and community standards” (emphasis his).

Groups offer a key advantage: the posts are hidden from the public, making it hard to trace the origin of hateful or false content. At the same time, once posts leave the groups, “they can gain traction and initiate large-scale information-seeding and political influence campaigns.”

Many early posts about the migrant “caravan,” Albright writes, originated in Facebook groups.

Making accountability even harder to enforce, many of the groups have no moderators whatsoever. That lack of oversight also intensifies the hyper-partisan and mendacious nature of the content.

To spread posts that might be banned by Facebook, or simply to avoid detection, people use a variety of techniques, such as telling others to copy and paste individual entries rather than share them, or even basic cryptographic tricks. The result, Albright writes, is “smoke and info-mirrors that gets people engaged and gives them the impression that it’s secret — and that they are part of some kind of clandestine operation for the greater good.”

Albright writes:

So, we can talk about how scary WhatsApp is in other countries, and how Twitter might play a leading role in the United States elections, but it is Facebook’s Groups—right here, right now— that I feel represents the greatest short-term threat to election news and information integrity. It’s like the worst-case scenario from a hybrid of 2016-era Facebook and an unmoderated Reddit.

Alex Jones’ comeback

He also points to a certain illusion Facebook has created about its record of taking down content. The case of Alex Jones, the right-wing conspiracy theorist, is especially telling. After being banned from Facebook in August, Jones has all but made a comeback, racking up millions of views on two pages that are not the banned “Infowars” page: “News Wars” and “Infowars Stream.” Facebook’s algorithms promoted the content, Albright says, enabling it to reach more people.

“Where were the people working in Facebook’s highly publicized ‘Election War Room?’” Albright asks. Reactive takedowns, as in the case of Infowars, are not nearly enough, he argues.

On a broader level, Albright posits that Facebook’s removals of what it calls “coordinated inauthentic behavior,” such as the October purge of 800 US-based accounts, are likely less about the misinformation itself and more about the fact that these pages had been gaming Facebook’s system to inflate their metrics.

One website, Right Wing News and its affiliates, saw more interactions in a three-year period on Facebook than the pages of the New York Times, Washington Post, and Fox News combined.

The problems with ad transparency

The third category of problems Albright points out is a lack of “recursive accountability.” In other words, Facebook has no specific process for monitoring Pages that run political campaigns after they have been initially verified. This, he writes, renders Facebook’s election-transparency efforts moot.

Each Facebook page has at least one manager and can have many administrators who handle content: posting it, moderating it, running ads, and so on. Right now, Facebook requires only one manager account to be verified. Meanwhile, the pages Albright found have dozens of managers, many of them based outside the US. What’s more, the number of these managers and their countries of origin fluctuate significantly, and there is no way for users to know this unless they have been monitoring a page on their own. These are influential pages with many followers (LifeZette, a conservative lifestyle outlet, has a million followers, for example) that run political ad campaigns. They also run ads that are not labeled as political, some of which ask users in “polls” for sensitive information such as their income (one targeted ad about tax burdens includes an “affordability calculator”).

All of this, Albright writes, forms “a consistent and troubling pattern of opaque reporting on the accounts that are managing the Pages of influential actors who are spending money to actively shape American politics and election results.”