Facebook’s design is quietly training us all to be conspiracy theorists

Do you know what you’ve done?
Image: Reuters/Mariana Bazo

Recently, tech blog BGR graciously linked to an older post of mine showing that the scale at which fake news stories trend on Facebook can dwarf traditional news in terms of shares. The BGR story ended with this paragraph:

On a related note, it stands to reason that most individuals prone to believing a hyperbolic news story that skews to an extreme partisan position likely already have their minds made up. Arguably, Facebook in this instance isn’t so much influencing the voting patterns of Americans as it is bringing a prime manifestation of confirmation bias to the surface.

As regular readers know, my core expertise is not in data analysis of Facebook, but in how learning environments (and particularly online learning environments) affect the way users think, act, and learn. A long time ago I was an online political organizer, but my day job for many, many years has been the investigation and design of net-enabled learning experiences.

The BGR conclusion is a common one, and it intuitively meshes with our naive understanding of how the mind perceives truth. I see something I agree with and I reshare it—it doesn’t change my mind, because of course I already believed it when I reshared it. However, from a psychological perspective there are two major problems with this.

Saying is believing

The first problem is that saying is believing. This is an old and well-studied phenomenon, though perhaps understudied in social media. So when you see a post that says “Clintons suspected in murder-suicide” and you retweet or repost it, it’s not a neutral transaction. You, the reposter, don’t end that transaction unchanged. You may initially post it because, after all, “Whoa if true.” But the reposting shifts your orientation to the facts. You move from being a person reading information to someone arguing a side of an issue, and once you are on a side of the issue, no amount of facts or argument is going to budge you. This may have been built into the evolution of our reason itself. In this way, going through the process of stream curation is at heart a radicalizing process.

I’ve said many times that all social software trains you to be something. Early Facebook trained you to remember birthdays and share photos, and to some extent this trained you to be a better person, or in any case the sort of person you desired to be.

The process that Facebook currently encourages, on the other hand, of looking at these short cards of news stories and deciding immediately whether or not to support them, trains people to be extremists. It takes a moment of ambivalence or nuance and, by design, pushes the reader to go deeper into their support for whatever theory or argument they are staring at. When you consider that people are being trained in this way by Facebook for hours each day, that should scare the living daylights out of you.

It’s worthwhile to note as well that the nature of social media is that we’re more likely to share inflammatory posts than non-inflammatory ones, which means that each Facebook session is a process by which we double down on the most radical beliefs in our feed.

In general, social media developers use design to foster behaviors that are useful to the community. But what is being fostered by this strange process that we put ourselves through each day?

Think about this from the perspective of a Martian come to Earth, watching people reach for their phones in the morning and scroll past shared headlines, deciding in seconds for each one whether to re-share it, comment on it, or like it. The question the Martian would ask is: “What sort of training software are they using, and what does it train people to do?”

And the problem is that—unlike previous social sites—Facebook doesn’t know, because from Facebook’s perspective it has two goals, and neither is about the quality of the community or the well-being of its members. The first goal is to keep you creating Facebook content in the form of shares, likes, and comments. Any value you get out of it as a person is not a direct Facebook concern, except as it impacts those goals. And so Facebook is designed to make you share without reading and like without thinking, because that is how Facebook makes its money and builds lock-in: by having you create social content (and personal marketing data) it can use.

The second Facebook goal is to keep you on the site at all costs, since this is where they can serve you ads. And this leads to another problem we can talk about more fully in another post. Your average news story — something from the New York Times on a history of the Alt-Right, for example — won’t get clicked, because Facebook has built their environment to resist people clicking external links. Marketers figured this out and realized that to get you to click they had to up the ante. So they produced conspiracy sites that have carefully designed, fictional stories that are inflammatory enough that you *will* click.

In other words, the conspiracy clickbait sites appeared as a reaction to a Facebook interface that resisted external linking. And this is why fake news does better on Facebook than real news.

To be as clear as I possibly can—by setting up this dynamic, Facebook simultaneously set up the perfect conspiracy replication machine and incentivized the creation of a new breed of conspiracy clickbait sites.

There will be a lot of talk in the coming days about this or that change Facebook is looking at. But look at these two issues to get the real story:

  • Do they promote deep reading over interaction?
  • Do they encourage you to leave the site, even when the link is not inflammatory?

Next time you’re on Facebook you’ll notice there are three buttons at the bottom of each piece of content, outlining the actions you can take on that content. None of them says “read”.

Facebook makes conspiracies familiar, and familiarity equals truth

It would be terrifying enough if this existential problem—that Facebook is training people to be extremists and conspiracists through a process of re-sharing—were the worst bit. But it’s not. The bigger problem is the far larger number of people who see the headlines and do not reshare them.

Why is this a problem? Because for the most part, our brains equate “truth” with “things we’ve seen said a lot by people we trust.” The literature in this area is vast—it’s one of the reasons, for example, that so many people believe that global warming is a hoax.

If you think about it, it’s not a bad heuristic for some things. If someone tells you that there’s better farmland over the mountain, and then another person tells you that, and then another person, your mind starts seeing this as more likely to be true than not, especially if the folks telling you this are folks you trust. If someone tells you that there are gold mines under every house in the town, but they can’t be touched because of federal laws—well, that’s not something you hear a lot, so that’s obviously false.

You say—no, that’s not it at all! The gold mines are ridiculous, that’s why I don’t believe in them! The good farmland is logical! I’m a logical being, dammit!

I’m sorry, you’re not, at least on most days. If enough people told you about the gold mines, and every paper in the town made passing reference to the gold mines, you would consider people who didn’t believe in the hidden gold mines to be ridiculous. Want examples of this? Look at the entire sweep of history.

Or look at Lisa Fazio’s work on the illusory truth effect. The effect of familiarity is so strong that giving you a multiple-choice question about something you know is wrong (“The Atlantic Ocean is the largest ocean in the world, true or false?”) can actually increase your doubt that you are right, and even convince you of the false fact. Why? Because it counts as exposure to the idea that other people believe the lie. This is why Betteridge Headlines are so dangerously unethical. If I publish headlines once a week asking “Is There Gold Buried Under Sunnydale?” above an article that says probably not, eventually, when you are asked that question by someone else, your mind will go—you know, I think I’ve heard something like that.

All it takes is repetition. That’s it.

Don’t believe me? Go check out how many people believe Obama is a Muslim. Or ask yourself a question like “Does Mike Pence support gay conversion therapy?” without looking up anything and then ask yourself how it is that you “know” that one way or the other. If you know it, you know it because it seems familiar. A lot of people you trust have said it. That’s it. That’s all you got.

If Betteridge Headlines are borderline unethical, then Facebook is well over that line. The headlines that float by you on Facebook for one to two hours a day, dozens at a time, create a sense of familiarity with ideas that are not only wrong, but hateful and dangerous. This is further compounded by the fact that what is highlighted on the cards Facebook shows you is not the source of the article, which is so small and gray as to be effectively invisible, but the friendly, smiling face of someone you trust.


The article that intimated Clinton may have murdered an FBI agent and his wife and burned them in their home to conceal the evidence was shared by over half a million people. If we assume that each share was seen by 20 people, that means over 10 million people were exposed to the idea that this actually happened. If we assume that there is a lot of overlap in social networks, then many of those people were exposed to it repeatedly, and it came from many people they trusted.
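To make that back-of-envelope arithmetic explicit, here is a minimal sketch of the exposure estimate. The share count and the 20-views-per-share ratio are the rough assumptions above, not measured Facebook data:

```python
# Back-of-envelope exposure estimate from a share count.
# Both inputs are assumptions from the text, not measured Facebook data.
shares = 500_000          # "over half a million" shares of the story
views_per_share = 20      # assumed 20 views for every share

estimated_exposures = shares * views_per_share
print(f"Estimated people exposed: {estimated_exposures:,}")
# Estimated people exposed: 10,000,000
```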

And that’s just one headline. People expose themselves to Facebook multiple times a day, every single day, see headlines making all sorts of crazy claims, and file them away in their famil-o-meter for future reference. We’ve never seen anything on this scale. Not even close.

If Facebook were merely a tool for confirmation bias, that would kind of suck. It would. But that is not the claim. The claim is that Facebook is quite literally training us to be conspiracy theorists. And given the history of what happens when conspiracy theory and white supremacy mix, that should scare the hell out of you.

I’m petrified. Mark Zuckerberg should be too.