In “Hated in the Nation,” an episode of the dystopian TV series Black Mirror, social-media users use the hashtag #DeathTo to seek vengeance on individuals who have violated social norms. The backlash against Facebook in the aftermath of the Cambridge Analytica scandal has prompted a similar movement, except in this case the target is the social-media platform itself.
The #DeleteFacebook campaign gives voice to widespread concern over the lack of privacy in digital environments. The movement also articulates the outrage of a public that feels its needs are quietly subordinated to the company’s financial goals.
The irony of the #DeleteFacebook movement is obvious. The use of hashtags, after all, is a convention that emerged on social media as a way to organize collective attention around information streams. Now, hashtags are being used as a sort of self-destruct message.
A different version of the same paradox arose during the 2011 Occupy campaign, which sought to challenge the establishment but also relied, for the most part, on media channels owned by the same actors the movement was criticizing. You can try to hijack a network of private interests, but it is difficult to do so without in some way serving those interests as well.
The #DeleteFacebook movement is a form of online activism that is ultimately self-defeating. But there is an alternative way to mobilize against privacy invasion. Instead of using social media to promote solipsistic solutions like opting out of the service, people who care about privacy should harness the tremendous coordinating power of Facebook to push for real change.
Inescapable network effects
Social media are powerful tools for mobilization because they allow calls for action to snowball from small clusters of people to larger audiences. Momentum can build very quickly in networks: They enable the chain reactions and feedback effects that make a collection of people dance to the same beat. Spontaneous coordination in networks, research shows, is what makes it so easy for emerging hashtags like #DeleteFacebook to swiftly gain publicity.
The same features that make networks efficient platforms for organizing campaigns, however, also make social media difficult to leave, even for ethical reasons. The value of a network grows with the number of users who belong to it; the cost of leaving is losing the ability to coordinate with that large pool of people, especially when there is no viable alternative.
Individual decisions to delete Facebook accounts are, in other words, unlikely to have a broader commercial or political impact. Defectors sink into the depths and become untraceable (at least, to this particular company), but they also detach themselves from a current that gives force to collective efforts.
Collective power for collective good
For this reason, the best way to change Facebook is not to delete accounts, but to push for greater democratic control over the platform. This could take a variety of forms. One form might be to carve out space on the platform for users to take part in decisions about Facebook policy. Truly democratic engagement, in any form, will require something more ambitious than “likes.”
User data generates massive profits for Facebook, so users should have a greater say over how that data is employed. That requires not only greater transparency about how the platform uses our information, but also a window into the inner logic of the algorithms operating on social media. The public needs to understand how those algorithms parse information to make decisions about, for example, who is exposed to which content, and how much of a danger those decisions pose to privacy.
In the wake of the recent scandal, Mark Zuckerberg admitted, “I actually am not sure we shouldn’t be regulated.” As many have suggested for years, Facebook has come to resemble a utility, and, if this is the case, it should be regulated accordingly.
This is not to say that democratizing Facebook is a panacea for all that ails it. People are notoriously fallible decision-makers, and their actions often produce unintended consequences. For example, the spread of false news on Twitter derives more from human decisions to share certain messages than from propagandistic efforts carried out by engineered bots – algorithmic agents designed to automate tasks like sharing and propagating messages.
Similarly, while algorithmic curation restricts the range of conflicting political views that users are exposed to on Facebook, research has found that social filtering matters more than algorithmic filtering: your choice of friends, in other words, limits your exposure to challenging content more than the platform’s automated decisions about how to present information do. In both cases, human decisions are at least as consequential as algorithmic ones. Users should still be given greater control over Facebook; the point is that this alone will not fix it.
For those concerned about privacy and transparency in the digital age, the challenge moving forward is not to get rid of social media. Rather, we should be thinking about how to channel the social media “hive” as effectively as the fictional mob in “Hated in the Nation” – only in this case, for constructive, rather than destructive, ends.