Government-backed information manipulation is here to stay—and it’s taking a turn that will challenge citizens’ ability to use the internet for collective action.
The global reaction to the Russian disinformation campaign—against Americans, Europeans, and Russia’s own neighbors in the Baltics—has thrown our media, politics, and social media platforms into a frenzy.
We are still uncovering just how much influence the Kremlin (and others acting in concert with it) truly had on the American public. As scholars and policymakers work through their assessments and mitigation measures, it’s worth keeping in mind that the campaign targeting the US audience before, during, and after the 2016 presidential race began years ago. While brazen and large-scale, it does not represent the state-of-the-art sophistication that is currently being developed, nor the latest strategies already being used by governments around the world to disrupt democratic processes. The efforts we have seen against the US audience so far are essentially a beta test for what is to come.
Tomorrow’s threats go beyond governments messing with facts or sock-puppeting fringe communities to sow division. Everyday technologies such as targeted Facebook ads and political Twitter bots have us tiptoeing towards a new normal—one in which all sorts of government-backed meddlers routinely work to cast doubt on legitimate social movements and collective actions that challenge the establishment. We must rush to prepare countermeasures for the next wave of attacks.
After the Arab Spring, the Gezi Park protests in Turkey, and other net-powered democratic uprisings around the world, aspiring autocrats got the memo: to stay in power, they’d have to grapple with these large-scale, spontaneous social and political movements—and control the message around the irresistible international media stories and heroic figures they generated. In response, political propaganda efforts have slowly shifted from controlling information flows among citizens to directly disrupting spontaneous movements online. And the best way to do that is to learn to mimic the online uprisings they seek to quash.
Forget fake news—governments will increasingly manufacture entire fake movements. In Mexico, China, Ecuador, Turkey, and Azerbaijan (to cite only a few), quasi-governmental troll armies target government critics, human-rights activists, and journalists with vitriolic threats, delivered with the relentless intensity of bot armies that never need to sleep. Their goal is to intimidate by posing as a real counter-movement, leaving their targets convinced that much of their country is bent on hunting them down. The psychological toll of pitting yourself against an army of dedicated trolls who pose as your fellow citizens is often enough to silence even the most passionate dissenters.
Once they have created a digital shadow movement of their own, step two is to cast doubt on the original movement they oppose. You don’t need real allies to do this: Disinformation actors are already creating fake organizations that call on supporters to protest in the streets. After the line between real and fake has been blurred beyond immediate public recognition, anyone can claim that their opposition’s digital presence is falsely manufactured.
In May 2018, a few weeks before his re-election, Turkish president Recep Tayyip Erdoğan announced that he would step down if it were the will of the people. A large-scale online movement immediately spread on social media—#Tamam (“Enough!”)—calling for him to step down and unifying the opposition. But Erdoğan was quick to dismiss the entire movement as an army of liberal bots and trolls (with a pinch of “foreign interference” thrown into the mix). By the time digital Sherlocks and fact-checkers had disproved Erdoğan’s dismissal of the movement, trolls and supporters had already spread the false claim widely.
Manufacturing evidence to support or discredit collective action has never been so easy. Casting doubt on legitimate movements, enhancing the reach of divisive factions, manufacturing false collective action: Governments’ appetite for these methods is increasing, and the arsenal of digital tools and techniques facilitating them is rapidly expanding. With increasingly sophisticated deepfakes on the horizon and an always-expanding set of new formats to consume and share materials, disproving dangerous claims before they reach critical mass is going to be an extraordinarily challenging game of cat and mouse. Events in Syria unfortunately offer a lens into such techniques, with manufactured evidence routinely being used to discredit humanitarian actors on the ground, as well as to obscure facts and war crimes surrounding the conflict.
In the very near future, a digital take on old-school double-agent tactics will be used to further erode trust in collective action. We can expect a new wave of “false flag” operations in which a movement’s opponents plant bots and trolls within genuine grassroots movements in order to later “expose” them and publicly denounce those movements as fake. This will further muddy the waters and discredit collective action.
Governments already have all the technology and incentives they need to keep eroding trust in online movements, to fabricate fake uprisings, and to silence and intimidate their opponents. We should worry as much about ensuring we continue to believe in true collective action as we worry about fake news.