It has become an obsession du jour to discuss how Big Tech should be governed to make it more responsible and ethical—and for good reason. Privacy breaches happen almost daily, algorithms show increasing bias along lines of gender and race, and tech companies are accused of socializing losses and privatizing profits by displacing local communities around the world.
Two basic approaches characterize these discussions: The first treats governance as a matter of designing a better technology product for the user—an idea propagated by Facebook founder Mark Zuckerberg. The other views it as a matter of passing antitrust laws that empower the consumer—a major component of US senator Elizabeth Warren’s presidential bid.
The citizen, however? They’re left completely powerless in both “the technical fix” and “the antitrust fix.” What’s needed is a politics of technology centered around citizens, not users and consumers.
The technical fix boils down to the belief that the intractable problems of technology can be solved by redesigning platforms and refining algorithms—i.e., with more technology. Historian of science Evgeny Morozov refers to this as “technological solutionism.”
Coming straight out of Silicon Valley, the technical fix was recently articulated by Facebook CEO Mark Zuckerberg in his privacy-focused vision for social networking. There, Zuckerberg presented a range of encryption tools and safety measures that, he says, build ethics into Facebook. While he claims to recognize that “as a society, we have an opportunity to set out where we stand, to decide how we value private communications,” his vision quickly narrows to a future that is already pre-set: “If we do this well, we can create platforms for private sharing that could be even more important to people.”
Ultimately, Zuckerberg doesn’t merely propose to fix Big Tech’s problems through tech itself—he suggests increased use of his own platform as a cure for the problems it created. Not surprisingly, the technical fix enjoys great hype among tech companies, but it also frequently turns up in the recommendations of international organizations and among policy-makers and academics.
In turn, the antitrust fix identifies unfair competitive advantages in the tech industry as the main culprit. Its proponents aim to achieve ethics through antitrust policies, nourished by scholarship that frames corporate competition as an instrument for ethical markets.
US Democratic presidential candidate Elizabeth Warren is a fan of this approach. She recently launched her vision of how to force tech companies to become more ethical and responsible. From her perspective, breaking up the new tech monopolies and promoting more competitive markets is the best way of addressing consumer concerns: “More competition means more options for consumers and content creators, and more pressure on companies like Facebook.”
But while seemingly opposing the limited politics of the technical fix, Warren still falls back on a similar predetermined future:
“I want a government that makes sure everybody—even the biggest and most powerful companies in America—plays by the rules.” Such a vision proposes that more competition will not only protect consumers from privacy abuses, but ensure “that the next generation of great American tech companies can flourish.”
The antitrust fix is a perfect mediator between state regulation and market growth, but it essentially precludes a third player—citizens—from influencing the rules that govern them.
While creating more ethical algorithms and posing legal challenges to unfair competition are both admirable endeavors, they only scratch the surface of a much larger complex of what we might call “the politics of technology.”
It is easy to fall for the engineer’s dream, in which the world consists of problems and solutions: “solving life’s dilemmas one app at a time,” as Apple’s marketing slogan would have it. Sweet dream indeed, but it does not render technologies unproblematic or strip them of political meaning. What is worse, it leaves us in a constant and somewhat perplexed mode of risk control instead of seizing the moment to develop a genuine space for democratic deliberation.
The governance of technology should acknowledge that technologies will not stop raising thorny political questions. Engaging with the politics of technology necessitates that we “stay with the trouble,” to borrow a phrase from the scholar of science and technology, Donna Haraway. Moving from fixing to staying with the trouble of technology gives rise to more crucial questions: Do we want digital products and services to be an ever-increasing part of our lives and societies? How far-reaching should their hold on our lives be, and with which instruments should they be governed? Who is to decide about these questions? Corporations or governments alone, or all of us as a society?
As technologies are always already political, they should be at the center of public reasoning, not relegated to CEO fantasies and election campaign maneuvers. Current approaches pretend to open up a larger debate about the role and place of digital technologies in our societies. In fact, they close it down by reinforcing Big Tech’s master narrative of a determined future in which technology guarantees individual flourishing and collective well-being.
To take the politics of technology seriously means to care about citizens and their rights to imagine, create, and inhabit a livable and desirable world. “The issue, in other words, is no longer whether the public should have a say in technical decisions, but how to promote more meaningful interaction among policy-makers, scientific experts, corporate producers, and the public,” as Harvard professor Sheila Jasanoff puts it.
Hope, however, is not futile. While there is no ready-made solution to these quandaries, some jurisdictions—like the European Union—have already made attempts to integrate the politics of technology into deliberative processes. One such example is the long-discussed General Data Protection Regulation (GDPR), which grants private citizens an active role in its enforcement, paving the way for new forms of advocacy. Another emerging mechanism is the European Citizens’ Initiative, a direct-democracy tool that enables 1 million EU citizens from at least seven EU countries to call on the European Commission to propose legislation on matters where the EU has competence to legislate. One recent initiative, started by citizens concerned with the use of toxic pesticides, has successfully increased transparency in the scientific assessments carried out by the European Food Safety Authority, diminishing the influence of the pesticide industry.
Admittedly, such examples are only small steps toward a rich and robust politics of technology. Nonetheless, they show an alternative way of governing technology that doesn’t rely on ineffective algorithmic or legal fixes. Unfortunately, this way has been obscured by a narrow focus on pressuring companies, which have a mixed track record, to think and act in more ethical and responsible ways. Instead of concentrating on the power of corporations, we should be more concerned with the power of the people to mobilize their own visions and concerns about technologies.
Despite contributions from the ethicists and policy-makers so frequently hired in tech today, the corporate vision of technology is somewhat trite and unimaginative. It’s always predetermined: Facebook is still providing the infrastructure for social interaction, and Amazon is still delivering your material needs, including food and news. Perhaps as slightly better versions of themselves, but they are still there.
A more democratic vision of technology is, however, not preconditioned in the same way. Here the companies are not necessarily playing the role that they would like to play, and, even more radically, perhaps they are not playing any role at all. Among all the possibilities for technology governance on the table, one thing is for sure: We, the citizens, will have to stay with the trouble of tech.