As Robert Oppenheimer watched the mushroom cloud of the first nuclear detonation bloom over a New Mexico test site, he recalled a line from the Hindu scripture the Bhagavad-Gita: “Now I am become Death, the destroyer of worlds.” The scientist who had helped build the world’s most lethal weapon saw that physicists would forever have to confront the consequences of their discoveries.
Today, computer scientists are contemplating their own “A-bomb moment.” Facebook’s carelessness with user data, and the attacks on Western democracies that the company has enabled, weigh on software engineers’ consciences.
“Computer science is a field which hasn’t yet encountered consequences,” writes Yonatan Zunger, a former security and privacy engineer at Google, who has compared software engineers, given the power now in their hands, to “kids in a toy shop full of loaded AK-47’s.” Safety and ethics are still treated as electives in software design, not as its foundations.
Other fields have already had to reckon with such questions. Chemistry’s invention of dynamite and chemical weapons, and biology’s use as a rationale for eugenics, prompted the creation of institutional review boards, mid-career certification, and professional codes of conduct. But software engineering is different. Coders are neither a profession nor a society in the traditional sense. Many are self-taught, and many are healthily skeptical of any effort to corral the field toward consensus. A top-down solution will not be enough.
Still, efforts are underway. The Algorithmic Justice League is rooting out bias in algorithmic systems. Former Googler Tristan Harris founded the Center for Humane Technology to “realign technology with humanity’s best interests.” Calls have grown louder for computer scientists and coders to look more like their cousins in physics, medicine, and civil engineering.
The stakes couldn’t be higher. Technology will soon mediate every aspect of our lives, if it doesn’t already. Machines recognize speech and written text. Algorithms can already recognize your face, and the faces of millions of your fellow citizens, and can infer from your data, with increasing accuracy, your gender, income, creditworthiness, mental health, personality, and feelings.
Tech companies already obsess over reliability—gaming out the “what-ifs” to prevent computer systems from crashing. Zunger says they need to apply the same “what-if” planning to human consequences. “If you can do it without wanting to hide under a table, you’re not thinking hard enough,” he writes. “There are worse failure modes, and they’re coming for you.”