Star Trek has inspired many technologies we now take for granted. Communicators, touch screens, replicators, ultra-powerful computers, and artificial intelligence all feature prominently throughout the franchise’s history. Today we thumb through endless scrolling feeds tailored to our whims and enjoy a level of technical sophistication not previously imaginable. We may not be able to fling ourselves around the cosmos at the push of a button yet, but we are on the verge of causing a profound change in the very fabric not of space-time, but of our culture.
The crew of the Starship Enterprise had a host of what we today would call “exponential technologies” at their disposal, such as the ability to travel faster than the speed of light, total control over matter and energy, and general artificial intelligence. When applied responsibly, these innovations enabled a transition to a post-scarcity society where the acquisition of wealth and economic growth were no longer the primary drivers. Here on Earth, the narrative is somewhat different. Our pursuit of economic growth has brought relatively swift advances in computing and robotics, which are fueling growing inequality and eroding our social fabric, leading to who-knows-what kind of dystopian outcome.
Perhaps then, it’s time to examine implementing another Star Trek-style innovation?
The Prime Directive is a non-interference clause in Starfleet’s set of ethical principles designed to protect less-advanced cultures (such as ours) from the unpredictable and likely damaging effects of interacting with a much more advanced society. This isn’t an original trope, and can be found throughout our collective history in the forms of the Bible’s Ten Commandments, the medical profession’s Hippocratic oath, and Asimov’s Three Laws of Robotics, among others. All these standards consist of basic, understandable rules that we apply to our daily interactions to prevent an otherwise rapid and bloody descent into chaos. To paraphrase a certain pointy-eared science officer, it would be highly illogical to allow our enthusiasm for novelty to run amok without safeguards.
Today’s Adderall-powered, minimum-viability obsessed, tech-bro-saturated innovation landscape is predicated on a complete lack of any safeguards. “Fail fast, fail often” and “move fast and break things” are rallying cries among those who seek to make a quick buck with whatever of-the-moment technology solution they can sell. When the snake oil we bought ends up exploding in our faces, we’re met with shrugs and sheepish grins accompanied by a chorus of “we didn’t know” and “we couldn’t predict” non-apologies. It’s happened before, and it will happen again.
That’s not to say that some organizations aren’t trying to institute some kind of ethical regulation: The World Economic Forum came up with the top nine ethical issues in artificial intelligence, a list of concerns around employment, equality, bias, machine-enhanced interactions, and other consequences of AI in our daily lives; the IEEE Standards Association has set up a body called the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems; and Harvard economist Richard B. Freeman published “Who Owns the Robots Rules the World,” a document outlining how workers can benefit from owning part of the robotic capital that replaces them. These are good starts, to be sure, but they’re not even close to meaningful policy.
Beyond industry and academia, what concerns should ordinary people have with the unforeseen and unexpected effects of exponential technologies? What of the billions at the tender mercies of well-financed but badly designed algorithms, propped up by dubious heuristics? Rigged elections and monumental data breaches aside, it seems to be business as usual. Nothing short of a cataclysm is likely to shake us out of our tendency to seamlessly integrate whatever convenience-as-a-service is sold to us by a cut-rate, black-box advertising bot. What combination of circumstances will rouse a somnambulant public from the Skinner boxes we have willingly climbed into?
If we were to use Star Trek as an example, perhaps a global nuclear holocaust followed by a couple of generations of brutal struggle in a diseased, lawless hellscape would do the trick. Silicon Valley can cook up a way to monetize ourselves to borderline extinction—better yet, let them build an AI to do it for us! It would only require a small nudge in the right direction to set off whatever geopolitical time bomb is ticking away in some as-yet unknown part of the world.
The collective disregard for anything outside our algorithmically enhanced social filters keeps us trapped in a mirror universe where we indulge our most basic instincts (compassion and the preservation of life not necessarily among them). Can we prove that we are not, as omnipotent alien and perennial thorn in Captain Picard’s side Q describes, a “dangerous, savage, child race”? Does the political and social will exist to replace greed with beneficence as the motivation for technical progress?
Work is indeed slowly being done to determine the requirements needed to initiate a global shift in our relationship with exponential technologies, and a technological Prime Directive may not be outside the realm of possibility. It will be hardest on those who live through the challenge, but seeking new worlds is something we humans are exceedingly good at. If setting out some ground rules keeps us moving forward, then perhaps the future is somewhere we can boldly go.