At Silicon Valley’s inaugural Comic Con, we gave a talk called “Superbabies vs. AI.” Astro, who is captain of moonshots at Alphabet’s X division, argued that genetically engineered babies are going to destroy civilization as we know it. He sees the horror of eugenics, X-Men, and a planet entirely populated by the sort of kids who beat him up in middle school, all rolled into one. Danielle, a physician-scientist and wife of said captain of moonshots, argued that the robot apocalypse is going to annihilate humanity. Superintelligent computers will eventually destroy us all, no matter what sort of Asimovian instructions we try to give them. The jury is still out on who won the debate, but here are the most important issues we explored.
Will highly evolved AI break into banking systems and steal all of our money or send drones to kill us all?
It’s not likely that AI will ever resemble a human supervillain. As an analogy, airplanes and birds can both fly, but they are not otherwise similar, and neither is better at every aspect of flying. Likewise, computers are already far better than humans at memory and calculation, but they can’t manage a three-minute conversation with a barista at Starbucks.
Even if we could build an AI that is similar to humans but smarter, there’s no evidence that being smart correlates very well with being a supervillain (except in movies, of course). Hitler didn’t wreak havoc on the planet because he was the smartest person in the world. He was good at manipulating people’s emotions and taking advantage of a moment in history, which are innately human skills. Much of the harm we do to other people and to the planet is a mark of our stupidity, not our intelligence.
There’s also the issue of motivation. What would inspire an AI to seize the world’s money or kill us all? It isn’t likely to be programmed to be a greedy curmudgeon. We like to project human desires onto machines, but an artificially intelligent system isn’t interested in buying a superyacht so that it can get all the supercomputer babes, and it doesn’t have a use for our 401(k)s.
The most common doomsday scenario imagined by science fiction writers is that robots programmed to perform useful tasks, such as cleaning houses, will decide that the most efficient way to fulfill their duties is to get rid of us—it’s easy to keep vacant houses clean. For something like that to happen, a whole series of very low-probability events would need to take place: the robots would need to decide that their job is not to clean but to prevent mess; they would have to understand the concept of death and how to cause it; they would need to conclude that disposing of bodies is easier than washing dishes; they would need to have no safeguards against harming people; and they would have to be physically equipped to kill. Cleaning robots aren’t likely to come with standard-issue lethal laser beams, and even if all of those things happened, it wouldn’t be the end of humanity. Seven hundred people are killed by toasters each year, and that hasn’t stopped us from bringing toasters into our homes.
The application of CRISPR (clustered regularly interspaced short palindromic repeats) technology to genetic engineering has raised the specter of designer babies once again—but superbabies aren’t coming anytime soon. CRISPR is not ready for use in human embryos, and it may never be good enough. Chinese researchers tried to repair a single defective gene in nonviable human embryos, and the results were dismal: poor efficiency and many off-target effects. That’s not even the real stumbling block, however. A much bigger issue is that we can’t define traits like beauty and intelligence, and even if we could, we have no idea which genes make people smart, or attractive, or star soccer players.
Even a simple, easily measured trait like height is not well understood from a genetics perspective. Over 400 gene regions have so far been discovered to influence height, and together they account for a mere 20% of its heritability. There are probably thousands of genes that determine height, and we don’t yet know what most of them are. Of the gene regions that have been identified, many have no known function. Some have functions you might predict, like bone growth or collagen metabolism, and some, like mTOR, have dozens of known functions, many of them life-critical. No parent is going to let scientists muck around with thousands of his or her baby’s genes in the hope of having a tall child.
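The “missing heritability” in the height numbers above can be checked with simple arithmetic. This is a minimal sketch; the ~80% heritability figure is our assumption (a commonly cited twin-study estimate, not a number from this article), while the 20% explained by known regions comes from the text:

```python
# Back-of-envelope look at the "missing heritability" of height.
heritability = 0.8          # ASSUMED total heritability of height (twin-study ballpark)
fraction_explained = 0.20   # known ~400 gene regions explain ~20% of that (from the text)

# Share of total height variation the known regions account for:
variance_explained = heritability * fraction_explained
print(f"Known gene regions explain ~{variance_explained:.0%} of height variation")

# The rest of the genetic signal is spread across variants we haven't mapped:
missing = heritability * (1 - fraction_explained)
print(f"~{missing:.0%} of height variation is genetic but still unmapped")
```

Under these assumptions, the 400-odd known regions account for only about 16% of height variation, with roughly 64% genetic but unexplained — which is why “thousands of genes” is the working guess.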
The situation for complex traits like intelligence is far more complicated. Nobody has yet identified “smart genes.” There are thousands of genes that contribute to brain development, and many of them have multiple functions, but not one of them is an “intelligence” gene.
If one day we figure out how to engineer babies to be smarter, won’t rich people be able to buy success for their children?
Intelligence is only partially inherited, so it can never be guaranteed by genetics. Even if there were such a thing as “intelligence genes,” there are a lot of non-genetic factors that go into determining how smart people turn out to be. Money, on the other hand, is 100% heritable. So if we want a more level playing field, we could start with a robust estate tax.
George W. Bush didn’t get to be president because he’s a supergenius, and there are geniuses growing up in neighborhoods like Compton right now who will end up getting shot instead of becoming president. Raw intelligence is not the primary factor that determines who becomes successful in our society. It’s probably not even one of the top 10 factors.
The economy is always in a state of flux. In the 1800s, 80% of the labor force worked on farms; today it’s 2%, but we don’t have 78% unemployment. Entirely new industries may continue to spring up and offer new employment opportunities. Ironically, “smart manufacturing,” which is partly AI, is touted by politicians on the right and on the left as critical to saving American manufacturing jobs. If AI makes businesses more efficient, contributing to growth of the economy, there will be more money to invest in new ventures. There are probably going to be entirely new sectors of the economy in 100 years that we can’t even imagine right now. It’s possible that total employment will fall, but economic growth will continue as we’re able to produce more with less.
It’s not even clear that falling employment would be a bad thing. Only 13% of people worldwide actually like going to work. Most people don’t like their jobs and wish that they could spend more time with family and friends and on hobbies. If everyone were guaranteed a base income, then people could spend their lives doing the things they love instead of the things they’re told to do for money.
The worry that we are “playing God” comes up whenever we imagine frightening future scenarios, but as technology becomes more familiar, it stops seeming scary or sinister (even though sometimes, as in the case of guns, it probably should). At its core, artificial intelligence is just a fancy way of counting. The label “AI” gets assigned to the parts of the field of computer science that don’t work yet. Once the technology can fly planes or trade stocks or read CT scans, we don’t call it AI anymore. We call it computer vision, or path planning, or expert systems. Nobody thinks of autopilot or CT scanners as evidence that we’re playing God.
The same applies to the biological sciences. Louise Brown, the first test-tube baby, was called Frankenbaby, and people said that she should be kept in a toilet bowl or a fish tank. There was a lot of hand-wringing about the slippery slope leading to designer babies. Now in vitro fertilization is commonplace, and those early fears and calls for a moratorium seem almost quaint.
If we did genetically engineer babies, couldn’t genetic mistakes sweep through human populations like wildfire?
There are 4 million babies born each year with genetic diseases. It’s right to be concerned about adding more genetic disease to the mix, but that concern needs to be balanced against the possibility of curing those diseases. For the parents who watch their kids suffer and often die of diseases that are caused by a single mistake in a single gene, the fear of potentially introducing an unforeseen genetic change doesn’t seem very scary. Also, the many genetic diseases that already exist haven’t managed to wipe out humanity yet; there’s no good reason to think that a new man-made genetic disease would be worse.
It seems that we are poised to make big breakthroughs in genetic engineering and AI. What is the timeline for advancement?
We are already in the midst of great breakthroughs; they just might not look exactly as we have imagined. We are a very long way from creating an artificial intelligence that resembles humans, and there may be no compelling case for even trying to make such a thing. The robots that are useful to us today are highly specialized and bear little resemblance to C-3PO. Self-driving cars are robots that could save millions of lives every year. Mars rovers are exploring a distant planet. Robots are performing surgery and dispensing cash when the banks are closed. Advances in AI may not resemble science fiction, but they may turn out to be an important part of the solution to immediate dangers faced by humanity, like climate change.
Genetic engineering is a critical tool for biomedical research; by disabling or altering genes in cells, we discover what those genes do. Genetic engineering has already given us insulin to treat diabetes, vaccines to protect our children, and chemotherapy to treat cancer. It’s unlikely that we will ever be able to make designer babies, and even if we could, there is not much chance that they would alter the future of humanity. If we made 1,000 “superbabies” today, then in 300 years, when large parts of the planet may be uninhabitable due to global warming, there would be only about 30,000 possible carriers of those genes. Meanwhile, breakthroughs in genetic engineering can give us tools to cope with and combat the effects of climate change. A rice crop with low methane emissions may not sound as exciting as superbabies, but it’s more likely to shape the future of humanity.
Doomsday scenarios about the robot apocalypse and mutant babies don’t have a high likelihood of coming about, and meanwhile, humanity is facing a lot of serious and immediate challenges. Instead of focusing on hypothetical, far-fetched, worst-case-scenario fears, we should be looking for social and technological solutions to the problems we already have. Environmental destruction and human strife have wiped out infinitely more civilizations than robots and superbabies combined. And who knows, if we get really lucky, we might have a Professor X or Iron Man’s J.A.R.V.I.S. one day. That wouldn’t be so bad, would it?