The late physicist Stephen Hawking’s last writings predict that a breed of superhumans will take over, having used genetic engineering to surpass their fellow beings.
In Brief Answers to the Big Questions, to be published on Oct. 16 and excerpted today in the UK’s Sunday Times (paywall), Hawking pulls no punches on subjects like machines taking over, the biggest threat to Earth, and the possibilities of intelligent life in space.
Hawking delivers a grave warning on the importance of regulating AI, noting that “in the future AI could develop a will of its own, a will that is in conflict with ours.” A possible arms race over autonomous weapons should be stopped before it can start, he writes, asking what would happen if a crash similar to the 2010 stock-market Flash Crash happened with weapons. He continues:
In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.
The bad news: At some point in the next 1,000 years, nuclear war or environmental calamity will “cripple Earth.” However, by then, “our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.” The Earth’s other species probably won’t make it, though.
The humans who do escape Earth will probably be new “superhumans” who have used gene-editing technology such as CRISPR to outpace others. They’ll do so by defying laws against genetic engineering, improving their memories, disease resistance, and life expectancy, he says.
Hawking seems curiously enthusiastic about this final point, writing, “There is no time to wait for Darwinian evolution to make us more intelligent and better natured.”
Once such superhumans appear, there are going to be significant political problems with the unimproved humans, who won’t be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings who are improving themselves at an ever-increasing rate. If the human race manages to redesign itself, it will probably spread out and colonise other planets and stars.
Hawking acknowledges there are various explanations for why intelligent life hasn’t been found or hasn’t visited Earth. His predictions here aren’t so bold, but his preferred explanation is that humans have “overlooked” forms of intelligent life that are out there.
Is there a God? No, Hawking says.
The question is, is the way the universe began chosen by God for reasons we can’t understand, or was it determined by a law of science? I believe the second. If you like, you can call the laws of science “God”, but it wouldn’t be a personal God that you would meet and put questions to.
Threat number one is an asteroid collision, like the one that killed the dinosaurs, and “we have no defense” against that, Hawking writes. More immediately: climate change. “A rise in ocean temperature would melt the ice caps and cause the release of large amounts of carbon dioxide,” Hawking writes. “Both effects could make our climate like that of Venus with a temperature of 250C.”
The best idea humanity could implement, in his view: nuclear fusion power, which would give us clean energy with no pollution or global warming.