Stephen Hawking was a physicist and cosmologist whose power lay in his ability to communicate complex theories to the masses. In his final book, the first published since his death in March, he goes broad.
Hawking’s Brief Answers to the Big Questions is a slim book out today (Oct. 16), in which Hawking summarizes his answers to questions like “Will We Survive on Earth?” (probably not) and “Is Time Travel Possible?” (it can’t yet be ruled out).
In one chapter, titled “How Do We Shape the Future?”, Hawking disagrees with the notion that humans are at “the pinnacle of evolution.” Human endeavor, in his view, has no boundary. He instead sees two options for humanity’s future:
First, the exploration of space for alternative planets on which to live, and second, the positive use of artificial intelligence to improve our world.
Throughout his book, Hawking is pessimistic about the future of humans on Earth. Political instability, climate change, and the possibility of nuclear violence make continuing on Earth untenable, he writes. He advocates in more than one chapter for the colonization of space—the moon, Mars, or an interstellar planet.
He’s also wary of AI, if and when it surpasses human intelligence. “The advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity,” he writes. “The real risk with AI isn’t malice but competence.” He argues for policymakers, the tech industry, and the general public to seriously examine the ethical repercussions of AI.
At the same time, he urges the further development of AI for the greater good: in brain-computer-interface technology and human gene editing, for example.
“When we invented fire, we messed up repeatedly, then invented the fire extinguisher,” he writes. “With more powerful technologies such as nuclear weapons, synthetic biology, and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get.”