In July, Reddit announced that the famed physicist Stephen Hawking would answer questions in its popular ask-me-anything forum. However, because of Hawking’s busy schedule and the difficulty he has responding to questions in real time, the answers were to be posted later in the year.
On Thursday (Oct. 8), answers to nine questions were posted. Quartz picked out the highlights below.
How long will it take to develop artificial intelligence (AI)?
There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime.
When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.
What happens when AI can evolve to become more intelligent?
It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents.
If this happens [to AI], we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
Why are you worried about the rise of artificial intelligence?
The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.
Have you thought of “technological unemployment,” where machines take all our jobs?
The outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.
Would an AI have the basic drives that evolution gave living things, such as survival and reproduction, and if not, would it be a threat to humankind?
An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro [a computer scientist and expert on machine learning], an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.
What is the one mystery that you find most intriguing, and why?
Women. My PA reminds me that although I have a PhD in physics, women should remain a mystery.
What is your favorite song and movie?
“Have I Told You Lately” by Rod Stewart and Jules et Jim (1962).
What was the last thing you saw online that you found hilarious?
The Big Bang Theory.