Bill Gates hosted a Reddit Ask Me Anything session yesterday, and in between pushing his philanthropic agenda and divulging his Super Bowl pick (Seahawks, duh), the Microsoft co-founder revealed that he has joined a growing list of tech luminaries who have reservations about artificial intelligence.
In response to Reddit user beastcoin’s question, “How much of an existential threat do you think machine superintelligence will be and do you believe full end-to-end encryption for all internet activity [sic] can do anything to protect us from that threat (eg. the more the machines can’t know, the better)??” Gates wrote this (he didn’t answer the second part of the question):
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.
As Mashable points out, that doesn’t mean Gates is shying away from AI. During the AMA he also talked about advances in personal computing that use AI, such as a Microsoft project called Personal Agent that works across devices to “remember everything and help you go back and find things and help you pick what things to pay attention to.”
As other outlets have observed, Gates joins a few other prominent figures who have urged public caution in developing AI. Tesla CEO Elon Musk, who has repeatedly voiced such concerns, recently pledged $10 million to the Future of Life Institute to fund research grants investigating AI's potentially negative implications.
For his part, Musk has been more outspoken and dramatic about his concern:
“[I]n the movie “Terminator,” they didn’t create A.I. to—they didn’t expect, you know some sort of “Terminator”-like outcome. It is sort of like the “Monty Python” thing: Nobody expects the Spanish inquisition. It’s just—you know, but you have to be careful.”
Like Gates, Musk has indicated that he believes in the benefits of AI, up to a certain point. In a video about his donation he said that “the greatest benefits from AI would probably be in drudgery…or tasks that are mentally boring, not interesting.”
Stephen Hawking has added to the clarion calls. The physicist co-wrote a piece for the Independent with Max Tegmark, the Future of Life Institute's co-founder, and two others last year on the dangers of AI. Hawking is an obvious supporter of some uses of artificial intelligence: the Intel system that helps him speak despite having Lou Gehrig's disease uses AI, for example, to predict his next words, as he explained in an interview with the BBC. But in that interview he also warned of the danger that artificial intelligence could overtake humans:
“The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”