Economists and management experts have begun to model what might happen when robots are smart enough to do our jobs for us. Erik Brynjolfsson and Andrew McAfee argue in their book that the transition to robot labor will be “more transformative than the Industrial Revolution.”
Why worry about this issue now? A commonly cited reason is that a couple more decades of exponential increases in computing power (dubbed “Moore’s Law”) will give us the computational power of the human brain. Hence, the robot revolution is just around the corner.
The trouble with this argument is that, as economist Robin Hanson reminds us, artificial intelligence “takes software, not just hardware.” We’ve had the computing power of a honeybee’s brain for quite a while now, but that doesn’t mean we know how to build tiny robots that fend for themselves outside the lab, find their own sources of energy, and communicate with others to build their homes in the wild.
Purveyors of the “Moore’s law hence AI” argument know this, but they downplay its significance. In a recent article for Mother Jones, Kevin Drum spends at least 10 paragraphs—and one very nice animated graphic—on Moore’s Law, but crams four different caveats for his argument into a single paragraph, with just one sentence devoted to the caveat about software difficulty. He concludes: “True artificial intelligence will very likely be here within a couple of decades.”
Not so fast. Forecasting AI is more complicated than that. Progress in AI software might slow, as it already has for many AI subtasks, or we might see breakthroughs that improve the efficiency of particular methods by 20 orders of magnitude. As AI draws near, governments might regulate AI development to avoid mass unemployment. Alternatively, an AI “Sputnik moment” that publicly demonstrates the real possibility of AI could spur an AI race between world powers. Moore’s law might come to an end, or quantum computing could take flight.
It is wise to examine how AI technologies might develop, and how we might ensure they have a positive impact. (I run a research institute devoted entirely to this important problem.) But as we draw up our plans for navigating the future of AI, let us not pretend to know more than we do.
We welcome your comments at email@example.com.