Throughout artificial intelligence’s 70-year history, the field has had only one breakthrough, argues investor and former head of Google China Kai-Fu Lee: deep learning.
“So why, you might ask, why do we see all these headlines about AI doing cancer diagnosis, beating [humans at] Go, beating [humans at] chess, and doing all kinds of amazing things?” he said, speaking at an Oct. 9 event for Quartz and Retro Report’s “What Happens Next” project. “The reason is these are mere applications that were run on top of the one breakthrough.”
The deep-learning breakthrough happened in 2012, when two parallel ideas merged during an AI competition called the ImageNet challenge.
Stanford professor Fei-Fei Li had spent years collecting and organizing images, guided by the idea that showing algorithms more data mattered more than crafting the perfect learning algorithm. At the University of Toronto, professor Geoff Hinton and Ph.D. students Alex Krizhevsky and Ilya Sutskever used Li’s data to supercharge their neural networks, a fringe idea at the time that took inspiration from how the brain uses distributed neurons to form larger ideas.
The Toronto team entered the ImageNet challenge with their neural network and became the first team to break 75% accuracy in the competition. The world took notice. Starting the following year, every winning team used this neural-network approach, called deep learning. Google hired Hinton, Sutskever, and Krizhevsky, and soon deep learning was everywhere.
Lee thinks that this breakthrough will remain AI’s biggest for years to come. “Do not think that this is going to be a renaissance age with a zillion discoveries,” he said. “There was one discovery and lots of applications built on that discovery.”