We’re inching closer to a reality where our cars drive themselves and we just relax in the back as they ferry us to our destinations. But we’re not quite there yet: Many of the automakers and Silicon Valley companies striving to get self-driving cars onto the road have said the technology is still likely at least a half-decade away from feasibility. Even in public tests like Uber’s autonomous taxis in Pittsburgh, the cars can travel only on certain roads that the company’s engineers have meticulously mapped out.
Uber and most others are working on solving essentially the same problem: how to map out roads and the various obstacles (turns, people) and rules (stop lights, speed limits), and then teach self-driving cars to accurately apply all of that information as they maneuver city streets and highways. But it seems that chip manufacturer Nvidia has a better idea: Why not just have the car learn to drive like a human? In other words, let the car figure out how to react to each new situation as we did when we were teens behind the wheel.
Nvidia released a video Sept. 28 showing a person riding in the front seat of a car, hands stuck out the window, as the car drives itself around traffic cones and other obstacles with ease. But in a blog post earlier this year, the company explained that it didn’t program the car to be able to interpret lane markings, road signs, or specific obstacles. Instead, the company trained a neural network (running on Nvidia hardware, of course) to drive, using video footage recorded from a camera strapped to a car driven by regular humans around California.
The computer was then dropped into a Lincoln sedan strapped with laser sensors. The self-driving Lincoln was able to navigate real New Jersey streets (which, in itself, is no easy feat), extrapolating from the knowledge it had gained from the sample videos. The car was also able to drive in the rain and the dark with apparent facility.
“Our engineering team never explicitly trained the CNN [convolutional neural network] to detect road outlines,” Nvidia’s blog post said. “Instead, using the human steering wheel angles versus the road as a guide, it began to understand the rules of engagement between vehicle and road.”
According to Nvidia’s video, it took only about 20 example trips at different times of day for the system to be able to drive in all sorts of conditions. Given that the system was trained in California and can navigate New Jersey streets, it seems that, theoretically, it could drive anywhere in a country once it understands the basic rules of the road there. (That assumes, however, that the training data it learned from—the human driver footage—doesn’t include the driver doing anything they shouldn’t, like rolling through stop signs or running red lights.)
An Nvidia spokesperson told Quartz that the video was just for research purposes, and the company isn’t planning to develop this particular technology for the roads just yet. It is, however, already partnering with a range of companies working on self-driving cars, including Audi, Tesla, Volvo, and Mercedes-Benz. “With deep learning, the vehicle can be trained to have superhuman levels of perception, driving safer than anyone on the road,” the spokesperson told Quartz, but right now that would require “supercomputers in the cloud,” which the company is not currently pursuing.
But if the technology can be developed and trained with as little data as Nvidia says, and it’s already as reliable as a human, it could potentially one day surpass our own driving abilities. All it would need is enough computing power behind it and time to learn endlessly from its own videos until it’s perfect. At that point, perhaps, the idea of riding around in a self-driving car won’t feel as off-putting as it currently does.
Correction: An earlier version of this post said that Nvidia tested its cars in New Jersey and drove them on Californian streets. The opposite was actually the case.