Scientists built an AI robot that’s figuring life out just like humans do

There are so many precious moments in a newborn’s life that parents love to capture on film: the first time their child sits up on her own, the first time she stands, her first cautious steps. Igor Mordatch, a robotics postdoctoral researcher at the University of California, Berkeley, has been doing something similar for a humanoid robot called Darwin, which he programmed to learn much as a human child might.

Mordatch and his team at Berkeley’s robotics lab started out by spending two years building a computer system that simulates how a robot might act in certain situations. The system is a group of neural networks, computer algorithms modeled loosely on the structure of the human brain. In the last few months, his team has been transferring that system into Darwin itself. There, the simulations act like a game plan that Darwin can use to figure out how to perform tasks on its own, much as a child sees other people walking and figures out, gradually and with lots of mistakes, that she can do it too.

“The neural networks act as a map, a way to make decisions,” Mordatch told Quartz. Darwin has multiple sensors that feed data to the neural networks (the position of its limbs, the amount of pressure on its feet, the load on its joints, for instance), and the system outputs what actions the robot should be taking. “The robot only knows where it is, where it wants to be, and the neural networks output the actions it should take to keep achieving the action it wants to do,” Mordatch added.
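In broad strokes, a setup like the one Mordatch describes can be sketched as a small feed-forward network that maps one vector of sensor readings to one vector of actions. Everything here is illustrative, not Darwin’s actual system: the layer sizes, the tanh nonlinearity, the sensor and action counts, and the randomly initialized weights (which stand in for weights learned in simulation) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 12   # e.g. limb positions, foot pressure, joint loads (hypothetical count)
N_HIDDEN = 32
N_ACTIONS = 8    # e.g. target joint angles or torques (hypothetical count)

# Random weights stand in for weights a real system would learn in simulation.
W1 = rng.normal(0, 0.1, (N_HIDDEN, N_SENSORS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, N_HIDDEN))
b2 = np.zeros(N_ACTIONS)

def policy(sensors: np.ndarray) -> np.ndarray:
    """Map one vector of sensor readings to one vector of actions."""
    hidden = np.tanh(W1 @ sensors + b1)
    return np.tanh(W2 @ hidden + b2)   # tanh keeps actions bounded in (-1, 1)

# One control step: read the sensors, compute the actions to take.
sensors = rng.normal(size=N_SENSORS)
actions = policy(sensors)
```

The key point the quote makes is visible in the function signature: the network never plans a whole walk in advance; it just keeps mapping the current sensor state to the next action, step after step.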

Right now, Mordatch is taking the data from Darwin’s walking tests and feeding it back into the simulations to make them more accurate. The goal is a machine-learning system that could theoretically allow Darwin to wander around on its own. Mordatch said the team is working towards having Darwin walk around the Berkeley campus on its own in January (presumably with a handler explaining the situation to passers-by), and tackling more complex tasks, like recognizing and picking up objects, in June.
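The feedback loop described above, real walking data used to recalibrate the simulation, can be illustrated with a deliberately tiny toy example. Everything here is hypothetical: the one-parameter “simulator,” the made-up (push, distance) measurements, and the grid search are just a stand-in for the much richer calibration a real robotics pipeline would do.

```python
def simulate_step(friction: float, push: float) -> float:
    """Toy simulator: predicted distance moved for a given push and friction."""
    return push * (1.0 - friction)

# Made-up stand-ins for real walking measurements: (push, observed distance) pairs.
real_data = [(1.0, 0.72), (0.8, 0.57), (1.2, 0.87)]

def calibrate(data, candidates):
    """Pick the friction value whose simulated distances best match the data."""
    def error(f):
        return sum((simulate_step(f, push) - dist) ** 2 for push, dist in data)
    return min(candidates, key=error)

# Re-fit the simulator parameter so simulation agrees with the real robot.
friction = calibrate(real_data, [i / 100 for i in range(100)])
```

The design idea is the same at any scale: the better the simulator matches what the real robot’s sensors report, the more useful the plans (or learned policies) it produces become back on the hardware.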

Although we’re starting to see more and more robots entering homes and workplaces—from vacuum cleaners to robot butlers—most of them tend to be shaped more like trash cans or other blocky objects. Scientists and engineers have shied away from building robots that walk, like us, on two feet, as it’s very hard to keep them from falling over.

Mordatch points out, however, that we have constructed an environment in which human-shaped beings thrive, so it makes sense to make human-shaped robots. While we’re still a long way from having our own versions of Star Wars’ C-3PO, Mordatch believes that programming robots to figure out how to get around using neural networks and simulated experiences will become more commonplace. That will distinguish them from robots programmed explicitly to complete a task a specific way, as Mordatch says Boston Dynamics does with its Atlas robot. (Quartz has asked Boston Dynamics to confirm this; when the robots were used at this summer’s DARPA Robotics Challenge, they were only semi-autonomous, sometimes remotely controlled by humans.)

In the future, Mordatch said, he’d like to see robots go places that humans can’t, or shouldn’t—like toxic-waste facilities, or other hazardous environments. But right now, that’d be like learning how to run before you can walk—which Darwin is still struggling with.
