“The only known examples of general-purpose intelligence in the natural world arose from a combination of evolution, development, and learning, grounded in physics and the sensory apparatus of animals,” DeepMind researchers write in a blog post. “There are compelling reasons to think that it may be fundamentally easier to develop intelligence in a 3D world, observed from a first-person viewpoint, like DeepMind Lab.”

The AI’s “body” in the Lab is a floating orb that moves by firing thrusters in any direction. Instead of giving the AI direct access to the 3D environment’s code, the Lab only lets it observe pixels the way a human would—meaning it has to learn to see and differentiate objects. Out of the box, the AI can be tasked with navigating mazes, playing laser tag, and trying not to fall off precarious cliffs. Developers will be able to easily create and share new levels, and DeepMind says it hopes a community will form around building levels that teach different skills.

DeepMind has long operated on the assumption that video games can teach AI many of the skills needed to operate in the real world, like navigating tight indoor spaces to complete a task. Until recently, however, these attempts have taken place in games like Atari titles or Doom—poor analogues for the physical world. The Alphabet company has been using the Lab in its own studies, but an open-source version is likely to elicit ideas from other AI researchers, speeding up DeepMind’s own research in return.
