An AI-powered Super Mario and Luigi can now learn from each other to beat their own game

Sassy robot Toad. Image: YouTube/AAAI Video Competition

The machines are starting to team up—although for now, it’s just to beat classic video games.

Researchers at the University of Tübingen in Germany released a video Feb. 1 showing a version of the Nintendo game Super Mario World in which an AI-powered Mario, Peach, Yoshi, Toad, and Luigi (who just looks like a nauseous green Mario) can converse with each other and work together to beat levels in the game, without any human interaction or prior knowledge of the level.

The characters’ “brains” are controlled (much like many humans’) by four motivating desires: wealth, progress, curiosity, and health. As they work through each level, whichever desire is currently strongest drives what they do, such as hunting for coins or power-ups. As the video suggests, if Mario is driven by a “strong desire for wealth,” he’ll reason out how to get more coins, figuring that if he smashes question blocks, he’ll likely get more of them. Mario then learns when his past experiences don’t match what actually happens: for example, if he’s after coins and smashes a block that gives him a star instead, he’ll learn that smashing blocks won’t always get him more coins. Once a desire is fulfilled, the desire engine fires up another motivation.
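To make that loop a little more concrete, here is a minimal Python sketch of a desire-driven agent in that spirit. The four drive names come from the article’s description, but the rule format, the numbers, and the belief-update step are illustrative assumptions, not the researchers’ actual model.

# Illustrative sketch only: the data structures and numbers below are
# assumptions for explanation, not the Tübingen team's implementation.

class DesireEngine:
    def __init__(self):
        # Four motivating drives; a low value means the drive is unsatisfied.
        self.drives = {"wealth": 0.2, "progress": 0.5, "curiosity": 0.8, "health": 0.9}
        # Learned beliefs: (action, expected outcome) -> estimated probability.
        self.beliefs = {("smash_question_block", "coin"): 0.9}

    def strongest_desire(self):
        # The least-satisfied drive wins (e.g. low wealth -> go look for coins).
        return min(self.drives, key=self.drives.get)

    def choose_action(self, desire):
        # Pick the action believed most likely to satisfy the active desire.
        wanted = {"wealth": "coin", "health": "power_up"}.get(desire, "explore")
        candidates = [(a, p) for (a, o), p in self.beliefs.items() if o == wanted]
        if not candidates:
            return "explore"
        return max(candidates, key=lambda ap: ap[1])[0]

    def observe(self, action, expected, outcome, lr=0.3):
        # Learning step: if the outcome doesn't match the expectation
        # (a star instead of a coin), weaken the corresponding belief.
        key = (action, expected)
        prior = self.beliefs.get(key, 0.5)
        hit = 1.0 if outcome == expected else 0.0
        self.beliefs[key] = prior + lr * (hit - prior)

mario = DesireEngine()
desire = mario.strongest_desire()                       # "wealth" is least satisfied
action = mario.choose_action(desire)                    # -> "smash_question_block"
mario.observe(action, expected="coin", outcome="star")  # surprise: belief drops from 0.9 to 0.63

Run as written, the “wealth” drive wins, the agent picks the block-smashing rule, and the surprise star lowers its confidence that question blocks yield coins.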

At any given point, a human can ask Mario through a microphone what he’s up to, and ask him to do something else. When the other characters are in play, they can talk to each other and learn about their different abilities. For instance, Toad can watch Mario smash a block with his head and learn that blocks are breakable, but then discover that his own soft mushroom head isn’t up to the task. The characters can work together, asking each other to use their differing abilities to help the group complete a level.
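A rough sketch of how that kind of social learning could look is below. The character names are from the game, but the class, method names, and data structures are invented for illustration; the idea is simply that general facts picked up by watching others are kept separate from what each character has verified it can do itself.

# Illustrative sketch only: names and structures are assumptions, not the team's code.

class SocialAgent:
    def __init__(self, name):
        self.name = name
        self.world_knowledge = set()   # general facts, e.g. "blocks can be smashed"
        self.own_abilities = {}        # what this particular character can actually do

    def observe_other(self, fact):
        # Watching another character succeed teaches a general fact about the world.
        self.world_knowledge.add(fact)

    def try_action(self, action, succeeded):
        # Each character still has to test an action to know whether it can do it.
        self.own_abilities[action] = succeeded

    def tell(self, other, fact):
        # Characters can also pass facts along by "talking" to each other.
        other.world_knowledge.add(fact)

mario = SocialAgent("Mario")
toad = SocialAgent("Toad")

toad.observe_other("blocks_can_be_smashed")       # Toad sees Mario break a block
toad.try_action("smash_block", succeeded=False)   # ...but his own attempt fails
toad.tell(mario, "blocks_can_be_smashed")         # the fact still spreads through the group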

Beating games is a form of applied AI research that helps show that, in the future, AI could be used to solve general problems rather than specific tasks. Researchers at Google DeepMind built a system last year that could beat many classic Atari video games, and last month the company announced that one of its AI programs had managed to beat Go, a game many thought would take decades to crack. The research at Tübingen, which the group calls “Social Mario,” builds on work the team released last year, in which Mario could learn to beat levels on his own. Much like with humans, it’s a lot easier to solve problems when a few more brains are working on them.

The researchers built the system to show how AI systems could operate in a social capacity with humans, according to Gizmag. “Any type of intelligent support system would benefit from such social capabilities,” Fabian Schrodt, one of the developers on the project, told Gizmag.

But given how matter-of-fact the Mario they’ve created is about killing its enemies (“If I jump on Goomba, then it certainly dies”), it’s a little unsettling to consider how an AI system like this might interact with humans. Hopefully the researchers haven’t shown future AI robots how to work together to overthrow humanity.