Facebook and Intel built killer AIs that dominated a “Doom” video game competition

The AI hunter tracks its prey.
Image: VizDoom

It’s pretty much universally agreed that we shouldn’t give AI-powered robots rocket launchers and set them loose to battle each other to the death. Luckily, we can recreate that in a video game.

Facebook and Intel took home 1st place prizes today (Sept. 22) in VizDoom 2016, a competition to see who can create the best “Doom”-playing AI. The 1993 video game, in which a player fights the scourge of hell, has become a touchstone for AI research thanks to its simple 3D maps and its potential for different styles of play.

Facebook, which competed under the team name F1, took home the (figurative) gold medal in the competition’s first challenge, where AIs armed with rocket launchers fought on a map they had seen before. The goal was simple: kill each other. The bots could heal themselves with Medikits and gather extra ammunition. The Facebook team won 10 of 12 matches.

Intel won the more complex second challenge, where the AI was dropped into a map it had never seen before. Alternate weapons were scattered across the map, so the bots had to contend not only with unfamiliar terrain but also with how different weapons would affect the potential outcome of the match. Intel also won 10 of the 12 games, but by a much larger margin than Facebook’s win.

Pretty much every modern video game claims to have AI, a blanket term for in-game characters that seemingly act autonomously. In reality, those characters are scripted: they’re programmed to exhibit certain behaviors, act within certain limits, and never learn.

The AIs in the VizDoom competition don’t work that way. They play the same way any human would: by looking at the screen, processing what’s happening, figuring out the best strategy to win, and then actually controlling the character to achieve that goal. Each competitor’s bot is built on the VizDoom platform, software that standardizes how an AI can interact with the game. The teams tinker with specific learning approaches, but mostly rely on variations of reinforcement learning, in which a bot is set loose to play the game and fail, each time learning slightly more about how to win.
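To give a flavor of that trial-and-error loop, here is a minimal sketch of one classic reinforcement-learning technique, tabular Q-learning, applied to a made-up five-cell corridor. This is purely illustrative: the actual competition bots learn from raw screen pixels with far more sophisticated methods, and nothing below reflects any team’s real code.

```python
import random

# Toy setup: the agent starts in cell 0 and "wins" by reaching cell 4.
N_STATES = 5          # cells 0..4; cell 4 is the terminal, rewarding state
ACTIONS = [-1, +1]    # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q[state][action] = the agent's current estimate of future reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what has been learned; occasionally explore at random.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward the reward plus discounted future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After many failed and successful episodes, the greedy policy heads right.
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

Early episodes wander almost blindly, exactly the “play and fail” phase described above; once the first win propagates reward estimates backward through the table, later episodes become short and purposeful.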

But even the bots that didn’t win brought interesting ideas to the table. Clyde, created by University of Essex PhD candidate Dino Ratcliffe, preferred spawn-camping, according to a feature on the competition by Engadget. Spawn-camping is the act of shooting players just as they re-appear after dying in the game. Unfortunately, Ratcliffe’s computer broke a day before the competition, and a severely under-trained version of Clyde had to compete.

The underprepared bot, which had completed only 40% of its training, still took 3rd place.