How psychologists used a mobile game to make airport baggage screening better

A job to him, a game to you.
Image: AP Photo/Lai Seng Sin

Baggage screening officers at airports are good at spotting common suspicious objects. They’re less good at spotting rare ones—the kind that appear only once in hundreds or thousands of bags. How to improve their success rate? Monitoring every officer and giving individual feedback would be prohibitively labor-intensive. A research team at Duke University in Durham, North Carolina, has devised another method: crowdsourcing knowledge from millions of people playing a game on their mobile devices.

The researchers partnered with Kedlin, the developer of Airport Scanner, a game that simulates an airport X-ray, where players try to identify prohibited items by tapping on their screens. Kedlin provided anonymous data from more than 2 billion plays on over 7 million mobile devices from January 2013 to November 2014. (People who downloaded the game were asked if they would consent to data collection.) That gave the researchers a far bigger dataset, for far less work, than they could have amassed by monitoring real-life baggage screeners.

They found that when a target was particularly rare, players were much more likely to miss it. And when two different banned items appeared in the same bag, one of them was more likely to be missed than when a bag contained two identical items. Stephen Mitroff, the lead author of the research paper (pdf) published in the Journal of Experimental Psychology: Human Perception and Performance, told Quartz that because his team works closely with the Transportation Security Administration, the results will serve as feedback for airport security officers, as well as for radiology groups, who face a similar rare-target problem. More broadly, he said, the work demonstrates the potential of mobile technology to gather large quantities of crowdsourced data for academic research that’s hard to do in a laboratory.

This kind of crowdsourcing isn’t new. Take the ESP game (pdf), an academic project that challenged people to match words to images—a task computers aren’t so good at—and eventually helped Google return better search results for online images. Researchers have also been turning to Mechanical Turk, Amazon’s platform for crowdsourced work. But Mitroff says what’s new in this study is the partnership with a specific commercial game. Psychologists have a long history of using game-like interfaces to engage participants, and this experiment could encourage scientists to try existing mobile games that happen to tap into cognitive abilities, he says. The approach could, the paper suggests, be applied to memory games, go/no-go games like Whac-A-Mole, and games that require the user to spot differences in side-by-side images.