Artificial intelligence has allowed us to outsource many decisions to machines, like determining who's in a photo or translating text into another language. That convenience has usually come at a cost: We can't understand why those decisions are made, because machines don't typically process information using language, the way we do.
This inability to understand our own algorithms has launched a recent push for algorithmic accountability: the ability to easily understand what led an algorithm to make a decision in the first place. To work toward that goal, researchers at the Georgia Institute of Technology in Atlanta trained an AI to translate its decision-making process into plain English while playing Frogger.
The AI isn’t directly saying what it’s doing, but instead predicting how humans would describe its situation, according to Georgia Tech associate professor Mark Riedl, who led the project. That might sound a bit convoluted, but bear with me: If the AI were to output exactly what it was doing, the result would be long strings of numbers. So the researchers had to give it some words to use.
The team recorded and transcribed audio of humans playing Frogger, talking as they played. They then aligned that text with the game states the algorithm saw at the same moments, and taught a separate algorithm to translate what the game-playing algorithm saw into the kinds of things the humans said.
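The pairing idea can be sketched in a few lines of Python. This is a deliberately tiny stand-in: the feature vectors, phrases, and nearest-neighbor lookup below are all hypothetical illustrations, not the researchers' actual system, which trained a neural translation model on real gameplay data.

```python
# Toy sketch of the state-to-rationale idea. All data and features here are
# invented for illustration; the real system learned a neural translator.

# Each training pair: a simplified game-state feature vector and what a
# human said while in a similar state.
training_pairs = [
    ((0, 1, 0), "Waiting for the truck to pass before I hop."),
    ((1, 0, 1), "Looking for an open spot to jump to catch my breath."),
    ((1, 1, 1), "Going for the goal while the lane is clear."),
]

def rationale_for(state):
    """Return the human phrase recorded in the most similar training state."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_state, best_phrase = min(
        training_pairs, key=lambda pair: distance(pair[0], state)
    )
    return best_phrase

# A new game state gets whatever phrase a human used in the closest
# recorded situation -- words borrowed, not understanding expressed.
print(rationale_for((1, 0, 0)))
```

The point of the sketch is the same one the researchers make: the output is a human phrase retrieved for a similar situation, not a report of what the machine is "thinking."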
So, for instance, when the algorithm writes, "Looking forward to a hopping spot to jump to catch my breath," it's not actually looking forward to the hopping spot. It's just in a similar scenario, with a similar outcome, to a human who played the game, so it borrows those words to describe its action.
It's still a proof of concept, and Riedl says the system would have to be retrained from scratch to explain any other task. But at least we now get some idea of how a machine thinks.