FICO is letting anyone download anonymized credit applications for AI research

FICO is trying to make home financing more accurate.
Image: Reuters/Neil Hall

FICO, the credit-scoring firm central to Americans’ financial lives, wants to add some AI power to credit decisions without sacrificing transparency.

The company announced yesterday (Dec. 7) the Explainable Machine Learning Challenge, a call for AI developers to take a portion of the company’s data and build a new, more transparent algorithm to predict whether customers will be able to repay a line of credit of up to $150,000. The challenge is a partnership with Google, the University of Oxford, Imperial College London, MIT, UC Berkeley, and UC Irvine.

Representatives from FICO stress that the challenge isn’t meant to replace the algorithm the company uses, or to imply it’s broken, but to generate new ideas for how decisions made by deep learning algorithms can be explained. The firm would then, ostensibly, adopt any worthwhile ideas from the competition.

To help developers, FICO is releasing an entirely new trove of data, called the Home Equity Line of Credit (HELOC) dataset. It includes real, anonymized HELOC applications requesting lines of credit ranging from $5,000 to $150,000, as well as ancillary data like available collateral. From this data, FICO is asking developers to generate a risk score: What’s the probability someone will fail to make a payment over 24 months?
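
To make the task concrete, here is a minimal sketch, in Python with scikit-learn, of what producing that kind of risk score looks like. Everything here is a stand-in: the feature names are hypothetical and the data is synthetic, since the article doesn’t describe the HELOC dataset’s actual columns. Entrants would train on the real anonymized applications instead.

```python
# Sketch of the risk-scoring task: estimate the probability that an applicant
# misses a payment within 24 months. Feature names are hypothetical and the
# data is synthetic; the real HELOC dataset's columns aren't described here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000

# Synthetic stand-in for anonymized HELOC applications.
X = np.column_stack([
    rng.uniform(5_000, 150_000, n),  # requested line of credit ($)
    rng.uniform(0, 500_000, n),      # available collateral ($), ancillary data
    rng.integers(1, 360, n),         # months since last credit line opened
])
y = rng.integers(0, 2, n)            # 1 = missed a payment within 24 months

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# The "risk score": probability of missing a payment over 24 months.
risk = model.predict_proba(X_test)[:, 1]
print(risk[:5])
```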

While the field of artificial intelligence is booming, companies like FICO haven’t been able to tap into its advances, due to regulatory requirements and customer expectations. The traditional algorithms FICO uses provide an explanation for every credit score or decision: you were denied, say, because not enough time has passed since you opened your last line of credit. Deep learning, the popular flavor of AI that has allowed Facebook, Google, Amazon, and Microsoft to offer virtual personal assistants and image-recognition tools, offers greater accuracy on many tasks, but at the expense of explainability. This is the “black box” problem of deep learning: AI researchers test their algorithms and know they work, but aren’t 100% sure why one decision is made over another.
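
As a toy illustration of why a traditional linear scorecard is explainable in a way a deep network isn’t (this is not FICO’s actual method, and every name and number below is made up), each feature’s contribution to the score can be read off directly, so the feature that hurt the applicant most becomes the stated reason for a denial:

```python
# Toy reason-code illustration for a linear risk model. Not FICO's method;
# feature names, weights, and values are invented for demonstration.
import numpy as np

features = ["months_since_last_credit_line", "debt_to_income", "num_late_payments"]
weights = np.array([-0.02, 3.0, 0.8])   # learned weights in a linear risk model
applicant = np.array([2.0, 0.45, 1.0])  # this applicant's feature values
baseline = np.array([48.0, 0.30, 0.2])  # an average applicant, for comparison

# Each feature's contribution to this applicant's risk relative to average.
contributions = weights * (applicant - baseline)
reason = features[int(np.argmax(contributions))]
print(f"Top reason for elevated risk: {reason}")
# -> months_since_last_credit_line, i.e. "not enough time since your last line of credit"
```

A deep network offers no comparably direct decomposition of its output, which is the gap the challenge asks developers to close.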

“The black box nature of machine learning algorithms means that they are currently neither interpretable nor explainable,” FICO wrote in a blog announcing the competition. “Without explanations, these algorithms cannot meet regulatory requirements, and thus cannot be adopted by financial institutions.”

Details on the competition are still vague. It’s not clear whether there is a prize for AI developers’ work, or what the challenge’s time frame is.

Update: This post was updated to include additional comments from FICO representatives about the aims of the competition.