While big tech companies might not face regulation of their artificial-intelligence efforts in the US, banks trying to use AI still have to contend with reams of industry-specific rules, including laws ensuring equal treatment of customers.
According to Bank of America technology executive Hari Gopalkrishnan, that’s a problem for banks interested in using deep learning, the technology responsible for the current AI boom. That’s because the decisions made by deep-learning models can be difficult to interpret—the “why” behind the algorithm’s output is a bit of a mystery.
In banking, “[w]e’re not fans of lack of transparency and black boxes, where the answer is just ‘yes’ or ‘no,’” Gopalkrishnan said at a company tech summit. “We want to understand how the decision is made, so that we can stand behind it and say that we’re not disfavoring someone.”
Gopalkrishnan isn’t the first bank executive to make this observation. Adam Wenchel, vice president of machine learning and data innovation at Capital One, told MIT Technology Review last year that one big hurdle for banks using AI comes from laws that require an explanation whenever someone is denied a loan or credit card. Those laws, like the Equal Credit Opportunity Act, mean that banks have to prove they’re not discriminating based on traits like gender or race. With opaque AI having a history of bias against minorities, that’s a tall order.
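To see what kind of transparency those laws demand, consider a deliberately simple sketch (not any bank's actual system; every feature name and weight here is hypothetical). In a linear credit scorer, each feature's contribution to the score can be read off directly, so the factors behind a denial fall out of the model itself—exactly what an opaque deep network can't offer:

```python
# Illustrative sketch only: a linear credit scorer whose per-feature
# contributions double as the "reason codes" regulators expect.
# Weights, features, and the threshold are invented for illustration.

WEIGHTS = {
    "payment_history": 0.45,
    "credit_utilization": -0.60,
    "account_age_years": 0.05,
}
BIAS = 0.10
THRESHOLD = 0.0  # approve if score >= threshold


def score_and_explain(applicant):
    """Return (approved, contributions) for one applicant.

    Each contribution is weight * feature value, so the factors that
    drove a decision can be inspected directly.
    """
    contributions = {
        name: WEIGHTS[name] * applicant[name] for name in WEIGHTS
    }
    score = BIAS + sum(contributions.values())
    return score >= THRESHOLD, contributions


def adverse_action_reasons(contributions, top_n=2):
    """Features that pulled the score down the most, worst first."""
    negative = [(n, c) for n, c in contributions.items() if c < 0]
    return [n for n, _ in sorted(negative, key=lambda x: x[1])][:top_n]


applicant = {
    "payment_history": 0.2,
    "credit_utilization": 0.9,
    "account_age_years": 1.0,
}
approved, contribs = score_and_explain(applicant)
if not approved:
    print("Denied; main factors:", adverse_action_reasons(contribs))
```

A deep network replaces those three weights with millions of entangled parameters, so no comparably direct readout of "why" exists—which is the hurdle Wenchel describes.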
Banks can still use deep learning for some applications, like trading assets, where little explanation is required. But both Bank of America and Capital One have AI researchers working on how to make the opaque technology transparent enough for wider use.