Companies increasingly rely on artificial intelligence to automate crucial tasks. Machines with brains work alongside humans in warehouses, recommend who should get credit, triage patients seeking care, and analyze dizzying quantities of financial data.
Lately, corporate boards have begun to worry about the ethical ramifications of turning so much power over to the machines. A study of 2,737 executives released this month by consulting firm Deloitte found that a majority of those who used AI in their businesses reported “major” or “extreme” concerns about ethical risks.
“Even 18 months ago, ethics was not as much a part of the conversation as it is today,” said Beena Ammanath, who leads the AI Institute at Deloitte. Before that, she said, “the conversation would be, ‘How do we get more value from AI and monetize it?’ Now ethics is coming up earlier and earlier in the discussion.”
Some of this newfound concern can be attributed to a string of high-profile AI failures that have mired prominent companies in scandal. Within the last few years, self-driving cars from Uber and Tesla have crashed and killed drivers and a pedestrian; Amazon scrapped an AI resume screener that was biased against women; Microsoft debuted a Twitter bot that began spewing profane hate speech within 24 hours; and IBM abandoned its facial recognition business, citing concerns about surveillance, racial profiling, and human rights violations.
Steven Mills, who leads Boston Consulting Group’s AI efforts, said this stream of negative headlines has seeped into the corporate psyche: “It has created an environment in which people on boards are saying, ‘Wait a minute, we’re using AI. Could something like this happen to us?’”
These anxieties are starting to show up in surveys of C-suite executives in charge of their firms’ AI efforts. The Deloitte survey of AI executives found that more than half have concerns about ethics, liability, and the consequences of using personal data without consent. A similar report from analytics firm FICO found that 60% of AI executives reported feeling top-down pressure from their boards to beef up their efforts on ethics, compared with just 26% who felt bottom-up pressure from their customers.
But the mounting pressure hasn’t fully translated into robust efforts to create responsible AI. The same Deloitte survey found that just a third of AI executives were creating ethics policies or oversight boards to address their concerns. Fewer than half of the executives in the FICO survey said they had created an ethical AI framework, diversified their development teams, or taken steps to combat bias in their model-building processes.
“Interest is increasing, but we’re starting from a surprisingly low baseline,” said McKinsey technology consultant Michael Chui.
Some industries are further along than others. In finance and healthcare, for example, executives have been more careful to ensure that they can explain their algorithms’ decisions to comply with longstanding regulations. And some companies, like Microsoft, Google, and Salesforce, have publicly woven AI ethics into their brand identities. But most firms are just starting to confront these challenges.
In the meantime, Deloitte’s survey found that 56% of companies said they were slowing their adoption of AI tools because of the risks involved. But that doesn’t mean businesses are turning their backs on AI. “I can see a shifting of emphasis in the near term toward less fraught areas like manufacturing, and a more deliberate and slower pace toward HR, for example, as they navigate some of these issues,” said BCG’s Mills.
Anand Rao, a PwC consultant who has worked on AI since 1985, points out that the technology has seen its share of boom-and-bust hype cycles. “No one wants to see another cycle of AI winter, and that’s probably one reason why there’s been an increased focus on managing the risks of AI,” he said. “Hopefully this time around, we can stave off too much of a bubble, and also too much of a deflation after the bubble.”