What will it take for companies to trust AI chatbots with sensitive data?

Salesforce thinks it can make AI chatbots safer with its Einstein Trust Layer

Salesforce CEO Marc Benioff speaks on a darkened stage with his hands outstretched. "It's a layered argument." Photo: Jemal Countes (Getty Images)

At a recent dinner with OpenAI’s Sam Altman, Marc Benioff, Salesforce’s CEO, got a mind-blowing demonstration. After hearing Benioff speak for just a few seconds, one of the AI developer’s large language models (LLMs) cloned his voice and produced audio of him delivering an entire speech once given by John F. Kennedy.

Benioff says he asked Altman to show him where the file with his voice was stored. But there was no file. The AI generated output by “weighting” data in the hidden layers of its neural network; there was no way to see where the data lived.

Relaying the anecdote at a June 12 analyst and press event in New York, where he revealed details of Salesforce’s new AI strategy, Benioff didn’t seem particularly concerned. But he thought his customers would be—especially those in regulated industries like banking. “They like to know exactly where that data is...and they want to audit those places,” Benioff said.

AI and the pitfalls for data privacy

If you’ve heard about teachers and school administrators freaking out about AI chatbots like OpenAI’s ChatGPT or Google’s Bard, it’s because they’re worried students will simply offload their work onto the technology.

In the corporate world, where the quest for productivity gains is never-ending, the idea of AI taking over tasks is a more attractive one. What executives, boards, and in-house legal teams are mostly freaking out about, though, is whether employees using AI chatbots will undermine data privacy mandates.

They’re right to be concerned. Maintaining data privacy—by keeping personally identifying information secure, for example—is more difficult when a chatbot invites people to share all manner of information with it, so that it might draft an email, write a report, or fill out a customer service script tailored as closely as possible to the end recipient. The reasons for sharing such information may be benign, or even admirable. But that doesn’t make it any less dangerous for a company where data privacy is essential to trust.

And that’s not the only risk for companies making use of AI. Hallucinations, bias, toxicity—as Benioff noted, “those are not societal terms; those are actual, technical explanations of things happening inside these models.”

Salesforce is pushing its trust layer

Little wonder, then, that Salesforce is making trust and data privacy a big part of its AI pitch to business customers. The company’s new “Einstein GPT Trust Layer” promises to separate sensitive customer data from the LLMs used in the creation of generative AI.
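Salesforce hasn’t spelled out the mechanics here, but the basic idea behind such a layer can be sketched in a few lines: detect and mask personally identifying information before a prompt ever leaves a company’s systems, and restore it only after the model’s response comes back. The sketch below is illustrative only; the regex patterns and the call_llm() placeholder are assumptions for the sake of the example, not a description of how the Einstein GPT Trust Layer actually works.

```python
import re

# Illustrative PII patterns; a real trust layer would use far more robust
# detection (named-entity recognition, field-level metadata, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with opaque tokens; keep the mapping locally."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values into the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

prompt = ("Draft a follow-up email to jane.doe@example.com about her order. "
          "Her phone number is 555-123-4567.")
masked_prompt, mapping = mask_pii(prompt)

# masked_prompt is what would be sent to the model; the mapping that resolves
# the tokens never leaves the company's own systems.
# response = call_llm(masked_prompt)   # hypothetical external LLM call
# final = unmask(response, mapping)
print(masked_prompt)
```

The point of the exercise is the boundary: in this kind of design, the external model only ever sees placeholder tokens, while the table that maps them back to real customer data stays inside the company’s own infrastructure.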

Salesforce has made the promise of trust a key piece of its proposition since its founding in 1999, first with the controlled sharing of information in its original customer relationship management systems, and again with the 2016 advent of its private predictive training models. Those models scoured customer data for patterns and used them to make forecasts, all without compromising the data itself.

Private generative training raises similar needs, Benioff says. But in the rush to make use of generative AI, “I think you’re going to see companies who have not taken proper precautions lose data inadvertently,” he said.

If that’s a subtle plug for Salesforce’s new AI “starter pack” (priced at $360,000 a year with an annual contract), it’s also suggestive of a future in which data breaches multiply. Only, instead of companies accidentally leaking customer data to hackers, they will be exposing it to chatbots. It’s a reminder that generative AI is new territory for everyone—Salesforce included.