In March, Zoom changed a small section of fine print in its terms of service, asserting all “right, title, and interest” to the data generated from user calls. The California-based videoconferencing company wants to use this data to train and improve its new artificial intelligence features.
Zoom has been rolling out its “Zoom IQ” generative AI features throughout the year, most recently launching auto-generated call summaries in June. The company has its own language model, but it also integrates those built by OpenAI, maker of the popular ChatGPT bot, and Anthropic, creator of the chatbot Claude. Announcing these partnerships in May, Zoom boasted that it uses a “federated” approach to AI.
But Zoom is now facing new backlash over this policy, backlash that surfaced over the past few days on social media. In response, Smita Hashim, Zoom’s chief product officer, wrote in a blog post on Aug. 7 that while the company retains the ability to manage its data and make changes to its systems “without questions of usage rights,” it won’t do so without user permission.
Zoom promised to be transparent about how it trains its models on “service generated data,” but still insisted that it retains the rights to use that data any way it wishes. The company has also been unclear about what kind of user-generated data it exerts these rights over. Zoom wants to have it both ways: reassuring users that they are in control while maintaining its rights in the fine print.
Critics upset with Zoom’s expansive claims on their data have already announced that they are looking for alternative platforms. One employee of Bellingcat, the investigative journalism outlet that often deals with confidential data and interviews, said they would be canceling their Zoom Pro account.
In response to the criticism, Aparna Bawa, Zoom’s chief operating officer, clarified that users do, in fact, have the final say over whether or not Zoom can use their content “for product improvement purposes.”
“This is opt-in,” Bawa wrote on Hacker News. “A new user starting to use Zoom today does not have this turned on by default.” The company “does not use audio, video, or chat content for training our models without customer consent,” she added.
Despite Zoom’s claim that user data on its platform is protected through end-to-end encryption, the Federal Trade Commission found in 2020 that it “misled users” and was “engaged in a series of deceptive and unfair practices” regarding its own data security. In 2021, the company agreed to pay $85 million to settle charges that it illegally shared user data with Google, Meta, and LinkedIn.
Zoom has also been criticized for failing to prevent Zoom-bombing, where unauthorized people gain access to an otherwise private call, and for letting troves of account credentials get sold on the dark web.