This story incorporates reporting from Business Insider, Windows Report, newsbytesapp.com, Computerworld and cnbctv18.
LinkedIn has been sued by its Premium customers for allegedly disclosing personal information to train AI models without their consent. The lawsuit, filed Tuesday in the U.S. District Court for the Northern District of California, seeks class-action status on behalf of millions of LinkedIn Premium users.
The suit alleges that in August 2024 LinkedIn introduced a privacy setting that automatically enrolled users in a data-sharing program aimed at training AI models. The plaintiffs argue that this change was made without adequate user consent and was disguised as a privacy policy update on Sept. 18, allowing the platform to access and use private messages and other user data for AI development.
According to the lawsuit, Premium customers' confidential information, particularly messages exchanged through the platform's InMail feature, was shared with third-party entities to assist in AI development. The filing alleges that LinkedIn's actions were not only intentional but were also concealed from users, contravening the company's original promise to use personal information solely for platform enhancements.
LinkedIn, which is owned by Microsoft, has called the allegations baseless. A company spokesperson dismissed the accusations as unfounded, saying LinkedIn adheres to strict privacy standards to protect user data.
The plaintiffs are seeking unspecified financial damages and have indicated that, if the suit succeeds, each class member could receive $1,000 in compensation. The legal challenge reflects a broader trend of increasing scrutiny of how tech companies handle user data, especially as generative AI tools spread across sectors such as finance and retail.
The lawsuit also raises significant questions about user consent and transparency. It puts a spotlight on the industry’s practices of leveraging vast datasets, often compiled through seemingly innocuous user interactions, to enhance AI capabilities.
Recent years have seen growing pressure on major tech firms to ensure transparency and obtain explicit consent from users before using their data for purposes beyond the immediate scope of service. As AI continues to integrate more deeply into corporate strategies, balancing technological advancement with ethical considerations around data privacy remains a critical challenge.
Should the court rule against LinkedIn, the decision could set a precedent for how user consent for data usage must be obtained. It would also underscore the need for tech companies to clearly communicate privacy policies and offer explicit opt-in mechanisms for data usage.
Quartz Intelligence Newsroom uses generative artificial intelligence to report on business trends. This is the first phase of an experimental new version of reporting. While we strive for accuracy and timeliness, due to the experimental nature of this technology we cannot guarantee that we’ll always be successful in that regard. If you see errors in this article, please let us know at qi@qz.com.