LinkedIn is facing legal trouble over allegations that it mishandled user data. A lawsuit filed in a California federal court accuses the platform of sharing the private messages of Premium users with third-party companies to train AI models without proper consent.
The complaint, brought on behalf of millions of LinkedIn Premium users, states that the platform introduced a privacy setting in August 2023. This setting allegedly opted users into data sharing by default, allowing external companies to use personal information, including private messages, for AI training purposes. The lawsuit claims LinkedIn then tried to conceal the practice by quietly updating its privacy policy a month later to state that user data could be disclosed for AI-related purposes.
A LinkedIn spokesperson has strongly denied the allegations, calling them false and baseless. Nevertheless, the lawsuit highlights concerns about how the platform handles sensitive information and its transparency regarding privacy policies.
What Changes Were Made to Privacy Settings?
The legal filing points to subtle yet impactful adjustments LinkedIn made to its privacy settings and user agreements. The platform reportedly added a feature in its privacy settings that allowed users to opt out of sharing their data. However, the lawsuit claims this option was not properly communicated, and users were automatically enrolled in the data-sharing programme unless they manually opted out.
Adding to the controversy, LinkedIn’s FAQ section allegedly stated that while users could choose to stop sharing data, opting out would not reverse the use of information already shared or used in AI training. This, the lawsuit argues, shows that users lacked genuine control over their data and that the changes were designed to minimise public backlash.
Critics suggest that these updates were less genuine policy adjustments than an attempt to legitimise actions that may have already violated user trust. The lawsuit describes these measures as part of a pattern aimed at covering up data-sharing practices.
What Are The Lawsuit’s Demands?
The lawsuit seeks to hold LinkedIn accountable for allegedly violating user privacy and its contractual agreements. It requests $1,000 per user under the US federal Stored Communications Act, which protects the privacy of electronic communications, and demands compensation for breach of contract and violations of California’s unfair competition law.
The legal filing represents LinkedIn Premium users who sent or received InMail messages and whose private data may have been disclosed for AI training. The proposed class-action suit could involve millions of users, as LinkedIn’s Premium membership is a popular choice among professionals. In 2023, the platform reported $1.7 billion in revenue from its Premium subscriptions.
The lawsuit also claims that LinkedIn deliberately broke its promises to protect user privacy. This accusation centres on the platform’s alleged shift from using personal data solely to improve its own services to allowing external parties to exploit that information.
What Could This Mean For LinkedIn And Its Users?
LinkedIn’s global user base exceeds 1 billion, with nearly a quarter of those users located in the United States. The allegations, if proven, could harm the platform’s reputation and intensify scrutiny of how technology companies handle personal data.
For Premium users, the lawsuit could mark a step towards holding platforms accountable for unclear or deceptive practices. Privacy advocates argue that such cases underline the need for stronger regulations to protect individuals from unauthorised data sharing.
LinkedIn, for its part, insists that these claims are without merit. In an email to users, the company stated that data sharing for AI purposes was not enabled in the UK, the European Economic Area, or Switzerland.