For over a decade, the world’s most powerful tech companies have insisted on a simple idea: platforms are neutral, and users are in control.
This week, a jury in Los Angeles decisively rejected that premise. According to media reports, Meta and Google were found liable for designing social media platforms that contributed to addiction and harmed a young user’s mental health. The decision is being widely described as the first of its kind, and possibly the beginning of a much larger reckoning for Big Tech.
At the center of the trial was a now 20-year-old plaintiff, who began using YouTube as a child and later Instagram in her pre-teen years. What followed, according to the lawsuit, was a pattern that has become increasingly familiar in the digital age: compulsive scrolling, algorithmic reinforcement, and a gradual decline in mental wellbeing.
The legal argument didn’t hinge on any single video or post. Instead, it zeroed in on the mechanics behind the experience: features like infinite scroll, autoplay, and recommendation engines that keep users engaged, often longer than they intend.
In doing so, the case carved out a new legal pathway. Traditionally, tech platforms have been protected under Section 230, which shields them from liability over user-generated content. But here, the focus shifted from content to design: from what users see to how they are kept watching.
That shift may prove to be the most consequential part of the verdict.
After weeks of testimony, the jury concluded that both companies were negligent in how they designed their platforms and failed to adequately warn users about potential harms. Damages were awarded, and responsibility was split, with the larger share placed on Meta.
But beyond the financial outcome, the symbolic weight of the decision is what’s reverberating across industries. For years, concerns about social media addiction have been debated in academic papers, documentaries, and policy circles. Now, a jury has translated those concerns into legal accountability.
In effect, the court acknowledged something many users have long felt but struggled to prove: that these platforms are not just habit-forming; they may be intentionally engineered to be so.
Almost immediately, comparisons began to surface, drawing a line between today’s tech giants and the tobacco industry of decades past. Like cigarettes, social media platforms are free to access, widely consumed, and deeply embedded in daily life. And like tobacco companies once did, tech firms are now being accused of understanding the risks of their products while continuing to optimise for growth and engagement.
Critics argue that features such as endless feeds and predictive algorithms are not accidental innovations; they are deliberate design choices rooted in behavioural psychology.
The public response has been anything but uniform. For many parents and advocacy groups, the verdict feels like validation, an acknowledgment of concerns that have been raised for years about the impact of social media on young minds. There is a growing call for safer design standards, particularly for platforms used by children and teenagers.
Legal experts, meanwhile, are viewing the case through a more strategic lens. By targeting product design rather than content, the plaintiffs may have opened the door for a wave of similar lawsuits. Thousands of cases are already in the pipeline, and this verdict could shape how they unfold.
At the same time, critics worry about unintended consequences. Some argue that holding platforms liable in this way could blur the boundaries of responsibility, potentially affecting free speech protections or placing unrealistic expectations on tech companies to control user behaviour.
It’s a tension that sits at the heart of the digital age: where does user choice end, and where does platform responsibility begin?
According to BBC reports, both Meta and Google pushed back against the ruling, making it clear they intend to appeal. Meta maintained that the issue is far more complex than the case suggests, stating: “Teen mental health is profoundly complex and cannot be linked to a single app. We will continue to defend ourselves vigorously as every case is different, and we remain confident in our record of protecting teens online.”
Google, meanwhile, distanced YouTube from the broader social media debate. A spokesperson said: “This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site.”
However, for many advocates and families, the verdict signals a turning point. Speaking to the BBC, Ellen Roome, who is currently suing TikTok following the death of her son, described it as long overdue, an “enough was enough” moment.
Yet the broader implications of the case may be harder to dismiss. Even as companies defend themselves in court, they are increasingly under pressure from regulators, investors, and the public to rethink how their platforms operate. Over the past few years, both firms have introduced features aimed at improving digital wellbeing. But critics argue these measures are incremental, not structural.
For years, the conversation around social media addiction has lived in opinion pieces, research papers, and personal anecdotes. Now, it has entered the legal system. The outcome of this case may still evolve through appeals. But one thing is already clear: the era of unquestioned platform neutrality is ending.