Opinion
Why Big Tech must be held accountable to curb social media addiction in children
A slew of cases against social media giants are invoking product liability, negligence, and consumer protection theories, placing the onus on the companies to take responsibility for their platforms.
Children play online games on their mobile phones in Jakarta. / AFP

A new wave of lawsuits moving through US courts is challenging the long-standing legal shield that has protected major social media companies from liability.

Instead of focusing on harmful user-generated content, these lawsuits target the platforms’ design itself. 

Plaintiffs, including individual families and state attorneys general, argue that companies such as Meta and Google deliberately engineered features like infinite scroll, algorithmic recommendations, autoplay, and push notifications to maximise engagement, knowing that children are especially vulnerable to compulsive use. 

In California, a closely watched lawsuit alleges that the architecture of Instagram and YouTube contributed to serious mental health harms in a minor. In New Mexico, state officials accuse Meta of misrepresenting the safety of its platforms in relation to child exploitation risks.

What makes this wave distinct is its legal strategy. Rather than asking courts to police speech, plaintiffs are invoking product liability, negligence and consumer protection theories. 

They argue that platforms are not passive conduits but carefully designed attention systems, and that when those systems foreseeably harm minors, companies should be held accountable much like manufacturers in other industries. 

The outcomes could reshape how US law treats digital platforms, narrowing the practical reach of Section 230 and establishing new expectations around design responsibility, youth safeguards and corporate transparency.

For more than a decade, social media companies have described themselves as neutral platforms, open spaces where users connect, create and share. 

That framing is now under direct legal assault in US courtrooms, where a new wave of lawsuits is asking judges and juries to see these companies not as passive hosts, but as manufacturers of products deliberately engineered for compulsion.

At stake is more than financial liability. These trials could redefine how the law understands digital platforms and whether companies that design attention-maximising systems for children can be held accountable when those systems cause harm.

In Los Angeles, a closely watched case brought by a young woman identified in reporting as Kaley GM alleges that Meta’s Instagram and Google’s YouTube were designed to keep minors hooked through features such as infinite scroll, algorithmic recommendations and rapid feedback loops. 

The plaintiff’s argument goes beyond the presence of harmful content on the platform. It is that the architecture itself was built to sustain engagement in ways that predictably affect children’s mental health.

TikTok and Snap settled ahead of trial, leaving Meta and Google to defend their design decisions before a jury. 

The companies reject the claim that their products are “addictive” in a clinical sense and argue that adolescent mental health outcomes are shaped by a complex mix of social and personal factors. 

They also maintain that they provide safety tools and parental controls.


Engineering compulsion

The legal strategy emerging in these cases signals a shift away from the long shadow of Section 230, the US law that has shielded platforms from liability for user-generated content. 

Instead of arguing that platforms should be responsible for what users post, plaintiffs are focusing on how the platforms are built. This distinction is critical.

Product liability law has long recognised that manufacturers must account for foreseeable risks, especially where vulnerable users are concerned. A carmaker cannot avoid responsibility for a design flaw by arguing that drivers sometimes make poor choices. 

A toy company cannot ignore a choking hazard because parents should supervise children more closely. The question is whether the design itself creates unreasonable risk, and whether safer alternatives were feasible.

Applying that framework to social media changes the conversation. 

Platforms do not merely host content. They rank it, recommend it and deliver it through interfaces designed to minimise friction and maximise time spent. Infinite scroll removes natural stopping points. 

Algorithmic feeds learn what provokes emotional response and serve more of it. Notifications are calibrated to re-engage users at moments of vulnerability. These features are not incidental. They are the product.

Companies’ prior knowledge of harm

The plaintiffs in the California case argue that companies knew young users were especially susceptible to such reward systems. Reporting has referenced internal research examining teen behaviour and engagement patterns. 

If companies possessed evidence that certain design choices intensified compulsive use among minors, the legal question becomes sharper: were reasonable steps taken to mitigate foreseeable harm?

Meta and Google counter that the science around “addiction” is unsettled and that correlation does not equal causation. 

They argue that depression, anxiety and self-harm among teenagers predate social media and stem from multiple influences, including family, schooling and broader social pressures.

That defence has intuitive appeal. But product liability does not require a single cause. It requires a material contribution to risk.

Courts routinely apportion responsibility where several factors combine to produce harm. 

If a design significantly increases the likelihood of injury, that can be enough. The legal inquiry is not whether social media alone explains a mental health crisis, but whether the companies’ design decisions foreseeably amplify harm to minors.

A parallel case in New Mexico sharpens this accountability lens. There, state officials accuse Meta of misrepresenting the safety of its platforms in relation to child sexual exploitation risks. 

The claim is not simply that bad actors exist online, but that the company overstated the effectiveness of its protections and withheld information about known dangers.

Regulation-innovation dilemma

Together, these cases suggest a broader recalibration of how digital harm is understood. 

For years, Big Tech has argued that the internet is too dynamic and complex to regulate through traditional liability doctrines. Courts were warned that holding platforms accountable would chill innovation and undermine free expression.

Yet these lawsuits are not seeking to police speech. They are asking whether the architecture of engagement itself can be defective.

This is where the debate becomes more than technical. It becomes structural. The business model of major social media platforms depends on maximising attention. 

More time spent translates into more data, more advertising impressions and more revenue. In that context, design features that deepen engagement are not accidental byproducts. They are central to corporate growth strategies.

If engagement drives profit and minors constitute a substantial share of user bases, the incentives to capture and retain youthful attention are powerful. 

That is precisely why product liability law exists: to counterbalance market incentives when they conflict with safety.

The industry insists that it has introduced safeguards, including screen-time reminders, content-moderation tools, and teen-specific settings. 

But critics argue that these measures often serve as secondary features layered onto systems that are still optimised for immersion. 

If the core architecture prioritises endless consumption, can optional safety settings meaningfully offset that design logic?

The answer will likely hinge on what juries make of internal documents and executive testimony. In high-profile corporate liability cases of the past, from tobacco to automotive defects, internal communications played a decisive role in shaping public perception. 

If evidence shows awareness of risk without proportional mitigation, the reputational consequences could extend far beyond a single verdict.

An international wave

It would be a mistake, however, to view these trials solely through an American lens. The questions they raise resonate globally.

Across Europe, regulators are already tightening rules around online safety and algorithmic transparency. In the United Kingdom, the Online Safety Act imposes duties of care on platforms to protect children from harmful content. 

In the European Union, the Digital Services Act introduces new obligations around risk assessment and mitigation. 

In Australia, a ban on social media use by people under 16 took effect in December 2025, with enforcement resting on the platforms themselves. Similar measures are underway in Türkiye as well.

The US cases differ in form but not in substance. They reflect mounting frustration with a digital economy that treats attention as an extractable resource, even when the resource belongs to children.

If the plaintiffs succeed, the consequences could reshape design norms. Platforms may face pressure to introduce friction by default for minors, limit algorithmic intensity, reduce late-night notification patterns and provide clearer warnings about risks. 

Courts could signal that when companies possess granular knowledge about user behaviour, they bear corresponding responsibility.


If the companies prevail, they will likely portray the verdict as an affirmation that courts are ill-equipped to adjudicate complex social questions. The debate would then return to the legislatures and regulators.

Either way, the era of unquestioned platform immunity appears to be fading.

What these trials ultimately challenge is a foundational narrative of the digital age: that technology companies are merely conduits, and that users alone bear responsibility for their choices.

The plaintiffs are asserting a different story. They argue that behavioural engineering at scale is not neutral, especially when deployed on developing minds.

Children are not simply smaller adults navigating a marketplace of ideas. 

Neuroscience shows that adolescents are particularly sensitive to social validation and reward mechanisms. Designing systems that exploit those sensitivities, while claiming neutrality, is a position increasingly difficult to defend.

The broader lesson is not that social media should be dismantled. Digital platforms have facilitated connection, creativity and political mobilisation across the world. But innovation does not exempt companies from the ordinary obligations that govern other industries.

If a product is engineered in ways that foreseeably endanger its youngest users, society has long insisted that the manufacturer be held accountable.

The courtroom battles unfolding in California and New Mexico are therefore not abstract skirmishes over legal doctrine. They are the latest chapter in a long struggle to align corporate power with public welfare.

For Big Tech, the message is simple: the design of your product is your responsibility.

For lawmakers and regulators worldwide, the message may be even clearer. The age of digital exceptionalism, where platforms stand outside traditional accountability frameworks, is drawing to a close.


SOURCE: TRT World