NJASA
Legal Corner, December 2024
Social Media's Shadow: A Growing Concern in Schools
School administrators in New Jersey and worldwide are increasingly aware of the detrimental impact social media can have on students. While these platforms offer opportunities for connection and information sharing, they also pose significant risks to students’ mental health, academic performance, and overall well-being. From cyberbullying and online harassment to addiction and sleep deprivation, the negative consequences of excessive social media use are becoming a pressing issue for educators and parents alike. These harms to student well-being also impose real costs on school districts, which must divert resources to expand programs that support students’ mental health, prevent bullying, and increase security. In fact, in late November 2024, Australia approved a law banning social media platform use by children under the age of 16. Although the ban will not take effect for a year, Australia will become the first government in the developed world to attempt to solve the problem by banning the platforms for its youngest users. Whether such a ban will be effective is something governments and schools will be watching.[1]
In October 2023, New Jersey Attorney General Matthew J. Platkin, alongside 41 other state attorneys general, filed a lawsuit against Meta Platforms Inc. (hereinafter “Meta”)—the parent company of Facebook and Instagram—alleging that it exploited children and teens through addictive and harmful platform features. The coalition accuses Meta of knowingly prioritizing profit over the well-being of its youngest users, violating federal and state laws, including the Children’s Online Privacy Protection Act, New Jersey’s Consumer Fraud Act (hereinafter “CFA”), and a myriad of other states’ laws. That action was brought in the United States District Court for the Northern District of California.
According to the complaint, Meta not only disregarded the mental and physical health impacts on youth but actively designed features that kept them hooked, all while misleading the public about platform safety. This lawsuit, filed in federal and state courts, underscores the growing concern over social media’s role in the U.S. youth mental health crisis and signals a significant effort to hold tech giants accountable for their influence on vulnerable populations. It was subsequently consolidated with “hundreds of actions brought on behalf of children and adolescents, school districts and local government entities, and state attorneys general alleging that several social media companies designed their platforms to foster compulsive use by minors, resulting in a variety of harms.”[2] This consolidated action (hereinafter “Consolidated Suit”) now includes Meta (Facebook and Instagram), Google (YouTube), ByteDance (TikTok), and Snap (Snapchat).
Subsequently, in early October 2024, New Jersey filed a complaint against TikTok and its parent entities and subsidiaries in the New Jersey Superior Court, Chancery Division, in Essex County.[3] The 98-page complaint alleges that the Defendants violated the CFA by engaging in unconscionable and abusive commercial practices, making false promises, engaging in misrepresentations and deception, and making knowing omissions of material fact.[4] In short, New Jersey alleges that, like other social media platforms, TikTok intentionally employed features that increased excessive, compulsive, and habitual use among young users and that it knew these actions were harmful. Because this suit was just recently filed and there have been no substantive judicial decisions in this matter, this article will focus on the above-mentioned Consolidated Suit.
In the Consolidated Suit, the social media entities filed motions to dismiss the complaint, arguing that the government entities’ injuries were too attenuated for the law to provide redress. In addition, the Court acknowledged that both the First Amendment of the United States Constitution and Section 230 of the Communications Decency Act of 1996 (hereinafter “Section 230”) impose significant limitations on the plaintiffs’ theories of recovery. Specifically, Section 230 was designed to shield tech platforms from liability for third-party content posted on their sites. However, the Court noted that negligence is a common law cause of action that provides a flexible mechanism to redress harm and that the relevant states’ negligence laws could provide redress for the alleged harm caused by social media companies. The Court therefore conducted a feature-by-feature analysis to determine which features of the platforms were protected by Section 230 or the First Amendment and which were not, and ruled that some of the claims against the defendants could proceed.
The Court found that the following features implicate the protections afforded to the platforms as publishers under Section 230 or the First Amendment: failing to institute blocks on use during certain times of the day (such as during school hours or late at night); not providing a beginning or end to a user’s “feed”; publishing geolocation information for minors; recommending minors’ accounts to adult strangers; limiting content to short-form, ephemeral content and allowing private content; timing and clustering notifications of third-party content in a way that promotes addictive use; and using algorithms to promote addictive engagement. Therefore, to the extent that the claims in the lawsuits against the companies alleged damages related to these features, those parts of the claims were dismissed.
On the other hand, the Court found that Section 230 and the First Amendment did not bar actions against the platforms for their failure to implement robust age verification processes; failure to implement effective parental controls; failure to implement effective parental notifications; failure to implement opt-in restrictions on the length and frequency of use sessions; creation of barriers that make it more difficult to delete or deactivate accounts than to create them; failure to label content that has been edited or filtered; provision of filters that let users manipulate their appearance; and failure to create adequate processes for users to report suspected Child Sexual Abuse Material (CSAM). As a result, the claims related to damages arising from these features were permitted to proceed.
In so doing, the Court held that, with respect to the features that survived dismissal, the plaintiffs sufficiently alleged that “the defendants proximately caused resource expenditures related to defendants’ conduct fostering compulsive use in minors, but plaintiffs fail to allege that defendants proximately caused third-party harms flowing from physical property damage, crimes, or threats transmitted on defendants’ social media platforms.”[5] As such, claims for property damage and vandalism in schools could not proceed. However, the Court found that the plaintiffs did adequately allege that the defendants breached their duty of care and caused the plaintiffs to expend resources to assist minors.
This is an extremely complicated and esoteric decision that analyzes in detail many social media platform features, the companies’ marketing tactics, and minors’ use of the platforms. In summary, parts of the case were permitted to proceed because the Court found credible arguments that the companies’ actions fell outside the protections offered by law. However, I would caution readers that the Court’s analysis is preliminary, and the plaintiffs have not yet won any part of the case. The case will proceed through further discovery, further motions, appeals, and possibly trial if the parties do not ultimately reach a settlement. Therefore, members should expect the time needed to resolve this case, and other actions brought by the State of New Jersey, to be measured in years, not months. Unfortunately, that means no relief for school districts and members of the public will be forthcoming in the next few years unless there is a legislative solution.
NJASA members with questions about this or any other case against social media platforms should contact their NJASA attorney or board attorney for further specific advice and assistance.
[1] See Australia Approves Social Media Ban on Under-16s, BBC (Nov. 28, 2024), https://www.bbc.com/news/articles/c89vjj0lxx9o; see also Sweden Mulling Social Media Age Limit to Stop Gangs Recruiting Young People, Reuters (Dec. 9, 2024), https://shorturl.at/spIpz.
[2] In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, MDL No. 3047, Dkt. No. 601, Case No. 4:22-md-3047-YGR (N.D. Cal. Oct. 24, 2024), slip op. at 1 (hereinafter “Social Media Products Liability Litigation”). The decision can be accessed at https://www.courtlistener.com/docket/65407433/1267/in-re-social-media-adolescent-addictionpersonal-injury-products-liability/
[3] See ESX-000228-24.
[4] N.J.S.A. 56:8-2.
[5] Social Media Products Liability Litigation, slip op. at 45.