The People v. Meta: Inside New Mexico's First-of-Its-Kind Trial Over Children's Safety on Social Media

On Monday, March 23, attorneys in a Santa Fe courtroom delivered closing arguments in what has become the first state-led trial to put a social media company before a jury over allegations of child sexual exploitation and deceptive safety practices [1]. After six weeks of testimony from Meta executives, whistleblowers, psychiatric experts, and school teachers, jurors must now decide whether the company that owns Instagram, Facebook, and WhatsApp misled the public about how dangerous its platforms are for children [2].

The case, filed in 2023 by New Mexico Attorney General Raúl Torrez, accuses Meta of creating a "breeding ground for predators" while concealing what it knew about the risks [3]. If the jury finds Meta liable, potential fines could reach into the billions — and a second phase of proceedings would determine whether the company must fund remediation programs and overhaul its platform design [4].

What Meta Allegedly Knew — and Hid

The prosecution's case rests on a central claim: that Meta possessed extensive internal research documenting harms to children and chose not to disclose it.

Among the most damaging pieces of evidence are the results of Meta's own Bad Experiences and Encounters Framework (BEEF) survey, conducted in 2021 under the direction of Arturo Béjar, then a senior Meta employee. The survey of nearly 240,000 Instagram users found that 54.1% of teens aged 13–15 and 57.3% of those aged 16–17 had encountered at least one harm on the platform within the previous seven days [5]. Across all ages, 11.9% of users reported receiving unwanted sexual advances, and 6.7% were exposed to self-harm content [5].

Direct messaging emerged as the primary vector for abuse: 68.6% of users who experienced sexual harassment reported that it occurred through DMs, and 60.1% of bullying victims encountered it there as well [5]. Internal Meta research also found that Instagram made body image issues worse for one in three teen girls [6], and that researchers described the platform as a "perfect storm" that "exacerbates downward spirals" of addiction, eating disorders, and depression [7].

New Mexico prosecutors introduced internal documents estimating that 100,000 children are subjected to sexual harassment on Meta's platforms daily [3]. Testimony from former employees indicated that Meta staff proposed safety improvements that were repeatedly blocked by executives who feared those features would reduce teen engagement and user growth [8].

[Chart: Harms Reported by Instagram Users (Meta Internal BEEF Survey, 2021)]

The Undercover Investigation

A distinctive feature of this case is the undercover operation conducted by the New Mexico Attorney General's office. State investigators created social media accounts posing as children, then documented the sexual solicitations they received and tracked how Meta responded — or failed to respond — to those interactions [3][9].

This investigative approach gave prosecutors first-hand evidence of platform failures rather than relying solely on internal documents or third-party research. Teachers from New Mexico public schools also testified about classroom disruptions linked to social media, including the exchange of violent and sexually explicit images and sextortion schemes targeting their students [2][10].

Mark Zuckerberg himself appeared via video deposition, telling the jury that "safety is extremely important for the service" and referencing changes to business performance metrics made in 2017 [10].

The Legal Architecture: Consumer Fraud, Not Content Liability

The legal strategy in New Mexico represents a significant departure from earlier attempts to hold tech companies accountable. Rather than arguing that Meta should be liable for content posted by users — a claim that would run into Section 230 of the Communications Decency Act, which generally shields platforms from such liability — prosecutors built their case around consumer protection law [11].

Meta faces three counts of violating New Mexico's Unfair Practices Act, which protects consumers from deceptive or predatory business practices [10]. Two of the original counts alleged "unconscionable" trade practices; one of those was dropped during the trial's final week [10]. The core legal question is whether Meta deceived consumers by misrepresenting the safety of its platforms for children.

This framing matters because it bypasses Section 230 entirely. Prosecutors argue they are not seeking to hold Meta accountable for content on its platforms, but rather for its role in pushing out harmful content through algorithms and for failing to disclose known risks [1][11]. As legal analysts have noted, the case reframes social media as a manufactured product rather than a neutral platform — targeting "the algorithmic systems that drive engagement — the infinite scroll, the dopamine-optimized notifications, the content recommendations tuned to maximize time on site" as defective product features [11].

Specific Product Design Features Under Scrutiny

The state's case targets several concrete design choices as evidence of negligence or deception:

Recommendation algorithms: Prosecutors allege that Meta's algorithms actively surface harmful content to minors, including material related to eating disorders, self-harm, and sexual exploitation, because such content drives engagement metrics [1][11].

Direct messaging accessibility: The fact that adults can message minors through Instagram and Facebook DMs — the channel where the majority of documented harassment occurs — is presented as a design failure that Meta knew about but did not adequately address [5][9].

Age verification deficiencies: The state argues that Meta's age verification systems are easily circumvented, allowing children under 13 to create accounts and exposing them to content and contact intended for adults [4][12].

Engagement-maximizing features: Infinite scroll, push notifications, and algorithmic feeds designed to maximize time spent on the platform are characterized as addictive design elements that Meta deployed despite internal research showing their disproportionate impact on young users [11].

Meta's Defense: Imperfect but Not Deceptive

Meta's attorneys have mounted a defense on several fronts. The company argues it has spent more than a decade developing safeguards, including teen-specific account settings and parental controls, and that it actively removes harmful content [1][12].

The company acknowledges that some dangerous content gets past its safety systems but characterizes this as an unavoidable reality of moderating platforms with billions of users, not evidence of deception [1]. Meta's legal team accused prosecutors of "cherry-picking evidence" and called the state's undercover investigation "shoddy" [2].

Meta initially argued that Section 230 of the Communications Decency Act and the First Amendment shielded it from liability [12]. A judge rejected the Section 230 claim before trial, finding that the state's consumer protection claims targeted Meta's own conduct and product design rather than third-party content [13].

On the question of algorithms, Meta has argued that its recommendation systems and design features serve to publish and organize content — a function it says falls within protected editorial discretion [12].

How This Trial Compares to Earlier Accountability Efforts

The New Mexico case arrives amid a broader wave of legal and legislative action against social media companies, but it stands apart in several ways.

Frances Haugen disclosures (2021): The former Meta employee's leak of thousands of internal documents fueled congressional hearings and public outrage but did not directly produce binding legal consequences for the company [7]. New Mexico's case uses similar internal evidence — including the BEEF survey data that Haugen's disclosures helped bring to light — but channels it through a state consumer protection framework with enforceable penalties.

State age verification laws: Texas, Ohio, Arkansas, and other states have passed laws requiring social media platforms to verify users' ages or obtain parental consent [14]. Many of these laws have been blocked or struck down by federal courts on First Amendment grounds [14]. New Mexico's approach avoids this constitutional obstacle by not mandating specific platform behavior but instead punishing alleged deception about existing risks.

Federal multidistrict litigation: More than 1,400 lawsuits against Meta are consolidated in federal court, and a bellwether trial in California involving Meta and YouTube is currently in jury deliberations [1][15]. The New Mexico case is the first standalone state-led case to reach a jury, and its outcome on consumer protection grounds could provide a template for other state attorneys general.

The 2022 Meta shareholder settlement: That case addressed whether Meta's leadership misled investors about data privacy practices [16]. New Mexico's suit breaks new ground by applying a similar "what did you know and when did you know it" standard to child safety rather than financial disclosures.

Potential Penalties and Remedies

The financial exposure for Meta is substantial but uncertain. New Mexico's Unfair Practices Act allows civil penalties of up to $5,000 per violation [4]. How "violation" is defined — per user, per incident, per platform — will determine whether penalties reach millions or billions. Prosecutors have argued that the number of Meta users in New Mexico makes a figure in the billions plausible; Meta disputes that calculation methodology [10].

Beyond monetary penalties, the state is seeking injunctive relief: court orders requiring Meta to implement more effective age verification, remove known bad actors more aggressively, and modify the algorithms that surface harmful content to minors [4][12].

A second phase of the trial, to be decided by a judge rather than a jury, will address whether Meta created a "public nuisance" and should fund programs to address the harms allegedly caused to New Mexico's children [1][2].

Any adverse verdict would almost certainly be appealed, and Meta has the resources for prolonged appellate litigation. The appeals process in New Mexico state courts could extend the timeline by years before any remedies take effect.

The Ripple Effects

The implications of this verdict extend well beyond New Mexico.

For other states: A prosecution victory would provide a proven legal template — consumer protection claims targeting platform design rather than content — that other state attorneys general could replicate. A multistate coalition led by New York already filed suit against Meta in 2023 [16], and additional states would have strong incentive to pursue similar claims using New Mexico's evidentiary playbook.

For federal legislation: Congressional efforts to regulate children's online safety have repeatedly stalled. A jury verdict establishing that a major platform deceived consumers about child safety risks could provide the political momentum that legislative proposals like the Kids Online Safety Act have lacked [7].

For the industry: Legal analysts argue that the case's product liability framework — treating algorithms as defective products rather than protected editorial choices — could expose TikTok, YouTube, X, and other platforms using similar engagement-maximizing systems to comparable claims [11]. A Meta loss could trigger what one analysis described as a "liability reckoning" for the recommendation economy, driving shareholder pressure to redesign engagement systems and prompting insurance companies to reprice liability coverage [11].

For Meta specifically: A loss here would compound the pressure from the 1,400-plus consolidated federal cases. A win, conversely, could set a powerful precedent suggesting that state consumer protection claims are insufficient to hold platforms accountable for harms linked to algorithmic design — potentially insulating the industry from the most promising current avenue of state-level enforcement.

What Remains Uncertain

Several limitations in the available evidence deserve acknowledgment. The BEEF survey data, while striking, was collected in 2021 and may not reflect current conditions on Meta's platforms, which the company says have been updated with new safety features including teen accounts and parental controls [12]. The undercover investigation demonstrated that solicitations occurred, but the defense argues it does not prove that Meta was deceptive about its safety efforts as opposed to imperfect in executing them [2].

The question of causation — whether Meta's specific design choices directly caused specific harms to specific children in New Mexico — remains legally contested. Meta argues that the connection between platform features and individual outcomes cannot be proven with the specificity that liability requires [12].

The jury's decision on the consumer protection counts, and the judge's subsequent ruling on public nuisance, will together determine whether this case becomes a footnote or a turning point. Either way, the six weeks of testimony in Santa Fe have placed Meta's internal deliberations about child safety into the public record in unprecedented detail — evidence that will inform lawsuits, legislation, and public debate regardless of the verdict.

Sources

[1] "Landmark trial in New Mexico to decide whether Meta misled users about children's safety risks," abcnews.com
    Closing arguments are scheduled Monday in a landmark trial in New Mexico where Meta is accused of misleading users about how safe its platforms are for children.

[2] "Jurors Wade Through Daunting Evidence in High-Stakes Meta Trial About Social Media Risks to Children," usnews.com
    Jurors heard testimony from Meta executives, whistleblowers, psychiatric experts, and school teachers during six weeks of proceedings in Santa Fe.

[3] "New Mexico vs. Meta: State AG takes on social media giant over child predation," santafenewmexican.com
    Attorney General Raúl Torrez accused Meta of creating a breeding ground for predators and failing to disclose what it knew about harmful effects on children.

[4] "New Mexico Lawsuit Accuses Meta of Failing to Protect Children From Sexual Exploitation Online," usnews.com
    The lawsuit asks the court to impose civil penalties of up to $5,000 for each violation of the state's Unfair Practices Act, potentially reaching billions.

[5] "7 Ways Meta is Harming Kids: Findings from Meta's Internal Research," counterhate.com
    Meta's BEEF survey found 54.1% of teens aged 13–15 encountered at least one harm on Instagram in the prior week; 68.6% of sexual harassment occurred via DMs.

[6] "Meta ignored warnings on Instagram's harm to teens, whistleblower says," cnn.com
    Meta's own researchers described Instagram as a "perfect storm" that exacerbates downward spirals of addiction, eating disorders, and depression.

[7] "Meta failed to address harm to teens, whistleblower testifies as senators vow action," npr.org
    Whistleblower Arturo Béjar testified that teens had dangerous, harmful experiences on Instagram at an alarming rate based on internal survey data.

[8] "Multiple Company Whistleblowers Expose Meta for Repeatedly Covering Up Dangers to Kids," whistlebloweraid.org
    Six company whistleblowers filed disclosures stating Meta repeatedly deleted or doctored internal safety research showing kids exposed to grooming and harassment.

[9] "New Mexico case against Meta tied to child exploitation opens in Santa Fe courtroom," sourcenm.com
    State investigators created social media accounts posing as children to document sexual solicitations and Meta's response to those interactions.

[10] "Jurors wade through daunting evidence in high-stakes Meta trial about social media risks to children," abqjournal.com
    Meta faces counts of violating New Mexico's Unfair Practices Act; Zuckerberg appeared via video deposition stating safety is "extremely important."

[11] "Section 230 Shield Cracks as Meta Faces First Jury on Youth Harm," themeridiem.com
    The case reframes social media algorithms as defective products rather than protected editorial choices, potentially exposing the entire recommendation economy to liability.

[12] "New Mexico Lawsuit Accuses Meta of Failing To Protect Children From Sexual Exploitation Online," firstamendmentwatch.org
    Meta disputes the allegations, calling them sensationalized, and says it has spent more than a decade developing safeguards including teen accounts and parental controls.

[13] "Meta's Section 230 Claim Fails in Bid to Escape Kids Harm Case," news.bloomberglaw.com
    A judge rejected Meta's motion to dismiss based on Section 230, finding that consumer protection claims targeted Meta's own conduct rather than third-party content.

[14] "Social media age verification laws in the United States," en.wikipedia.org
    Multiple state age verification laws in Texas, Ohio, Arkansas, and others have been blocked or struck down by federal courts on First Amendment grounds.

[15] "Facebook Mental Health Lawsuit — March 2026 Update," robertkinglawfirm.com
    Meta faces over 1,400 similar lawsuits consolidated in federal court; a bellwether trial in California is currently in jury deliberations.

[16] "Attorney General James and Multistate Coalition Sue Meta for Harming Youth," ag.ny.gov
    A multistate coalition of attorneys general filed suit against Meta in 2023 for harming youth through addictive platform features and deceptive safety practices.