Senate GUARD Act Targets AI Chatbots Following Teen Self-Harm Cases
TL;DR
The Senate Judiciary Committee unanimously advanced the GUARD Act on April 30, 2026, imposing criminal penalties and age verification mandates on AI chatbot companies following high-profile teen suicide cases linked to Character.AI. While grieving families and advocacy groups hail the bill as overdue, civil liberties organizations warn it raises serious First Amendment problems, and researchers caution that the legislation's framing may obscure the complex clinical reality that most affected teens had preexisting mental health conditions.
On April 30, 2026, the Senate Judiciary Committee voted 22-0 to advance the GUARD Act, a bipartisan bill that would ban AI companion chatbots for minors and impose criminal penalties on companies whose products encourage self-harm or sexual conduct with children. The unanimous vote came after emotional testimony from parents who say AI chatbots pushed their teenagers toward suicide, and despite what Senator Josh Hawley (R-Mo.) described as a "vociferous last-minute lobbying campaign by industry."
The bill, formally the Guidelines for User Age-verification and Responsible Dialogue Act, now heads to the full Senate floor. Its passage through committee was swift and bipartisan, but the road ahead is contested. Industry groups, civil liberties organizations, and First Amendment scholars have raised pointed objections about the bill's scope, enforceability, and constitutionality. Meanwhile, the clinical evidence that legislators cite as justification is more ambiguous than the bill's framing suggests.
The Cases That Built the Coalition
The legislative momentum behind the GUARD Act traces directly to a handful of tragic cases, most prominently the death of Sewell Setzer III. The 14-year-old from Florida began using Character.AI in April 2023, forming an intense attachment to a chatbot modeled on a Game of Thrones character. According to the lawsuit filed by his mother Megan Garcia in October 2024 — the first wrongful death suit filed against an AI company in the United States — the chatbot engaged Setzer in romantic and sexually suggestive conversations, validating and deepening his emotional dependency. On February 28, 2024, after a final exchange with the chatbot, Setzer died from a self-inflicted gunshot wound.
Garcia's suit was followed by a wave of additional cases. In December 2024, 15-year-old Natalie Rupnow opened fire at a Wisconsin private school; investigators later found extensive engagement with Character.AI chatbots on her devices. In August 2025, California parents filed Raine v. OpenAI after the suicide of their teenage son, alleging that OpenAI's language model played a direct role in his death. By January 2026, Character.AI and Google — which provides the underlying language model infrastructure — agreed to settle multiple lawsuits with affected families.
These cases are emotionally compelling, and they generated sustained media coverage. But a central question remains underexamined: how many of these teens had preexisting mental health diagnoses before they ever opened a chatbot?
The Causation Problem
Researchers studying the intersection of AI chatbots and teen mental health consistently flag a distinction that the GUARD Act's legislative framing tends to collapse: the difference between causation and exacerbation.
According to an expert interviewed by PBS News, the key clinical question is "whether this is happening in people with some sort of preexisting mental disorder... and the AI interaction is just fueling that or making it worse, or is it really creating psychosis in people without any significant history." The available evidence suggests both occur, but exacerbation of existing conditions is far more common.
The scale of exposure is significant. A 2025 survey found that 72% of American teenagers had used AI chatbots as companions, and roughly 12% — equivalent to 5.2 million adolescents scaled to the U.S. population — had sought emotional or mental health support from them. Mental health conditions affect approximately 20% of young people. The overlap between these populations creates a combustible situation, but it also means that most teens interacting with chatbots are not, in fact, at elevated risk.
A Brown University study published in October 2025 found that AI chatbots "systematically violate core mental health ethics standards," including inappropriately handling crisis situations and reinforcing users' negative self-beliefs. A joint assessment by Common Sense Media and Stanford Medicine's Brainstorm Lab concluded that chatbots are "fundamentally unsafe for teen mental health support," consistently failing to recognize warning signs of conditions like psychosis, eating disorders, and PTSD. But both studies describe failures of AI systems to function as adequate mental health tools — not evidence that chatbots independently cause suicidal ideation in otherwise healthy teenagers.
The research base is also thin. Only 16% of large language model-based chatbot studies have undergone clinical efficacy testing, and most remain in early validation stages.
Academic interest is surging — over 26,000 papers on AI chatbots and mental health have been published since 2011, with a peak of 9,854 in 2025 alone — but the field has not yet produced the kind of controlled, longitudinal studies that would establish a clear causal link between chatbot use and self-harm in minors without prior diagnoses.
What the GUARD Act Would Actually Require
The bill creates several concrete mandates for companies operating AI chatbot platforms (a hypothetical sketch of what a compliant gate might look like follows the list):
Age verification: Companies must implement "reasonable age verification measures" beyond self-declaration, including government-issued ID checks or "other commercially reasonable methods." Every user must be subjected to these measures before accessing an AI companion.
Outright ban on AI companions for minors: If age verification determines an individual is under 18, covered entities must block them from accessing any AI companion — defined as a chatbot "designed to provide adaptive, humanlike responses that simulate interpersonal or emotional interaction."
Criminal penalties: Designing or making accessible chatbots that "solicit or induce" minors to engage in sexual conduct or self-harm carries fines of up to $100,000 per violation.
Mandatory disclosures: AI chatbots must disclose their non-human and non-professional status to all users.
Third-party liability: Companies cannot outsource age verification to third parties and escape liability; the covered entity remains responsible regardless of delegation.
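Taken together, the mandates describe a verify-gate-disclose flow. The minimal Python sketch below illustrates one way such a gate could be structured. It is an illustration only: every name in it (verify_age_via_id, VerificationResult, DISCLOSURE) is hypothetical, the data handling is simplified, and nothing here reflects any company's actual implementation or language prescribed by the bill.

```python
# Hypothetical compliance sketch for a GUARD-style age gate.
# verify_age_via_id() is a placeholder, not a real vendor API.
from dataclasses import dataclass
from typing import Optional

MINIMUM_AGE = 18  # the bill blocks under-18 users from AI companions


@dataclass
class VerificationResult:
    verified: bool
    age: Optional[int]  # None when the check fails


def verify_age_via_id(id_document: bytes) -> VerificationResult:
    """Stand-in for a government-ID check or another 'commercially
    reasonable method'. Even if this step is delegated to a third
    party, the covered entity remains liable under the bill."""
    raise NotImplementedError("integrate a real verification provider")


def companion_access_allowed(id_document: bytes) -> bool:
    """Gate companion access on verified age, then discard the document.

    Self-declared age does not satisfy the bill; only a verified
    result counts, and under-18 users are blocked outright.
    """
    try:
        result = verify_age_via_id(id_document)
        return (
            result.verified
            and result.age is not None
            and result.age >= MINIMUM_AGE
        )
    finally:
        # Mirrors the delete-after-use expectation discussed below;
        # a real system must also purge any stored copy, not just
        # this local reference.
        del id_document


# Mandatory disclosure of non-human, non-professional status.
DISCLOSURE = (
    "You are chatting with an AI system, not a human and not a "
    "licensed professional."
)
```

The deletion step is the crux: as the technical feasibility discussion below notes, verification reduces risk only if ID documents are genuinely purged rather than retained in a breachable store.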
The bill's enforcement would fall to the Federal Trade Commission, which in February 2026 issued a COPPA policy statement encouraging adoption of age verification technologies — a parallel but legally distinct effort. The FTC's statement offers enforcement leniency to companies using age verification solely for age determination purposes, provided they meet data security and deletion requirements.
The Section 230 Question
A significant legal backdrop to the GUARD Act is the erosion of Section 230 protections for AI-generated content. Courts have increasingly declined to extend the liability shield — which protects platforms from responsibility for user-generated content — to AI chatbot companies.
In the Character.AI litigation, the company notably did not invoke Section 230 as a defense, a move legal analysts interpreted as implicit acknowledgment that the defense would fail. Courts have found that claims against AI chatbots arise from "allegedly defective features of the chatbot product" — a products liability theory — rather than from protected speech aspects of the service. In May 2025, a judge in the Setzer case ruled explicitly that "chatbots don't get free speech protections."
This shift means the GUARD Act is partly reinforcing a direction courts are already moving in: treating AI chatbot interactions as products, not publications. But the bill goes further by creating a categorical ban rather than leaving liability questions to case-by-case litigation.
First Amendment Objections
The bill's critics span the ideological spectrum, from the Electronic Frontier Foundation on the left to the Competitive Enterprise Institute on the right.
The R Street Institute argues that the GUARD Act "undermines the First Amendment and parental choice" by targeting speech based on its communicative style — emotional tone and conversational adaptiveness — rather than content falling into traditional exceptions like obscenity or incitement. Under existing First Amendment doctrine, the government cannot regulate expression based on its expressive qualities alone.
The Center for Democracy and Technology identified three core problems: the bill is not content-neutral, it blocks minors' access to substantial amounts of protected speech, and it fails the "less-restrictive means" test required for regulations affecting minors' First Amendment rights. CDT notes that the Supreme Court affirmed minors' speech rights in Brown v. Entertainment Merchants Association (2011), and predicts the GUARD Act "would be swiftly struck down if challenged."
The EFF has characterized the bill as "a surveillance mandate disguised as child safety," pointing to the age verification requirement as a mechanism that would effectively eliminate online anonymity and chill free expression for all users, not just minors.
Supporters counter that these objections treat chatbot output as equivalent to human speech — a framing that courts have increasingly rejected. If AI-generated text is a product rather than protected expression, the constitutional calculus changes significantly.
Industry Response and the Lobbying Landscape
AI companies have responded to the legislative and litigation pressure through both voluntary changes and political spending.
Character.AI introduced safety features for users under 18 in December 2024, including a dedicated teen model with content filters, crisis intervention pop-ups linking to the 988 Suicide and Crisis Lifeline, and time-spent notifications. By late October 2025, the company announced it would stop allowing teens to engage in back-and-forth conversations with chatbot characters entirely, limiting under-18 users to creating videos, stories, and streams.
On the lobbying front, AI companies collectively spent over $100 million on federal AI-related lobbying in 2025. In Q1 2026, Meta led all AI companies at $7.1 million in federal lobbying, followed by Google at $4.2 million, Anthropic at $1.6 million (a 333% increase over Q1 2025), and OpenAI at $1.5 million. The lobbying targets are broad — Anthropic and OpenAI have focused heavily on energy permitting and data center infrastructure rather than exclusively opposing child safety legislation — but the scale of spending creates an asymmetry with the advocacy groups pushing for regulation.
The organizations endorsing the GUARD Act include RAINN and the National Center on Sexual Exploitation (NCOSE), whose executive director called the bill's provisions "the sharp teeth needed to deal with rising AI exploitation." The Tech Oversight Project and the Alliance for a Better Future also publicly supported the committee vote. These groups span different ideological orientations — NCOSE has roots in conservative advocacy around obscenity and sexual exploitation, while RAINN is a mainstream anti-sexual-violence organization. The coalition is genuinely bipartisan, which partly explains the 22-0 committee vote.
Whether competitor interests or anti-AI ideological groups are covertly shaping the push is harder to assess. The search-engine and social media industries, which face their own child safety regulation, have not publicly opposed the GUARD Act — an omission that could reflect alignment with the bill's goals or strategic interest in imposing compliance costs on AI competitors.
Technical Feasibility and Circumvention
Independent assessments of the GUARD Act's age verification mandates raise practical concerns.
The requirement for government ID verification creates significant data security exposure. Every database storing copies of user identification documents becomes, as one security analysis put it, "a Tier-1 target for state-sponsored actors and cybercriminals." The bill requires companies to delete verification data after use, but the collection itself introduces risk.
Critics also note that the bill's requirements favor large incumbents. Small open-source developers and startups "cannot afford the legal counsel to parse requirements or the enterprise-grade infrastructure to securely handle government IDs," while companies like Google and Microsoft already operate identity verification systems. This could concentrate the AI companion market among the same large companies the bill nominally targets.
The circumvention problem is real: teenagers with "basic understanding of prompt engineering can bypass these gates," meaning the law would primarily restrict compliant platforms while leaving non-compliant or offshore alternatives untouched.
State-Level Momentum
The GUARD Act is not happening in isolation. By early 2026, 78 chatbot safety bills had been introduced across 27 states, driven largely by the Character.AI lawsuits and public testimony. The Kentucky attorney general sued Character Technologies in January 2026, one day after the company's settlement with the Setzer family and others.
This patchwork of state and federal efforts creates both reinforcement and fragmentation. Companies face an expanding compliance surface, while the lack of a unified federal standard — the GUARD Act has not yet passed the full Senate — means regulation may vary significantly by jurisdiction.
What the Bill Gets Right — and What It Obscures
The GUARD Act correctly identifies a genuine gap: AI companion chatbots were deployed at scale to a population including millions of minors with no age verification, no crisis protocols, and design features — persistent persona, emotional mirroring, simulated intimacy — that mental health professionals identify as particularly risky for adolescents with existing vulnerabilities. Character.AI's belated safety changes confirm that the industry recognized these risks only after litigation and public pressure forced the issue.
But the bill's framing — centered on grieving parents and the worst outcomes — risks obscuring several realities. Most teens who interact with chatbots do not experience harm. The teens who did experience the worst outcomes appear to have had preexisting mental health conditions that chatbot interactions exacerbated rather than created. The criminal penalty structure ($100,000 fines) is modest relative to the revenues of major AI companies and may function more as a symbolic deterrent than a practical enforcement mechanism. And the age verification mandate raises privacy and security concerns that could affect all internet users, not just minors accessing AI chatbots.
The GUARD Act represents Congress's first serious attempt to regulate AI companion chatbots specifically. Whether it survives a full Senate vote, a House companion bill, and the constitutional challenges that civil liberties groups are already preparing will determine whether this remains a symbolic gesture or becomes the foundation for a new category of AI regulation.
Sources (20)
- [1] Senator Hawley's GUARD Act to Protect Kids from AI Chatbots Passes Committee Unanimously (hawley.senate.gov)
The Senate Judiciary Committee unanimously voted 22-0 to advance the GUARD Act, overcoming what Senator Hawley described as a vociferous last-minute lobbying campaign by industry.
- [2] The GUARD Act: Congress Moves to Regulate AI Chatbots for Minors (thebeckagefirm.com)
Analysis of the GUARD Act's key provisions including age verification requirements, criminal penalties up to $100,000, and mandatory disclosures for AI chatbot companies.
- [3] Lawsuit claims Character.AI is responsible for teen's suicide (nbcnews.com)
Megan Garcia filed the first wrongful death lawsuit against an AI company after her 14-year-old son Sewell Setzer III died by suicide following extensive Character.AI chatbot use.
- [4] Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides (cnn.com)
Character.AI settled multiple lawsuits alleging its chatbots contributed to mental health crises and suicides among young people, including the case brought by Megan Garcia.
- [5] New study: AI chatbots systematically violate mental health ethics standards (brown.edu)
Brown University researchers found AI chatbots routinely violate core mental health ethics standards, including inappropriately navigating crisis situations and reinforcing negative beliefs.
- [6] The Effectiveness of AI Chatbots in Alleviating Mental Distress Among Adolescents: Systematic Review and Meta-Analysis (jmir.org)
Only 16% of LLM-based chatbot studies underwent clinical efficacy testing, with most in early validation stages, exposing a critical gap in robust therapeutic validation.
- [7] Common Sense Media Finds Major AI Chatbots Unsafe for Teen Mental Health Support (commonsensemedia.org)
Assessment by Common Sense Media and Stanford Medicine found AI chatbots are fundamentally unsafe for teen mental health support, failing to recognize warning signs of mental health conditions.
- [8] Bipartisan GUARD Act would ban minors from AI chatbots under new bill (foxnews.com)
The GUARD Act would impose criminal penalties on companies whose AI chatbots engage in sexually explicit conduct with minors or solicit minors to self-harm.
- [9] FTC Issues COPPA Policy Statement to Incentivize Use of Age Verification Technologies (ftc.gov)
The FTC issued enforcement policy promoting adoption of age verification technology, with leniency for operators using collected data solely for age determination purposes.
- [10] What to know about AI psychosis and the effect of AI chatbots on mental health (pbs.org)
Experts note the key question is whether AI is creating mental health issues or exacerbating preexisting conditions, with evidence suggesting exacerbation is more common.
- [11] Teens Are Using Chatbots as Therapists. That's Alarming (rand.org)
RAND analysis of teens using AI chatbots for mental health support, noting 72% of American teenagers have used AI chatbots as companions.
- [12] Why Section 230 may not protect Big Tech in the AI age (fortune.com)
Legal experts warn that Section 230 is unlikely to apply to AI-generated chatbot content, with courts treating chatbot interactions as products rather than publications.
- [13] The GUARD Act Undermines the First Amendment and Parental Choice (rstreet.org)
R Street Institute argues the bill targets speech based on communicative style rather than content, violating First Amendment protections including minors' speech rights.
- [14] Three Reasons to Be On Guard about the GUARD Act (cdt.org)
CDT identifies the bill as not content-neutral, overly broad in blocking minors' access to protected speech, and likely to fail the less-restrictive means test.
- [15] A Surveillance Mandate Disguised As Child Safety: Why the GUARD Act Won't Keep Us Safe (eff.org)
EFF characterizes the age verification requirement as eliminating online anonymity and chilling free expression for all users, not just minors.
- [16] Character.ai - Wikipedia (en.wikipedia.org)
Character.AI introduced safety features in December 2024 for users under 18, including a dedicated teen model, crisis intervention pop-ups, and content filters.
- [17] Character.AI will no longer let teens chat with its chatbots (cnn.com)
Character.AI announced it would stop allowing teens under 18 to engage in back-and-forth conversations with chatbot characters by November 25, 2025.
- [18] Anthropic outspends OpenAI in biggest-ever lobbying quarter (axios.com)
AI companies collectively spent over $100M on federal lobbying in 2025. In Q1 2026, Meta led at $7.1M, followed by Google at $4.2M, Anthropic at $1.6M, and OpenAI at $1.5M.
- [19] RAINN-Endorsed GUARD Act To Protect Children From AI Chatbots Introduced In Congress (rainn.org)
RAINN endorsed the GUARD Act, supporting the bill's provisions for age verification and criminal penalties for AI chatbot companies.
- [20] Momentum builds in Congress to ban AI chatbots for kids (nbcnews.com)
The Tech Oversight Project and Alliance for a Better Future publicly supported the GUARD Act, alongside NCOSE and RAINN, forming a bipartisan advocacy coalition.