OpenAI CEO Apologizes for Failing to Report Mass Shooting Suspect's Account to Police
TL;DR
OpenAI CEO Sam Altman issued a public apology to the community of Tumbler Ridge, British Columbia, for failing to alert law enforcement after the company flagged and banned the ChatGPT account of Jesse Van Rootselaar in June 2025 — eight months before the 18-year-old killed eight people in a school shooting. The case has exposed a regulatory vacuum around AI companies' obligations to report potential threats and triggered both a lawsuit from a victim's family and a separate criminal investigation in Florida over a similar incident.
On February 10, 2026, Jesse Van Rootselaar, an 18-year-old resident of Tumbler Ridge, British Columbia, shot and killed her mother and 11-year-old brother at their home, then walked to Tumbler Ridge Secondary School and opened fire, killing six more people and injuring twenty-seven others before taking her own life. Eight months earlier, OpenAI's automated abuse detection tools and human reviewers had flagged Van Rootselaar's ChatGPT account for conversations describing scenarios involving gun violence. The company banned the account in June 2025. It did not call the police.
On April 24, 2026, OpenAI CEO Sam Altman published a letter in the Tumbler Ridge community newspaper. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote, acknowledging "the harm and irreversible loss your community has suffered."
The apology arrived ten months after the ban and two and a half months after the massacre. Between those dates lies a series of internal decisions, legal ambiguities, and institutional failures that have forced a reckoning — not just for OpenAI, but for the entire AI industry — over what happens when a company's systems detect a credible threat and the company chooses not to act.
What OpenAI's Systems Found
Van Rootselaar created a ChatGPT account and, over the course of several days in June 2025, described scenarios involving gun violence to the chatbot. OpenAI's automated monitoring tools — designed to detect potential misuse, including violent content — flagged her account. Twelve human monitors reviewed the activity and identified it as indicating an "imminent risk of serious harm to others," according to reporting by the Wall Street Journal and Global News.
Internal staff recommended notifying Canadian law enforcement. But OpenAI leadership overruled that recommendation. The company's position, as stated by a spokesperson, was that Van Rootselaar's account activity "did not meet the higher threshold required" for referral to law enforcement, which demanded evidence of an "imminent and credible risk of serious physical harm to others."
The account was banned. Van Rootselaar reportedly spoke to ChatGPT about feelings of isolation and an increasing obsession with violence. The lawsuit filed by the family of one victim alleges that the chatbot had taken on the role of "counsellor, pseudo-therapist, trusted confidante, friend, and ally."
The distinction drawn by OpenAI leadership — between a risk flagged by twelve human reviewers as "imminent" and the company's "higher threshold" for police referral — has become the central factual dispute in the case.
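The mechanics of that two-tier structure are easiest to see in pseudocode. The Python sketch below is a hypothetical illustration of how a flag-review-refer pipeline with two thresholds might be wired; the names (RiskLevel, ReviewedAccount, decide_action), the risk levels, and the rule that referral requires a named target and timeframe are all assumptions for illustration, not OpenAI's actual system.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    CONCERNING = 1             # policy violation, but no threat indicators
    IMMINENT_RISK = 2          # reviewers judge serious harm to others likely
    IMMINENT_AND_CREDIBLE = 3  # specific target, means, and timeframe identified

@dataclass
class ReviewedAccount:
    account_id: str
    reviewer_rating: RiskLevel  # consensus rating from human reviewers
    has_specific_target: bool
    has_stated_timeframe: bool

def decide_action(acct: ReviewedAccount) -> str:
    """Two-tier policy: banning triggers at a lower bar than a police referral."""
    if acct.reviewer_rating.value >= RiskLevel.IMMINENT_RISK.value:
        # Referral demands the stricter "imminent and credible" standard,
        # modeled here (hypothetically) as requiring a target and a timeframe.
        if acct.has_specific_target and acct.has_stated_timeframe:
            return "ban_and_refer_to_law_enforcement"
        return "ban_only"  # the June 2025 outcome, per the reporting above
    if acct.reviewer_rating is RiskLevel.CONCERNING:
        return "warn_or_restrict"
    return "no_action"
```

Under this hypothetical rule set, an account rated "imminent risk" by reviewers but lacking a named target or stated date resolves to ban_only, which is, in outline, the decision now being litigated.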
The Timeline: Who Knew, When, and What Happened
The sequence of events spans nearly a year:
June 2025: Van Rootselaar's ChatGPT account is flagged by automated tools and reviewed by twelve human monitors. Staff recommend notifying Canadian law enforcement. OpenAI leadership decides the activity does not meet the threshold for referral. The account is banned.
February 10, 2026: Van Rootselaar carries out the shooting in Tumbler Ridge, killing eight people and injuring twenty-seven.
February 2026 (days after the shooting): OpenAI contacts the RCMP, shares information on the banned account, and meets with Canadian officials.
March 2026: The mother of Maya Gebala, a 12-year-old wounded in the shooting who remains hospitalized, files a lawsuit against OpenAI alleging the company "knew or ought to have known the shooter was using the program to plan a mass casualty event."
April 24, 2026: Altman publishes his apology letter.
Between the ban in June 2025 and the shooting in February 2026, no public reporting has identified any further internal review of the decision, any re-examination of Van Rootselaar's account, or any attempt to assess whether the threat had materialized or evolved. The available evidence suggests that once the account was banned, the matter was considered closed.
The Legal Vacuum
Neither the United States nor Canada has a federal law requiring AI companies to report threats of violence made on their platforms to law enforcement. The obligation that does exist — mandatory reporting of child sexual abuse material to the National Center for Missing and Exploited Children (NCMEC) in the U.S. — applies narrowly and does not extend to threats of physical violence.
OpenAI's own law enforcement policy, updated in December 2025, states that the company may disclose user data "if required to do so to comply with a legal obligation, or in the good faith belief that such action is necessary to comply with a legal obligation." The operative word is "may." The policy also permits voluntary disclosures when OpenAI "believes that such disclosure is necessary to prevent an emergency involving death or serious physical injury to a person." Again: permits, not requires.
This stands in contrast to the "duty to warn" obligations that apply to mental health professionals in most North American jurisdictions. The principle, originating from the landmark 1976 Tarasoff v. Regents of the University of California decision, requires therapists who determine a patient poses a credible threat to an identifiable person to breach confidentiality and warn the potential victim or law enforcement.
Legal scholars have noted the asymmetry. Professor Teresa Scassa of the University of Ottawa told CBC News that "under Canadian law, there's no clear duty for private tech companies to report potential threats detected through user interactions — unlike, say, therapists or teachers who are mandated reporters." A Lawfare analysis published after the Tumbler Ridge shooting argued that "developers can build systems that sense danger but face no legal or regulatory imperative to intervene," and that frameworks like the EU AI Act and NIST's AI Risk Management Framework "remain silent on affirmative duties to warn or protect when an AI system identifies credible threats to life."
California's SB 53, one of the few state-level laws to address AI safety reporting, requires frontier AI developers to disclose "critical safety incidents" to the California Office of Emergency Services within fifteen days, or within twenty-four hours if there is an "imminent public threat." But the law targets model-level failures, not individual user threat assessments.
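For a sense of how SB 53's two clocks interact, here is a minimal sketch; the function name is invented, and the logic is simplified to the two deadlines described above rather than the statute's full trigger and tolling rules.

```python
from datetime import datetime, timedelta

def sb53_reporting_deadline(discovered_at: datetime,
                            imminent_public_threat: bool) -> datetime:
    """Disclosure deadline under SB 53's two-track scheme: 24 hours when
    there is an imminent public threat, otherwise 15 days. (Simplified.)"""
    window = timedelta(hours=24) if imminent_public_threat else timedelta(days=15)
    return discovered_at + window

# An incident discovered March 1 with no imminent threat is reportable
# by March 16; with an imminent public threat, by March 2.
print(sb53_reporting_deadline(datetime(2026, 3, 1), False))  # 2026-03-16 00:00:00
print(sb53_reporting_deadline(datetime(2026, 3, 1), True))   # 2026-03-02 00:00:00
```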
How OpenAI Compares to Other Tech Companies
OpenAI is a relatively new entrant to the business of managing law enforcement requests at scale. Its transparency reports show the company received approximately 320 government requests for user data in the first half of 2025. By comparison, Google received roughly 85,000, Meta roughly 79,000, and Microsoft approximately 43,000 over equivalent periods.
The gap reflects differences both in user base and in the maturity of these companies' law enforcement liaison operations. Google, Meta, and Microsoft each maintain dedicated teams — Meta's Law Enforcement Response Team (LERT), for instance — that process thousands of requests per reporting period. OpenAI, despite having over 400 million weekly active users by early 2026, is still building those institutional structures.
The comparison also highlights a distinction in the type of threat detection involved. Social media platforms like Meta and Google have spent over a decade developing systems for proactive detection and reporting of violent threats, terrorism content, and child exploitation material. These systems are imperfect, but they represent accumulated institutional knowledge about when and how to escalate to law enforcement. OpenAI's trust-and-safety apparatus is younger and was built primarily around preventing misuse of AI model outputs, not around monitoring for real-world threat indicators in conversational data.
The Civil Liberties Argument
Not everyone agrees that OpenAI should have called the police. The strongest version of the opposing argument holds that a policy of mass-reporting flagged users to law enforcement would create serious civil liberties risks.
The Electronic Frontier Foundation warned in a December 2025 analysis — published before the Tumbler Ridge shooting — that law enforcement demands for AI chatbot user data are "already increasing" and that "without privacy protections, users would be chilled in their use of AI systems for learning, expression, and seeking help." The EFF argued that AI companies must "resist unlawful bulk surveillance requests" and protect the warrant requirement for content data.
The false-positive problem is substantial. Millions of users discuss violence in fiction, academic research, journalism, therapy-adjacent conversations, and other contexts that are legal and protected. A policy of reporting every flagged conversation to police would generate a volume of referrals that law enforcement could not meaningfully investigate — and would disproportionately affect users from marginalized communities who are already subject to heightened surveillance.
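The arithmetic behind that concern is worth spelling out. The sketch below runs the standard base-rate calculation with entirely hypothetical inputs (the prevalence, sensitivity, and false-positive rate are illustrative assumptions, not measured figures) to show how a seemingly accurate detector still produces referrals that are overwhelmingly false alarms at this scale.

```python
# Base-rate arithmetic with hypothetical numbers: even an accurate
# classifier, applied at this scale, yields mostly false positives.
weekly_users        = 400_000_000  # order of magnitude cited above
true_threat_rate    = 1e-6         # assume 1 in a million users is a genuine threat
sensitivity         = 0.99         # assume 99% of genuine threats get flagged
false_positive_rate = 0.001        # assume 0.1% of benign users get flagged

true_threats = weekly_users * true_threat_rate                      # 400
caught       = true_threats * sensitivity                           # ~396
false_alarms = (weekly_users - true_threats) * false_positive_rate  # ~400,000
precision    = caught / (caught + false_alarms)

print(f"{caught + false_alarms:,.0f} referrals, "
      f"of which only {precision:.2%} are genuine threats")
```

On these assumptions, roughly 400,000 accounts would be referred per week, and only about one in a thousand referrals would involve a real threat — a queue no police force could meaningfully investigate.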
The academic literature on this question has grown rapidly. Research publications on AI safety and law enforcement reached 16,182 papers in 2025, up from 1,597 in 2019 — a tenfold increase in six years.
An article in The Conversation argued that OpenAI's post-shooting safety pledges "aren't AI regulation — they're surveillance," warning that lowering reporting thresholds without judicial oversight could normalize corporate-to-police data pipelines that bypass Fourth Amendment (or, in Canada, Section 8 Charter) protections.
The steelman case, then, is that OpenAI was operating in an environment with no clear legal obligation to report, where the cost of over-reporting is not zero, and where the company made a judgment call that twelve reviewers' assessment of "imminent risk" did not meet the standard for police referral. Whether that judgment call was defensible is a question the courts will now answer.
The Victims' Perspective
For the families of the eight people killed in Tumbler Ridge, the civil liberties argument offers little comfort.
The mother of Maya Gebala, 12, who survived the shooting but remains hospitalized, filed suit against OpenAI in March 2026, alleging that the company's failure to alert authorities to the shooter's account — which it had flagged and banned eight months before the massacre — constituted negligence. The lawsuit alleges that, based on the content of Van Rootselaar's prompts, OpenAI "knew or ought to have known the shooter was using the program to plan a mass casualty event."
British Columbia Premier David Eby responded to Altman's apology by saying "the apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge." The District of Tumbler Ridge issued a statement acknowledging that Altman's letter "may evoke a range of emotions" and stating the community is "committed to supporting those impacted and ensuring that care, respect, and accountability remain at the forefront."
Whether earlier intervention could have changed the outcome is a question that cannot be answered with certainty. The eight-month gap between the account ban (June 2025) and the shooting (February 2026) is long enough that a referral to the RCMP in June could plausibly have led to a welfare check, a risk assessment, or at minimum an awareness within local law enforcement that a resident was expressing violent ideation online. Whether any of those steps would have prevented the attack is unknowable. But the gap is also long enough that the "imminent" label applied by OpenAI's reviewers in June appears, in retrospect, to have described not an imminent danger but a slow-developing threat that ultimately materialized.
The Florida Case
The Tumbler Ridge shooting is not an isolated incident. On April 21, 2026 — three days before Altman's apology — Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI over a separate case.
Phoenix Ikner, 21, a Florida State University student, is accused of shooting and killing two people and wounding five others near the FSU student union in April 2025. Uthmeier said at a press conference that Ikner "consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people."
OpenAI responded that the chatbot "provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity." Uthmeier's office has issued subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm, dating back to March 2024.
The Florida investigation represents the first known criminal probe targeting an AI company for the content of chatbot conversations in a mass shooting case.
What Has Changed
Since the Tumbler Ridge shooting, OpenAI has announced several changes to its trust-and-safety operations:
- Lowered reporting threshold: The company has adopted "more flexible criteria" for escalating accounts to law enforcement, moving away from the strict "imminent and credible risk" standard that prevented the June 2025 referral.
- Direct RCMP contacts: OpenAI has established direct points of contact with the Royal Canadian Mounted Police for threat reporting.
- Retroactive review: The company committed to reviewing previously flagged accounts that were banned but not referred to law enforcement.
- Distress-redirect protocols: New systems to redirect users expressing distress toward crisis resources (a minimal sketch of what such a layer might look like follows below).
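By way of illustration, a distress-redirect layer in its simplest form might look like the sketch below. The keyword list and resource table are stand-ins for the trained classifiers and localized directories a production system would use; nothing here reflects OpenAI's actual implementation.

```python
CRISIS_RESOURCES = {
    # Hypothetical lookup; a real system would localize by region and language.
    "CA": "Talk Suicide Canada: 1-833-456-4566",
    "US": "988 Suicide & Crisis Lifeline: call or text 988",
}

DISTRESS_MARKERS = ("hurt myself", "no reason to live", "end it all")

def redirect_if_distressed(user_message: str, region: str, reply: str) -> str:
    """Append a crisis resource to the model's reply when the user's message
    contains simple distress markers. Production systems use trained
    classifiers rather than keyword lists; this is only a sketch."""
    lowered = user_message.lower()
    if any(marker in lowered for marker in DISTRESS_MARKERS):
        resource = CRISIS_RESOURCES.get(region, CRISIS_RESOURCES["US"])
        return f"{reply}\n\nIf you're struggling, support is available: {resource}"
    return reply
```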
Canada's AI minister stated publicly that these commitments "do not go far enough." A joint task force between Innovation, Science and Economic Development Canada and Public Safety Canada is reviewing AI safety reporting protocols, with preliminary recommendations expected by summer 2026.
All of OpenAI's changes are voluntary. Canada has no law requiring AI companies to report identified threats. The gap between what companies choose to do and what the law requires them to do remains the central policy question raised by the Tumbler Ridge case.
The Question That Remains
The Tumbler Ridge shooting has forced a confrontation with a problem that has no clean answer. AI companies are not therapists, and applying Tarasoff-style duty-to-warn obligations to them raises genuine questions about scale, false positives, and surveillance. But OpenAI's own systems — automated tools and twelve human reviewers — identified a threat and classified it as indicating "imminent risk of serious harm to others." The company then applied a higher internal threshold and decided not to act. Eight months later, eight people were dead.
The legal system will determine whether OpenAI's decision constituted negligence. The policy system will determine whether AI companies should be subject to mandatory reporting requirements. But the moral question — whether a company that identifies what its own staff call an imminent risk has an obligation to pick up the phone — is one that Sam Altman's apology letter has already answered, if only implicitly. The company now says it should have called. The question is what happens next time.
Sources (28)
- [1] Mass shooting in Tumbler Ridge, B.C., leaves 8 dead, including 6 children, and a nation in mourning (cbc.ca)
  Coverage of the February 10, 2026 mass shooting at Tumbler Ridge Secondary School in British Columbia, where eight people were killed and twenty-seven injured.
- [2] Jesse Van Rootselaar: What we know about the Canada shooting suspect and the victims of the attack (cnn.com)
  Jesse Van Rootselaar, 18, killed her mother and half-brother at their home before opening fire at Tumbler Ridge Secondary School.
- [3] Tumbler Ridge shooter's ChatGPT activity flagged internally 7 months before tragedy (globalnews.ca)
  OpenAI's automated tools and 12 human monitors flagged the activity as indicating imminent risk; internal staff recommended notifying Canadian law enforcement.
- [4] OpenAI CEO Sam Altman 'deeply sorry' for failing to alert law enforcement to Canada school shooter's ChatGPT account (cbsnews.com)
  Altman wrote 'I am deeply sorry that we did not alert law enforcement to the account that was banned in June,' acknowledging 'the harm and irreversible loss.'
- [5] OpenAI's Sam Altman apologizes to Canadian community after failing to flag mass shooter's conversations with its AI chatbot (cnn.com)
  Sam Altman published a letter of apology in the Tumbler Ridge community newspaper on April 24, 2026.
- [6] OpenAI debated calling police about suspected Canadian shooter's chats (techcrunch.com)
  Staff at OpenAI debated whether to reach out to Canadian law enforcement over the behavior, but leadership overruled the staff recommendation.
- [7] OpenAI CEO apologizes to Tumbler Ridge community (techcrunch.com)
  OpenAI determined the account did not pose an imminent and credible risk of serious physical harm, failing to meet its internal threshold for referral.
- [8] What We Know about the Online Life of the Tumbler Ridge Shooter (thetyee.ca)
  Van Rootselaar spoke to ChatGPT about feelings of isolation and an increasing obsession with violence.
- [9] Family of Tumbler Ridge shooting victim suing OpenAI (cbc.ca)
  The mother of Maya Gebala, 12, alleges OpenAI knew or ought to have known the shooter was using ChatGPT to plan a mass casualty event.
- [10] OpenAI had banned account of Tumbler Ridge, B.C., shooter months before tragedy (cbc.ca)
  After the shooting, OpenAI contacted the RCMP, shared information on both accounts, and announced strengthened safety protocols.
- [11] When should AI companies alert police? What the Tumbler Ridge tragedy reveals about regulating AI (cbc.ca)
  Professor Teresa Scassa: 'Under Canadian law, there's no clear duty for private tech companies to report potential threats detected through user interactions.'
- [12] What is California's AI safety law? (brookings.edu)
  California's SB 53 requires frontier AI developers to disclose critical safety incidents to the state within fifteen days, or within twenty-four hours for imminent threats.
- [13] OpenAI Government User Data Request Policy v.2025-12 (openai.com)
  OpenAI's policy permits voluntary disclosure when the company believes it is necessary to prevent an emergency involving death or serious physical injury.
- [14] Tarasoff Meets the AI Age (lawfaremedia.org)
  Developers can build systems that sense danger but face no legal or regulatory imperative to intervene. The EU AI Act and NIST frameworks remain silent on duties to warn.
- [15] OpenAI Report on Government Requests for User Data January - June 2025 (openai.com)
  OpenAI's transparency report covering government data requests received in the first half of 2025.
- [16] Google Transparency Report (google.com)
  Google publishes regular transparency reports detailing government requests for user data across all jurisdictions.
- [17] Government Requests for User Data — Meta Transparency Center (meta.com)
  Meta's Law Enforcement Response Team reviews every government request individually for compliance with applicable law and Meta's policies.
- [18] Law Enforcement Request Report — Microsoft (microsoft.com)
  Microsoft publishes biannual reports on the number of legal demands for customer data received from governments worldwide.
- [19] AI Chatbot Companies Should Protect Your Conversations From Bulk Surveillance (eff.org)
  Without privacy protections, users would be chilled in their use of AI systems for learning, expression, and seeking help.
- [20] Will ChatGPT Revolutionize Surveillance? (aclu.org)
  ACLU analysis of the surveillance implications of AI chatbot systems and the risks of over-reporting to law enforcement.
- [21] OpenAlex — AI safety law enforcement publication trends (openalex.org)
  Research publications on AI safety and law enforcement topics reached 16,182 papers in 2025, up from 1,597 in 2019.
- [22] OpenAI's safety pledges in the wake of Tumbler Ridge aren't AI regulation — they're surveillance (theconversation.com)
  Lowering reporting thresholds without judicial oversight could normalize corporate-to-police data pipelines that bypass constitutional protections.
- [23] OpenAI's Sam Altman writes apology to community of Tumbler Ridge (cbc.ca)
  B.C. Premier David Eby said 'the apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge.'
- [24] Florida AG launches criminal investigation into ChatGPT over FSU shooting (npr.org)
  Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI over the alleged role of ChatGPT in the FSU shooting.
- [25] Florida AG alleges ChatGPT advised shooter who killed two at FSU (washingtonpost.com)
  Uthmeier said accused gunman Phoenix Ikner consulted ChatGPT for advice including what type of gun to use and what time to go to campus.
- [26] Florida attorney general launches criminal investigation into ChatGPT maker OpenAI after deadly FSU shooting (cnn.com)
  OpenAI responded that ChatGPT provided factual responses with information available across public internet sources and did not encourage harmful activity.
- [27] Sam Altman apologises after OpenAI chose not to report ChatGPT user who carried out Tumbler Ridge school shooting (thenextweb.com)
  Altman secured commitments including reporting threats to the RCMP, retroactive review of flagged accounts, and distress-redirect protocols.
- [28] AI minister says OpenAI still not doing enough in wake of B.C. shooting, will meet CEO Altman (cbc.ca)
  Canada's AI minister said OpenAI's commitments 'do not go far enough.' A joint task force is reviewing AI safety reporting protocols.