Suspect in Sam Altman Firebombing Cited Fears That AI Would Destroy Humanity
TL;DR
A 20-year-old man who identified online as a "Butlerian Jihadist" and wrote about AI-driven human extinction threw a Molotov cocktail at OpenAI CEO Sam Altman's San Francisco home on April 10, 2026, marking a sharp escalation in anti-AI activism. The attack, which caused no injuries, has ignited debate over the rhetorical responsibilities of the AI safety movement, the security of tech executives, and whether legitimate criticism of artificial intelligence will be chilled by association with violence.
At 3:40 a.m. on April 10, 2026, a Molotov cocktail struck the metal gate of Sam Altman's home in San Francisco's Russian Hill neighborhood. The homemade incendiary device bounced off, starting a small fire at the exterior gate but causing no structural damage and no injuries. About an hour later, a man matching the suspect's description appeared outside OpenAI's headquarters in the Mission Bay district and threatened to "burn down the building." San Francisco police arrested 20-year-old Daniel Alejandro Moreno-Gama at the scene.
The attack was not a random act. According to communications researcher Nirit Weiss-Blatt, Moreno-Gama operated on Discord under the handle "Butlerian Jihadist" — a reference to Frank Herbert's Dune, in which humanity wages war against thinking machines. He was an active participant in the PauseAI Discord server, where in early December 2025 he wrote: "We are close to midnight, it's time to actually act." A moderator warned him that calls for violence would result in a ban. On his Substack, he published six lengthy posts between January and March 2026, including one titled "A Eulogy for Man," which warned of humanity's extinction through artificial intelligence.
This was not an isolated incident in the emerging landscape of anti-AI extremism. It was the most visible one yet — and it has forced an uncomfortable reckoning across the AI industry, the safety research community, and law enforcement.
The Charges
Moreno-Gama was booked into San Francisco County Jail on charges of attempted murder, arson, criminal threats, two counts of possession of an incendiary device, and two counts of possessing a destructive device with intent to injure. He is being held without bail.
As of this writing, federal prosecutors have not filed domestic terrorism charges. Under current U.S. law, there is no standalone federal domestic terrorism statute that would apply here — the legal framework relies on underlying offenses like arson and attempted murder, with terrorism enhancements applied at sentencing if political or ideological motivation is established. Whether Moreno-Gama's anti-AI ideology qualifies as a political motive under California or federal sentencing guidelines remains an open question that will likely be tested in court.
The Writings: Mainstream Ideas, Extreme Conclusions
Moreno-Gama's online output drew heavily from the mainstream AI safety discourse. The "Butlerian Jihad" framing behind his handle has circulated widely in AI-skeptic circles — neuroscientist Erik Hoel published a widely read Substack essay titled "We Need a Butlerian Jihad Against AI," and Compact Magazine ran a similar piece. The metaphor has become common shorthand among those who believe advanced AI poses civilizational risk.
His writings referenced arguments that are standard fare in credentialed AI safety research. In May 2023, more than 500 prominent researchers and industry leaders — including Geoffrey Hinton, Yoshua Bengio, and the CEOs of OpenAI, DeepMind, and Anthropic — signed a statement from the Center for AI Safety declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Two months earlier, an open letter calling on AI labs to pause training systems more powerful than GPT-4 had attracted tens of thousands of signatures, including from Elon Musk and Bengio.
The academic literature on AI existential risk has exploded in recent years, with over 6,600 papers published in 2025 alone, up from roughly 870 in 2020. Moreno-Gama's writings did not emerge from a fringe vacuum. They represented a distorted but internally coherent reading of arguments advanced by some of the most respected figures in computer science — pushed to a conclusion that none of those researchers endorse.
This is the central tension the attack has exposed. The gap between "AI extinction risk is real and urgent" — a position held by Turing Award winners — and "therefore physical violence against AI executives is justified" is enormous. But as the Moreno-Gama case demonstrates, some individuals are crossing that gap.
A Pattern Emerging
The attack on Altman's home did not occur in isolation. In late November 2025, Sam Kirchner, a co-founder of the Oakland-based group StopAI, disappeared after assaulting another member who refused to let him purchase a weapon with organization funds. Kirchner had made statements suggesting he intended to use the weapon "against employees of companies pursuing artificial superintelligence." StopAI publicly reported his disappearance and alerted police. As of April 2026, Kirchner's whereabouts remain unknown.
Earlier, in late 2025, OpenAI had locked down its San Francisco offices following a separate threat alert linked to anti-AI activists.
These incidents follow a broader pattern of escalating threats against corporate executives. After the December 2024 assassination of UnitedHealthcare CEO Brian Thompson in New York City, the U.S. Department of Homeland Security warned that "individuals mobilized by economic grievances are using the murder of a health insurance CEO as inspiration for threats and attack plotting." The Altman attack suggests that anti-AI ideology may represent a new vector within this trend.
Executive Protection in the AI Era
The security environment for tech executives has changed sharply since the Thompson killing. According to Allied Universal's 2025 World Security Report, threats against high-profile executives — especially in the tech sector — are surging, driven by social unrest, misinformation, and digital radicalization. Companies reported a 73.5% increase in security measures for executives between 2020 and 2024.
Corporate spending on executive protection across the S&P 500 has nearly quadrupled since 2020, with major banks including Wells Fargo, Capital One, American Express, and Goldman Sachs all increasing CEO security budgets in 2025. Companies are prioritizing travel security (54%), physical security (53%), cybersecurity (39%), and residential security (38%).
OpenAI itself had been building out its security infrastructure, with job postings for risk analysts tasked with identifying physical security threats and supporting executive protection programs. Whether the company's security posture was adequate to prevent the Altman attack — which occurred at a private residence, not an office — is a question the company has not publicly addressed.
The SF Standard reported that the attack "ignited new worries among leaders of local tech companies, who are leery that growing fears about artificial intelligence could manifest into more threats or even violence against their executives and businesses."
The AI Safety Community Responds
Within hours of the attack, StopAI — the more radical of the two prominent anti-AI groups — issued a statement declaring "we do not condone any violence whatsoever," while reiterating that "we continue to hope the AI industry stops the development of frontier AI systems in the interest of public safety and the preservation of humanity." PauseAI removed Moreno-Gama from its Discord server.
PauseAI US also moved to distance itself from StopAI, stating that "PauseAI does not work with StopAI and has not since StopAI was founded," and emphasizing its commitment to nonviolence and legal protest.
The condemnations were swift, but they left a harder question unanswered: to what extent does the "existential risk" framing — promoted not just by activist groups but by mainstream AI labs, including OpenAI itself — contribute to the kind of catastrophic thinking that can push vulnerable individuals toward violence?
Yoshua Bengio, one of the most prominent signatories of AI risk statements, has argued that the risks of advanced AI are real and that dismissing them is irresponsible. But the pathway from "this technology could end civilization" to "I must personally act with violence to prevent it" is one that the safety research community has not grappled with publicly, beyond generic calls for peaceful advocacy.
Altman's Response: Acknowledging Fear, Urging De-escalation
Altman published a blog post on the evening of April 11, sharing a photo of his family and writing that the attack left him unable to sleep. He acknowledged that "people's fear of AI is valid" and said he sees an urgent need for a "society-wide response" to AI's threats, including new policies to manage what he expects will be a difficult economic transition.
He also pointed to the timing: the attack came days after the publication of a lengthy New Yorker investigation by Ronan Farrow and Andrew Marantz, which drew on interviews with more than 100 people and described Altman as having "a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart." Altman called the article "incendiary" and suggested that its publication "at a time of great anxiety about AI" may have made things "more dangerous."
He called for de-escalation: "We should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally." He also welcomed "good-faith criticism and debate."
Critics noted an inherent tension in Altman's position. OpenAI's own messaging has for years emphasized the transformative and potentially dangerous nature of the technology it builds — Altman himself has said that if AI development goes wrong, "it goes really wrong" — while simultaneously pushing forward with development. The company's for-profit conversion, which has drawn legal challenges from Elon Musk and scrutiny from state attorneys general, has heightened perceptions that safety rhetoric serves as cover for commercial ambition.
The Chilling Effect Question
Perhaps the most consequential downstream effect of the attack is political. Several AI policy researchers have expressed concern that the incident could be used — by AI companies, their lobbyists, or sympathetic legislators — to reframe AI critics and safety advocates as dangerous.
This framing has a precedent. Environmental activism saw a similar dynamic: after high-profile acts of eco-terrorism in the 1990s and 2000s, the "eco-terrorist" label was applied broadly to peaceful environmental groups, and the FBI for several years ranked eco-terrorism and animal rights extremism as the top domestic terrorism threat. Legislation like the Animal Enterprise Terrorism Act expanded criminal penalties in ways that critics argued chilled legitimate protest.
If anti-AI ideology is classified as an extremist threat, the consequences for policy debate could be significant. Groups like PauseAI, which operate within the law and advocate for regulatory action through lobbying and peaceful protest, could find their message tainted by association with violent actors. Researchers who publish on AI existential risk could face reputational pressure to soften their conclusions.
Altman's blog post, while calling for de-escalation, also drew a connection between critical media coverage and the attack on his home — a link that, if adopted broadly, could cast investigative journalism about AI companies as reckless provocation.
A New Radicalization Pathway
The Moreno-Gama case does not fit neatly into established categories of lone-actor violence. He was young — 20 years old — and appeared to have radicalized online through a combination of AI safety Discord communities and Substack writing. His handle, drawn from science fiction, and his essay titles suggest someone who had constructed an elaborate ideological framework around AI risk.
This pattern shares features with other forms of online radicalization: immersion in communities organized around a perceived existential threat, escalation of rhetoric within those communities, and eventual action taken outside the group's sanctioned boundaries. The December warning from a PauseAI moderator, who told Moreno-Gama that calls for violence would result in a ban, suggests the community recognized the trajectory but lacked the tools — or the will — to intervene beyond a Discord moderation action.
City Journal, in a December 2025 article about the earlier Sam Kirchner case, argued that "StopAI and the broader fringes of the anti-AI movement provide ideological gloss to abstract visions of extreme violence." Whether that assessment applies to the broader PauseAI movement, which has maintained a consistent commitment to nonviolence, is a separate and contested question.
What is clear is that anti-AI ideology has now produced at least two individuals who crossed from online rhetoric to real-world threats or violence within a six-month period. Whether this constitutes a trend or an aberration will depend on what happens next — and on how the AI safety community, law enforcement, and the technology industry choose to respond.
What Comes Next
The legal proceedings against Moreno-Gama will establish important precedents for how the justice system treats anti-AI violence. If prosecutors pursue terrorism enhancements, it will signal that anti-AI ideology is being categorized alongside other forms of domestic extremism. If they treat it as standard arson and attempted murder, it may reflect a judgment that the ideological dimension is incidental.
The AI safety community faces its own reckoning. The arguments that "AI could end humanity" are not going away — they are supported by serious research and credentialed scientists. But the gap between academic risk assessment and public communication has become a matter of physical safety. How researchers and advocates talk about catastrophic risk, and what guardrails they build around that rhetoric, will shape whether anti-AI extremism remains marginal or grows.
For Altman and other AI executives, the attack marks the end of an era in which the biggest threats to their companies were regulatory or competitive. The physical safety of the people building the world's most powerful AI systems is now a live concern — and the industry's security infrastructure is still catching up.
Sources (26)
- [1] Suspect arrested after incendiary device thrown at OpenAI CEO Sam Altman's home (cnn.com)
A man threw a Molotov cocktail at the home of OpenAI CEO Sam Altman at 3:45 a.m. Friday, with the homemade bomb bouncing off the house and causing no damage.
- [2] Man arrested after Sam Altman's house hit with Molotov cocktail, OpenAI headquarters threatened (cnbc.com)
Daniel Alejandro Moreno-Gama, 20, allegedly threw a Molotov cocktail at Altman's home and later threatened to burn down OpenAI headquarters.
- [3] Suspect in Molotov cocktail attack on Sam Altman's home identified (sfstandard.com)
Police arrested 20-year-old Daniel Moreno-Gama after he showed up outside OpenAI's headquarters and threatened to burn the building.
- [4] Suspect in Molotov cocktail attack on Sam Altman's San Francisco home identified (nbcbayarea.com)
Daniel Alejandro Moreno-Gama was identified as the suspect in the early morning attack on the OpenAI CEO's Russian Hill home.
- [5] Molotov suspect who attacked Sam Altman's home was likely a Pause AI follower with AI extinction fears (the-decoder.com)
On Discord, he used the name 'Butlerian Jihadist' and wrote 'A Eulogy for Man' warning of AI-driven extinction. He was an active PauseAI member who wrote: 'We are close to midnight, it's time to actually act.'
- [6] Suspect In Sam Altman Molotov Cocktail Attack Charged With Attempted Murder (sfist.com)
Moreno-Gama was charged with attempted murder, arson, criminal threats, two counts of possession of an incendiary device, and two counts of possessing a destructive device with intent to injure.
- [7] OpenAI CEO Sam Altman's House Targeted In Molotov Attack, Suspect Held Without Bail (goodreturns.in)
Daniel Moreno-Gama is being held without bail following the firebomb attack on Altman's residence.
- [8] We need a Butlerian Jihad against AI (theintrinsicperspective.com)
Neuroscientist Erik Hoel's widely read essay arguing for a 'Butlerian Jihad' against AI development, drawing on Dune's anti-machine mythology.
- [9] We Must Declare Jihad Against A.I. (compactmag.com)
Compact Magazine piece using the Dune-derived 'Butlerian Jihad' framing to argue against advanced AI development.
- [10] AI Extinction Statement Press Release (safe.ai)
Statement signed by over 500 AI researchers and industry leaders: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.'
- [11] AI Is as Risky as Pandemics and Nuclear War, Top CEOs Say (time.com)
Open letter calling on AI labs to pause training systems more powerful than GPT-4, signed by tens of thousands including Elon Musk and Yoshua Bengio.
- [12] OpenAlex: AI Existential Risk Research Publications (openalex.org)
Over 20,900 academic papers on AI existential risk, with publications peaking at 6,689 in 2025.
- [13] StopAI Cofounder's Disappearance—and the Warning Signs of Radicalization (city-journal.org)
Sam Kirchner, StopAI co-founder, disappeared after assaulting a member and making threats against AI company employees. The article argues StopAI provides 'ideological gloss to abstract visions of extreme violence.'
- [14] Can you fight the AI apocalypse without losing your mind? (sfstandard.com)
Investigation into Sam Kirchner's disappearance and the radicalization risks within anti-AI activist communities.
- [15] Attack on Altman home prompts new fears: Is the AI backlash getting dangerous? (sfstandard.com)
The attack ignited new worries among local tech leaders about growing AI fears manifesting into threats or violence. StopAI denied involvement and condemned all violence.
- [16] UnitedHealthcare CEO Brian Thompson killing ushers in new era of executive security fears (fortune.com)
DHS warned that 'individuals mobilized by economic grievances are using the murder of a health insurance CEO as inspiration for threats and attack plotting.'
- [17] How Executive Protection is Changing One Year After UnitedHealthcare CEO Attack (asisonline.org)
Threats against high-profile executives are surging, with tech companies seeing a 73.5% jump in security measures from 2020 to 2024.
- [18] Banks spend more on CEO security after 2024 slaying in NYC (americanbanker.com)
Wells Fargo, Capital One, American Express, and Goldman Sachs all increased CEO security budgets in 2025 following the Thompson assassination.
- [19] How companies are protecting their top executives after the killing of UnitedHealthcare CEO (fortune.com)
Companies prioritizing travel security (54%), physical security (53%), cybersecurity (39%), and residential security (38%).
- [20] Risk Analyst, Corporate Security at OpenAI (technyc.org)
OpenAI job posting for risk analyst to identify physical security threats and support executive protection programs.
- [21] A brief guide to the groups protesting over AI (transformernews.ai)
PauseAI US stated it 'does not work with StopAI and has not since StopAI was founded,' emphasizing commitment to nonviolence.
- [22] Reasoning through arguments against taking AI safety seriously (yoshuabengio.org)
Yoshua Bengio argues that the risks of advanced AI are real and that dismissing them is irresponsible.
- [23] Sam Altman responds to 'incendiary' New Yorker article after attack on his home (techcrunch.com)
Altman published a blog post responding to both the attack and a New Yorker profile, saying 'I have underestimated the power of words and narratives.'
- [24] Sam Altman shares family photo after Molotov attack, warns against rising AI hostility (businesstoday.in)
Altman acknowledged 'people's fear of AI is valid' and called for de-escalation: 'fewer explosions in fewer homes, figuratively and literally.'
- [25] Sam Altman Confirms Molotov Cocktail Incident and Responds to 'Incendiary' New Yorker Investigation (hollywoodreporter.com)
The New Yorker investigation, by Ronan Farrow and Andrew Marantz, drew on interviews with more than 100 people and described Altman as having 'a relentless will to power.'
- [26] Radicalized Anti-AI Activist Should Be A Wake-Up Call For Doomer Rhetoric (aipanic.news)
Analysis arguing that extremist anti-AI rhetoric creates conditions for radicalization and real-world violence.