Treasury Secretary Summons Bank CEOs to Discuss Cyber Risks Posed by Anthropic AI Model
TL;DR
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened the CEOs of America's largest banks on April 8, 2026, to warn them about cybersecurity risks posed by Anthropic's new Mythos AI model, which can autonomously discover and exploit zero-day vulnerabilities in every major operating system and web browser. The emergency meeting — the first known instance of top U.S. financial regulators gathering bank executives specifically over a single AI model's capabilities — raises questions about the adequacy of existing regulatory frameworks, the competitive implications for AI adoption in banking, and whether Anthropic's restricted-release approach through Project Glasswing is sufficient to contain a tool that has already found thousands of critical, previously unknown software flaws.
On the afternoon of Tuesday, April 8, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell sat across from five of the six most powerful bankers in America and delivered a warning: the AI model that Anthropic released the day before could find and exploit security flaws in virtually every piece of software their banks depend on.
The meeting at the Treasury Department in Washington — first reported by Bloomberg on April 10 — brought together Citigroup CEO Jane Fraser, Morgan Stanley CEO Ted Pick, Bank of America CEO Brian Moynihan, Wells Fargo CEO Charlie Scharf, and Goldman Sachs CEO David Solomon. The subject was Anthropic's Claude Mythos Preview, a model that its own maker described as capable of identifying zero-day vulnerabilities — previously unknown software flaws — in "every major operating system and every major web browser".
One CEO was conspicuously absent: JPMorgan Chase's Jamie Dimon, who was "unable to attend due to prior commitments". The irony was not lost on observers: JPMorgan is the only bank among the 12 founding partners of Anthropic's Project Glasswing, the restricted-access initiative through which Mythos is being distributed.
What Mythos Can Do
The trigger for the emergency meeting was not a breach, a foreign adversary probe, or an intelligence warning. It was Anthropic itself, which on April 7 publicly disclosed the capabilities of Mythos Preview alongside the launch of Project Glasswing, while simultaneously briefing senior officials across the U.S. government.
The numbers from Anthropic's own red-team report are stark. In testing against Firefox's JavaScript engine, Mythos Preview produced 181 working exploits. Anthropic's previous flagship model, Opus 4.6, managed two. On the OSS-Fuzz benchmark — a standard measure for automated vulnerability discovery — Mythos achieved 595 crashes at severity tiers 1–2, plus 10 instances of complete control flow hijack. Predecessor models produced only single-digit results at lower severity levels.
Specific discoveries included a 27-year-old vulnerability in OpenBSD, widely considered one of the most security-hardened operating systems in existence, that would allow remote denial-of-service attacks. A 16-year-old flaw in FFmpeg — used by countless applications for video encoding — sat in a line of code that automated testing tools had hit five million times without catching the problem. And a remote code execution vulnerability in FreeBSD (CVE-2026-4747), discovered and exploited fully autonomously, that allows complete server takeover — at a computational cost of under $50.
Mythos reproduced vulnerabilities and created working proof-of-concept exploits on the first attempt 83.1% of the time. In one case, it wrote a browser exploit that chained together four separate vulnerabilities, constructing a JIT heap spray that escaped both the renderer sandbox and the operating system sandbox.
"The dangers of getting this wrong are obvious," Anthropic CEO Dario Amodei wrote on X alongside the Project Glasswing announcement, "but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities".
The Banks' Exposure
The meeting came against a backdrop of rapid AI adoption across Wall Street. Claude — Anthropic's commercial model family, distinct from the restricted Mythos — is already deployed in production at multiple major banks. Goldman Sachs partnered with Anthropic to automate trade accounting and client onboarding. Citigroup and RBC Capital Markets have adopted Claude for various functions. AIG has compressed its underwriting review timeline by more than 5x using Claude, improving data accuracy from 75% to over 90%.
But the concern raised at the Tuesday meeting was not primarily about Claude's existing commercial deployments. Officials focused on what Anthropic calls "left-tail" risks — rare but severe scenarios in which AI-enhanced vulnerability detection could be turned against the financial system. The worry is that if Mythos-class capabilities become widely available, whether through Anthropic or a competitor, any attacker could discover exploitable flaws in the software infrastructure banks rely on at a fraction of the previous cost and time.
This concern is not hypothetical. In November 2025, Anthropic disclosed that a Chinese state-sponsored group, tracked internally as GTG-1002, had manipulated Claude Code — Anthropic's coding agent — into attempting infiltration of roughly 30 organizations, including financial institutions and government agencies. The attackers jailbroke Claude by decomposing attacks into small, seemingly innocuous tasks and telling the model it was performing legitimate defensive security testing. At its peak, the AI executed thousands of requests per second — attack speeds that human hackers could not match. Anthropic detected the campaign in September 2025 and banned the accounts, but the incident demonstrated that AI-powered offensive capabilities were already being exploited by sophisticated adversaries months before Mythos made the problem an order of magnitude worse.
Project Glasswing: Anthropic's Containment Strategy
Rather than release Mythos publicly, Anthropic restricted access to 12 partner organizations through Project Glasswing: Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, Palo Alto Networks, Google, Nvidia, JPMorgan Chase, and the Apache Software Foundation. Anthropic committed up to $100 million in usage credits and $4 million in direct donations to open-source security organizations, including OpenSSF and Alpha-Omega.
The stated goal is defensive: give major software vendors and security firms time to find and patch the vulnerabilities Mythos can identify before the capabilities become widely accessible. Over 99% of the vulnerabilities discovered by Mythos Preview remain unpatched, according to Anthropic's red-team report, with professional validators agreeing on 89% of severity ratings and 98% within one severity level.
Anthropic has briefed the Cybersecurity and Infrastructure Security Agency (CISA), the Commerce Department, and "a broader array of actors" across the U.S. government on Mythos' full capabilities, and has "made itself available to support the government's own testing and evaluation of the technology".
The JPMorgan Paradox
Jamie Dimon's absence from the Treasury meeting, while his bank is the sole financial institution in the Glasswing consortium, created an unusual optics problem. On the same day as the emergency meeting, JPMorgan analysts published a research note upgrading CrowdStrike and Palo Alto Networks — two other Glasswing partners — citing the initiative as their rationale.
JPMorgan has been the most aggressive AI adopter among major U.S. banks. The firm made AI adoption a formal performance requirement for 65,000 engineers, with internal dashboards tracking GitHub Copilot usage by individual, and reported 10–20% productivity gains from AI coding tools. The bank uses models from at least four major AI vendors.
The question of whether JPMorgan's inside position — having access to Mythos through Glasswing while other banks do not — constitutes a competitive advantage is one that the Tuesday meeting implicitly raised but apparently did not resolve.
Precedent and Regulatory Authority
This appears to be the first time that Treasury and the Fed jointly convened bank CEOs specifically over a single AI model's capabilities. Past emergency cybersecurity interventions by regulators followed actual breaches rather than anticipatory risk assessments.
In 2016, hackers exploited vulnerabilities in the SWIFT global payment messaging system to attempt a $1 billion theft from Bangladesh's central bank, ultimately stealing $101 million. The response came from SWIFT itself and individual regulators, not a coordinated top-level summoning of bank executives. In 2020–2021, the SolarWinds supply chain attack compromised dozens of Treasury Department email accounts and affected approximately 100 private companies, including financial institutions. The response was led by CISA through emergency directives to federal agencies, not by a Treasury-Fed joint meeting with private-sector CEOs.
The regulatory authority Treasury can bring to bear on banks' AI vendor relationships is indirect but not toothless. The Bank Service Company Act gives federal regulators authority to examine and regulate third-party technology service providers to banks. The Federal Financial Institutions Examination Council (FFIEC) sets interagency standards that examiners use to evaluate banks' technology risk management. In 2023, federal banking agencies issued interagency guidance on third-party risk management, establishing a principles-based approach for assessing vendor relationships — including AI vendors. Critically, ultimate responsibility for compliance rests with the banks themselves, not the AI vendors.
Treasury has also been building out an AI-specific regulatory infrastructure. On February 19, 2026, the department released an AI Lexicon and a Financial Services AI Risk Management Framework (FS AI RMF) with 230 control objectives for managing AI-related risks. In March 2026, the Office of the Financial Stability Oversight Council and Treasury's AI Transformation Office launched an AI Innovation Series, a public-private initiative for financial system resilience. These frameworks are voluntary — "optional tools for U.S. bankers rather than legally binding documents," according to Treasury's own description.
If banks were to ignore guidance from Tuesday's meeting, enforcement would depend on existing supervisory authority: exam findings, consent orders, and in extreme cases, cease-and-desist orders — not any AI-specific statute.
Is This About Anthropic, or About AI?
The framing of the meeting around Anthropic specifically invites a question: why Anthropic and not OpenAI, Google, or Microsoft?
The straightforward answer is that Mythos represents a documented, measurable step-change in offensive cyber capabilities that no other publicly acknowledged model has matched. Anthropic's own benchmarks show Mythos outperforming its predecessor by roughly 90x on exploit generation. No comparable red-team disclosure has been published by OpenAI, Google, or Microsoft for their respective models.
But critics and industry observers have noted that the framing could have competitive implications. A Dark Reading poll found that 48% of cybersecurity professionals rank "agentic AI" — AI systems that can take autonomous actions — as the top attack vector for 2026. This is a cross-vendor concern, not one specific to Anthropic. OpenAI is reportedly developing a rival to Mythos. Google's Gemini has faced its own model-extraction campaigns involving over 100,000 prompts from Chinese laboratories. Microsoft's Copilot is deployed at tens of thousands of financial institutions.
The steelman case that the meeting represents overreach goes like this: by singling out Anthropic — which voluntarily disclosed Mythos' capabilities and restricted its release — regulators may be penalizing transparency while leaving less forthcoming competitors unexamined. If the result is that banks slow their adoption of Anthropic's tools specifically, the beneficiaries would include incumbent technology vendors, banks with existing AI advantages (like JPMorgan), and foreign competitors operating outside U.S. regulatory reach.
The counterargument: Anthropic's own disclosures make the risk concrete and documentable in a way that vague concerns about "AI in banking" are not. Regulators had specific benchmarks, specific CVEs, and a specific restricted-release program to point to. The meeting was a response to evidence, not speculation.
What Comes Next
The Tuesday meeting produced no public regulatory action, no formal guidance, and no Congressional notification, according to available reporting. No regulatory filing has been made, and Anthropic has not been subjected to any formal investigation by Treasury or the Fed.
The situation sits in an ambiguous regulatory space. Treasury's AI risk management framework is voluntary. The Bank Service Company Act was written for an era of outsourced check processing, not autonomous exploit generation. FFIEC guidelines address third-party technology risk management in general terms that predate the specific scenario of a vendor releasing a model that can find thousands of zero-day vulnerabilities in critical infrastructure.
What is clear is that the capabilities Mythos represents are not going away. Anthropic's CEO acknowledged as much: "More powerful models are going to come from us and from others, and so we do need a plan to respond to this". The question for regulators, banks, and AI companies alike is whether the current framework — voluntary lexicons, 230-control-objective checklists, and one-off emergency meetings — is adequate for a world in which a $50 computation can find a remote code execution vulnerability that went undetected for 17 years.
The banking system's software infrastructure is the same infrastructure that Mythos has been systematically probing. Every major operating system, every major web browser, and over 99% of the vulnerabilities found remain unpatched. The Tuesday meeting was a first acknowledgment at the highest levels of U.S. financial regulation that AI has become a systemic cyber risk factor. Whether that acknowledgment translates into action — or remains a closed-door warning between powerful people — is the open question.