The Mythos Paradox: How a Cybersecurity Scare Pushed the White House to Send Wall Street Straight to Anthropic

On April 7, 2026, the CEOs of five of the largest banks in the United States received short-notice invitations to the Treasury Department's headquarters in Washington. The host: Treasury Secretary Scott Bessent. The co-convener: Federal Reserve Chair Jerome Powell. The agenda: a single AI model built by a company the administration was simultaneously trying to ban from government work [1][2].

The model is Claude Mythos Preview, built by Anthropic. According to the company, Mythos can autonomously identify and exploit zero-day vulnerabilities — previously unknown software flaws — in every major operating system and every major web browser [3]. Anthropic says it discovered thousands of such vulnerabilities in a matter of weeks, including a 27-year-old flaw in OpenBSD and a 16-year-old bug in FFmpeg [4].

The message from Bessent and Powell was direct: banks should run Mythos against their own systems to find and patch weaknesses before hostile actors gain access to equivalent capabilities [1][2].

Who Was in the Room

Bank of America CEO Brian Moynihan, Citigroup CEO Jane Fraser, Goldman Sachs CEO David Solomon, Morgan Stanley CEO Ted Pick, and Wells Fargo CEO Charlie Scharf attended the April 7 meeting [2]. JPMorgan Chase, the largest U.S. bank by assets, was already a founding partner of Anthropic's Project Glasswing initiative and had early access to the model [3][4].

Bloomberg reported on April 10 that Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley had begun testing Mythos internally in the days following the meeting [1]. No bank has publicly confirmed or denied participation on the record. A Treasury spokesperson confirmed the meeting occurred and said the administration "plans additional meetings with regulators and institutions on an ongoing basis addressing AI and related issues" [5].

The White House framed the effort as part of broader national security coordination. A statement read: "President Trump and the Administration are continuing to engage on AI security in a thoughtful manner. The White House has been leading an ongoing core interagency taskforce, which includes the Treasury, that has been proactively engaging across the government and industry" [5].

What Mythos Actually Does

Mythos is not a general-purpose banking AI. It is a cybersecurity tool — specifically, an offensive security model repurposed for defense. On CyberGym benchmarks, Mythos Preview scores 83.1% on vulnerability detection, compared to 66.6% for Claude Opus 4.6, Anthropic's previous flagship [4]. The model operates autonomously to discover zero-day flaws, develop working exploits, and generate remediation guidance [4].

Anthropic briefed U.S. government officials about Mythos's capabilities at least a month before its public announcement on April 7 [6]. The company chose not to release the model publicly, saying it was too dangerous. Instead, it created Project Glasswing: a controlled-access program pairing Mythos with 12 founding organizations — among them Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks — plus over 40 additional organizations that maintain critical infrastructure [4].

Anthropic committed $100 million in usage credits for Glasswing participants, along with $2.5 million to the Linux Foundation's Alpha-Omega and OpenSSF programs and $1.5 million to the Apache Software Foundation [4]. Post-preview pricing is set at $25 per million input tokens and $125 per million output tokens [4].
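At those published rates, the cost of a given run is simple arithmetic on token volumes. A minimal sketch — the token counts below are hypothetical illustrations, not Anthropic figures:

```python
# Cost estimate at the published Mythos post-preview rates:
# $25 per million input tokens, $125 per million output tokens.
# Token volumes used in the example are hypothetical.

INPUT_RATE = 25.0    # USD per million input tokens
OUTPUT_RATE = 125.0  # USD per million output tokens

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one model run at the published rates."""
    return (input_tokens / 1e6) * INPUT_RATE + (output_tokens / 1e6) * OUTPUT_RATE

# Example: a scan ingesting 50M tokens of source code and emitting
# 2M tokens of findings and remediation guidance.
print(f"${run_cost(50_000_000, 2_000_000):,.2f}")  # $1,500.00
```

The output-token rate dominates for remediation-heavy workloads, so a bank estimating Glasswing usage against the $100 million credit pool would budget primarily around expected output volume.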

[Chart: AI Adoption in U.S. Banking (% of institutions)]

The Contradiction: Banning Anthropic While Boosting It

The most striking aspect of this episode is the contradiction within the administration's own posture toward Anthropic.

In February 2026, President Trump directed every federal agency to "immediately cease" all use of Anthropic's technology [7]. Defense Secretary Pete Hegseth designated the company a "supply-chain risk to national security" — a label historically reserved for foreign adversaries like Huawei [8][9]. The dispute originated in a $200 million Pentagon contract from 2025. During deployment negotiations, the Department of Defense sought unfettered access to Anthropic's models for all lawful purposes; Anthropic wanted assurances its technology would not be used for fully autonomous weapons or domestic mass surveillance [8][9].

On March 26, U.S. District Judge Rita Lin in San Francisco blocked the Pentagon's supply-chain risk designation, writing that "punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation" [10]. But on April 8 — one day after the Bessent-Powell banking meeting — a federal appeals court in Washington, D.C. reversed that order, denying Anthropic's request to temporarily halt the blacklisting while the case proceeds [11]. Oral arguments are scheduled for May 19 [11].

Meanwhile, OpenAI secured a deployment agreement on Defense Department classified networks, filling the vacuum Anthropic's exclusion created [7].

The result is a federal government that, in the same week, told Wall Street banks to adopt Anthropic's model for cyber defense while maintaining a legal effort to ban the company from government work entirely. No official has publicly addressed this tension.

Fair Competition and the Favoritism Question

The government directing banks to test a specific company's product raises fair-competition concerns. Anthropic's competitors — OpenAI, Google, and Microsoft — all offer cybersecurity capabilities. Microsoft, through its Security Copilot product line, has the deepest existing penetration in enterprise security. Google's Mandiant division is among the most established threat intelligence providers.

Yet the Bessent-Powell meeting specifically cited Anthropic's Mythos, not a class of AI tools or a competitive evaluation framework. For banks weighing compliance with regulatory expectations, a direct request from the Treasury Secretary and Fed Chair carries implicit weight — even if characterized as voluntary.

The banking sector has long understood that regulators' informal guidance often functions as a soft mandate. Institutions seeking favorable treatment in examinations, merger approvals, or enforcement actions have strong incentives to demonstrate responsiveness to regulatory preferences. Whether the Mythos testing push constitutes genuine optionality or carries implicit coercive pressure depends on how regulators follow up — a question no one has yet answered publicly.

[Chart: AI Industry Federal Lobbying Spending ($M). Source: OpenSecrets; data as of Jan 1, 2026]

Anthropic itself has spent $4.94 million on federal lobbying since entering the space in 2023, with $1.01 million in the third quarter of 2025 alone [12]. The company lobbied on AI export controls, NIST safety standards, and data center permitting [12]. Its super PAC, Public First, received a $20 million donation from Anthropic, with co-founder Jack Clark signaling the contribution would be ongoing [13].

The Regulatory Framework: SR 11-7 and Its Limits

Any AI model deployed in banking falls under the Federal Reserve and OCC's Supervisory Guidance on Model Risk Management, known as SR 11-7, issued in 2011 and adopted by the FDIC in 2017 [14]. The guidance requires banks to ensure all models are "well understood, tested, and monitored throughout their lifecycle" and establishes that model risk increases with "complexity, uncertainty, breadth of use, and potential impact" [14].

SR 11-7 was written for traditional statistical models — logistic regressions, credit scoring algorithms, stress-testing frameworks. Large language models like Mythos, which operate as general-purpose reasoning systems rather than narrow statistical tools, strain these categories. A model that autonomously discovers zero-day exploits and generates working attack code occupies a different risk profile than a credit scorecard.

For cybersecurity applications specifically, the more relevant framework is the NIST Cybersecurity Framework, which in 2026 harmonized more than 2,500 regulatory expectations from the Fed, OCC, and FDIC into a unified set of diagnostic statements [15]. But neither SR 11-7 nor the NIST framework addresses the specific scenario of a regulator encouraging banks to adopt a particular vendor's tool.

If Mythos were deployed beyond cybersecurity — for credit underwriting, fraud detection, or compliance monitoring — additional regulatory layers would apply. The Equal Credit Opportunity Act and Fair Housing Act prohibit lending practices that produce discriminatory outcomes, even if unintentional [16]. The Consumer Financial Protection Bureau has emphasized that institutions must provide specific reasons for adverse credit decisions made using AI [16]. Under existing law, liability for discriminatory outcomes rests with the financial institution, not the algorithm or its vendor [16].

The Cybersecurity Case: Why Proponents Say This Is Necessary

Supporters of the government's push argue the urgency is real. Logan Graham of Anthropic warned that comparable vulnerability-discovery capabilities could be "broadly distributed or made broadly available" within six to twelve months, including from Chinese competitors [6]. If that timeline is accurate, banks that have not stress-tested their systems against Mythos-class threats face significant exposure.

Casey Ellis, founder of Bugcrowd, summarized the asymmetry: "A defender needs to be right all the time, whereas an attacker only needs to be right once" [6]. Cynthia Kaiser, a former FBI cyber official, warned that AI would enable less-skilled attackers to target hospitals and critical infrastructure with ransomware [6]. Jason Healey of Columbia University suggested AI-driven cyberattacks on U.S. infrastructure had become more feasible for nations like Iran [6].

The banking sector is a high-value target. Financial institutions hold trillions in assets and process millions of transactions daily. A successful zero-day exploit against a major bank's core systems could trigger cascading effects across the financial system. From this perspective, the Bessent-Powell meeting was not commercial favoritism but an emergency briefing — analogous to government notifications about specific threat intelligence.

The Bank of England scheduled its own discussion with UK banks about Mythos within days of the U.S. meeting, and Canadian regulators initiated similar consultations, suggesting the threat assessment is shared internationally [17].


The Skeptics: Overhyped Threat or Strategic Marketing?

Not everyone accepts Anthropic's framing at face value. Spencer Whitman of Gray Swan Security noted that smaller, openly available AI models can already replicate several of the vulnerabilities Anthropic highlighted, though they require researchers to pre-identify vulnerable code segments rather than operating fully autonomously [6]. The distinction between "can find vulnerabilities with human guidance" and "finds them autonomously" is significant — but Anthropic's claims about full autonomy remain difficult to independently verify because the model is not publicly available.

Katie Moussouris, CEO of Luta Security, raised a different concern: the prospect of major cloud outages with downstream industry effects, drawing parallels to the CrowdStrike incident of 2024 [6].

Several security researchers have pointed out the structural incentive at play. Anthropic built a product it says is too dangerous to release, then created a controlled-access program that positions the company as the indispensable gatekeeper. The more alarming the threat narrative, the more valuable the exclusive access becomes. This does not mean the threat is fabricated — but it does mean the company has a financial interest in the threat being perceived as maximally severe.

Liability in Uncharted Territory

If banks adopt Mythos at government encouragement and the model produces errors — missing a real vulnerability, generating a false positive that disrupts operations, or inadvertently creating an exploit that leaks — the liability question is unresolved.

Under current law, the bank bears primary responsibility for its own cybersecurity posture. Vendor agreements typically include limitation-of-liability clauses that cap Anthropic's exposure. The government officials who encouraged adoption have no formal liability for informal recommendations.

This creates a gap: the entity with the most influence over the decision (the government) bears the least legal exposure for its consequences. If a Mythos-guided security audit misses a critical vulnerability that is later exploited, the bank faces regulatory scrutiny for its own risk management failures — regardless of whether it was following the Treasury Secretary's informal guidance.

The question extends to Anthropic's Glasswing partners. JPMorgan Chase, as a founding member, has publicly stated its involvement aims at "promoting the cybersecurity and resiliency of the financial system" [4]. But that public commitment also creates an implicit endorsement — and potential reputational risk if Mythos underperforms its billing.

The Broader Political Context

This episode sits within a larger pattern of the Trump administration's approach to AI governance. On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, recommending that Congress avoid creating a federal AI rulemaking body and instead support sector-specific AI applications through existing regulators [18]. An executive order directed Attorney General Pam Bondi to establish an AI Litigation Task Force to challenge state AI regulations [19].

The framework's emphasis on industry-led adoption over prescriptive regulation aligns with the Mythos push: rather than issuing formal guidance on AI in cybersecurity, the administration convened a meeting and made a suggestion. This approach offers flexibility but lacks transparency, accountability, and the procedural safeguards of formal rulemaking.

Meanwhile, AI companies' political spending continues to escalate. Super PAC networks backed by OpenAI insiders and Anthropic are spending over $125 million in the 2026 midterms [13]. The line between technology policy and commercial competition has become difficult to locate.


What Happens Next

The immediate question is whether the Mythos testing push remains limited to cybersecurity or expands to other banking functions. If banks find the model effective at identifying vulnerabilities, pressure will build to apply similar AI capabilities to fraud detection, compliance monitoring, and eventually credit decisions — each of which carries distinct regulatory and ethical risks.

The appeals court hearing on Anthropic's Pentagon blacklisting, scheduled for May 19, will determine whether the company can simultaneously serve as Wall Street's cybersecurity partner and remain banned from government work [11]. The outcome will signal whether the administration's contradictory posture toward Anthropic reflects genuine policy complexity or factional disagreement within the government.

For the banking industry, the episode highlights a structural tension: regulators are encouraging adoption of cutting-edge AI tools while the regulatory frameworks governing those tools remain rooted in pre-AI assumptions. SR 11-7's model risk management requirements, however robust for traditional models, were not designed for autonomous systems that can independently discover and exploit software vulnerabilities [14].

The gap between AI capability and regulatory capacity is not new. But the spectacle of a Treasury Secretary and Fed Chair personally directing bank CEOs to test a specific company's product — while that company fights the same administration in federal court — makes the gap harder to ignore.

Sources (19)

[1] US Urges Wall Street Banks to Test Anthropic's Mythos AI Model — bloomberg.com
    Wall Street banks are starting to test Anthropic PBC's Mythos model internally as Trump administration officials encourage them to use it to detect vulnerabilities.

[2] Powell, Bessent discussed Anthropic's Mythos AI cyber threat with major U.S. banks — cnbc.com
    Treasury Secretary Bessent and Federal Reserve Chair Powell assembled a group of banking executives on April 7 at Treasury's headquarters in Washington on short notice.

[3] Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative — techcrunch.com
    Anthropic rolled out Claude Mythos Preview in limited form, saying the model can identify and exploit zero-day vulnerabilities in every major operating system and browser.

[4] Project Glasswing: Securing critical software for the AI era — anthropic.com
    Project Glasswing pairs Claude Mythos Preview with 12 founding organizations. Anthropic commits $100M in usage credits and $4M to open-source security organizations.

[5] White House Tells Banks to Use Anthropic to Spot Vulnerabilities — pymnts.com
    A Treasury spokesperson confirmed the Trump administration plans additional meetings with regulators and institutions on an ongoing basis addressing AI and related issues.

[6] The 'Vulnpocalypse': Why experts fear AI could tip the scales toward hackers — nbcnews.com
    Logan Graham of Anthropic warns comparable capabilities could be broadly available within 6-12 months. Experts debate the asymmetry between cyber defense and offense.

[7] OpenAI announces Pentagon deal after Trump bans Anthropic — npr.org
    President Trump directed every federal agency to immediately cease all use of Anthropic's technology. OpenAI secured a deployment agreement on DoD classified networks.

[8] Anthropic sues Trump administration over Pentagon blacklist — cnbc.com
    The DOD designated Anthropic a supply-chain risk to national security — a label historically reserved for foreign adversaries — over a $200M contract dispute.

[9] Anthropic sues the Trump administration over 'supply chain risk' label — npr.org
    Anthropic is the first American company to publicly be named a supply chain risk, as the designation has historically been reserved for foreign adversaries.

[10] Judge blocks Pentagon's effort to 'punish' Anthropic by labeling it a supply chain risk — cnn.com
    Judge Rita Lin: 'Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation.'

[11] Anthropic loses appeals court bid to temporarily block Pentagon blacklisting — cnbc.com
    A federal appeals court denied Anthropic's request to temporarily halt the DOD's blacklisting, but set oral arguments for May 19 citing likely irreparable harm.

[12] Anthropic PBC Lobbying Profile — opensecrets.org
    Anthropic PBC has filed 36 total lobbying disclosures since 2023, with cumulative spending of $4.94 million. Q3 2025 spending was $1.01 million.

[13] OpenAI vs Anthropic - The $125M Battle for Congress — awesomeagents.ai
    Super PAC networks backed by OpenAI insiders and Anthropic are pouring over $125 million into the 2026 midterms. Anthropic confirmed a $20M donation to Public First.

[14] SR 11-7 on guidance on Model Risk Management — federalreserve.gov
    The Federal Reserve's 2011 supervisory guidance requires banks to ensure all models are well understood, tested, and monitored throughout their lifecycle.

[15] Banks get new federal guidance on AI cyber risks — americanbanker.com
    NIST framework harmonizes more than 2,500 regulatory expectations from the Federal Reserve, OCC and FDIC into a concise set of diagnostic statements.

[16] AI in Lending: AI Credit Regulations Affecting Lending — hesfintech.com
    ECOA and FHA prohibit discriminatory lending practices even if unintentional. Liability for AI-driven discriminatory outcomes rests with the financial institution.

[17] Bank of England Set to Discuss Anthropic's Mythos With Banks — bloomberg.com
    Authorities in the US, Canada and the UK are moving quickly to assess risks and shore up bank defenses against potential cyber threats from advanced AI.

[18] Trump Administration Releases National AI Policy Framework — mofo.com
    The administration released its National Policy Framework for AI on March 20, 2026, recommending sector-specific regulation through existing bodies rather than a new federal rulemaking agency.

[19] Trump issues executive orders to challenge state AI laws — bankingjournal.aba.com
    The order directs AG Pam Bondi to establish an AI Litigation Task Force to challenge state laws regulating AI and potentially withhold broadband funding from states with 'onerous' AI laws.