The White House Wants Federal Agencies to Use Anthropic's Mythos — the Same AI Model That Has Global Finance in a Panic

On April 15, 2026, Gregory Barbaccia, the federal chief information officer at the White House Office of Management and Budget, emailed officials at Cabinet-level departments telling them that OMB was setting up protections to allow agencies to begin using Anthropic's Claude Mythos AI model [1]. The email went to technology and cybersecurity chiefs at the Department of Defense, Treasury, Commerce, Homeland Security, Justice, and State, among others [1][2].

The move came just five days after Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street CEOs to an urgent meeting about the very same model — because Mythos had been rated "substantially more capable at cyber offense" than any AI system previously tested by the government's AI Security Institute [3][4]. Finance ministers and central bankers across Europe were simultaneously raising alarms about what Mythos could mean for the stability of global financial infrastructure [5].

The federal government, in other words, is rushing to embrace a tool that much of the world's financial establishment considers a serious threat.

What Mythos Is and Why It Matters

Claude Mythos Preview is Anthropic's most capable model to date, a general-purpose language model that the company says represents "a step change" in AI performance [6]. Its defining characteristic is an extraordinary ability in computer security: during testing, Mythos identified and exploited thousands of zero-day vulnerabilities — previously unknown security flaws — in every major operating system and every major web browser [6][7]. Some of these bugs had survived decades of human review and millions of automated security tests; one vulnerability dated back 27 years [7].

The model's existence first became public on March 26, 2026, when a CMS misconfiguration exposed a draft blog post in which Anthropic acknowledged that Mythos poses "unprecedented cybersecurity risks" [6]. The company officially announced the model on April 7 under a controlled-release program called Project Glasswing, through which select organizations — including Amazon, Apple, Cisco, CrowdStrike, Microsoft, and Palo Alto Networks — gained access for defensive cybersecurity purposes [8][9].

[Chart: Research Publications on "AI cybersecurity vulnerability". Source: OpenAlex; data as of Jan 1, 2026.]

Academic research on AI-cybersecurity intersections has surged in recent years, with over 21,000 papers published in 2025 alone. But Mythos represents a practical capability jump that the research literature had not yet anticipated — a single model that can autonomously discover critical vulnerabilities at scale.

The Federal Deployment Plan

Barbaccia's email did not commit to a firm timeline or specify how agencies would use Mythos. It told officials to expect more information "in the coming weeks" [1]. But the signal was clear: the White House intends for federal agencies to gain access to the model, likely for defensive cybersecurity — scanning government networks and software for the same kinds of vulnerabilities Mythos has already found in commercial systems.

This represents a sharp reversal. On February 27, 2026, President Trump posted on Truth Social directing all federal agencies to "immediately cease" use of Anthropic's AI technology [10]. Defense Secretary Pete Hegseth simultaneously designated Anthropic a "supply chain risk to national security" — an authority normally reserved for foreign adversaries like Huawei — over a dispute about AI safeguards [10][11]. Agencies began shedding Anthropic contracts [12]. A federal judge in California subsequently blocked the supply-chain-risk designation, and the litigation remains ongoing [13].

Now, barely seven weeks later, the administration appears to be reversing course. Axios reported that Trump officials were "negotiating access to Anthropic's Mythos despite blacklist" [2], and Gizmodo described the White House as "ready to drop its Anthropic beef and embrace the spooky new model" [14].

The scale of potential access is significant. The federal civilian workforce alone exceeds 2 million employees across more than 400 agencies. The email targeted Cabinet departments, which collectively employ the majority of federal workers. No projected annual contract value has been publicly disclosed for the Mythos-specific arrangement. The prior GSA "OneGov" deal, signed in August 2025, offered Claude AI access across all branches of government for a nominal $1 — a loss-leader structure designed to embed Anthropic's technology in federal workflows [15].

[Chart: Federal AI Use Cases (OMB Inventory). Source: OMB AI Inventory; data as of Apr 17, 2026.]

Federal AI use cases have roughly tripled, from about 700 in OMB's 2023 inventory to 2,133 in 2024 [16]. Adding Mythos's cybersecurity capabilities would expand that footprint further, though the precise number of employees who would gain access remains undefined.

The Financial Sector's Alarm

The finance sector's concerns about Mythos are specific and grounded in the model's demonstrated capabilities. The core fear is not about AI replacing financial analysts — it is about what happens when a model that can crack decades-old software vulnerabilities is released into a world where banking infrastructure runs on legacy systems.

On April 10, Bessent and Powell convened CEOs of major U.S. banks to discuss the cybersecurity implications [3]. The message was part warning, part encouragement: the government urged Wall Street firms to test Mythos themselves so they could identify and patch vulnerabilities in their own systems before adversaries used the model offensively [17].

In Europe, the European Central Bank announced plans to question bankers about Mythos-related risks [5][18]. German banks began their own independent risk assessments [19]. The American Securities Association warned that Mythos posed a risk to the SEC's Consolidated Audit Trail, the market-tracking database that records every equity and options trade in the United States [20].

The concern is systemic: if Mythos (or a derivative of its capabilities) enables attackers to breach critical financial infrastructure simultaneously across multiple institutions, the result could be cascading failures rather than isolated incidents. Banks run on interconnected systems — payment networks, clearinghouses, messaging protocols — where a single compromised node can propagate disruption.

Bessent himself, however, has also struck a more optimistic note. On April 15, he called Mythos "a breakthrough in the AI race against China," framing it as a strategic asset that would keep America ahead of rival nations in both AI capability and cybersecurity defense [21].

How Mythos Compares to Prior Federal AI Deals

The federal government has a mixed history with large-scale AI and cloud technology procurements. The most notorious example is JEDI (Joint Enterprise Defense Infrastructure), a $10 billion, 10-year Department of Defense cloud contract awarded to Microsoft in 2019 [22]. It was challenged by Amazon in court and ultimately canceled by the Pentagon in 2021. Its replacement, JWCC (Joint Warfighting Cloud Capability), was split among Amazon, Microsoft, Google, and Oracle [23].

[Chart: DoD AI Contracts Awarded (July 2025). Source: Nextgov/FCW; data as of Jul 1, 2025.]

In July 2025, the Pentagon's Chief Digital and Artificial Intelligence Office awarded contracts of up to $200 million each to Anthropic, Google, OpenAI, and xAI for advanced AI capabilities [24]. Anthropic was notably the only company whose services were cleared for use on the Defense Department's classified networks [24].

[Chart: Major Federal AI/Cloud Contracts. Source: various; data as of Apr 17, 2026.]

The Mythos deployment differs from these precedents in a critical way: it is not a standard procurement contract being competitively bid. It is an OMB-coordinated effort to provide access to a specific model from a specific company, driven by the model's unique cybersecurity capabilities rather than a general services competition. The legal basis appears to rest on the December 11, 2025 Executive Order, "Ensuring a National Policy Framework for Artificial Intelligence," which directed a whole-of-government effort to establish federal AI policy and gave agencies broad latitude to adopt AI tools [25][26].

Anthropic's projected 2026 revenue is $18 billion [6], dwarfing its government business. But federal adoption at scale would carry strategic significance beyond revenue — it would establish Anthropic as the default AI provider for one of the world's largest employers.

What Would Mythos Be Used For?

The OMB email did not specify use cases beyond cybersecurity [1]. But the model's general-purpose capabilities and the range of agencies contacted suggest potential applications beyond vulnerability scanning.

Federal agencies currently use AI across 2,133 documented use cases, spanning benefits processing, regulatory analysis, fraud detection, tax enforcement, immigration screening, and national security analysis [16]. Mythos, as Anthropic's most capable model, could be applied to any of these if agencies choose to expand beyond defensive cyber.

The legal authority question is significant. The December 2025 Executive Order provides the broadest current framework, directing agencies to adopt AI while preempting state AI laws [25]. A separate July 2025 Executive Order, titled "Preventing Woke AI in the Federal Government," set requirements for AI systems the government purchases [27]. Neither explicitly authorizes or restricts specific use cases like benefits adjudication or regulatory review, leaving agencies with considerable discretion — and Congress with limited direct oversight over those decisions.

The March 2026 White House National Policy Framework for AI included legislative recommendations for Congress but has not yet resulted in enacted legislation [28].

Global Governance Comparisons

The U.S. approach to deploying AI in government administration differs markedly from peer nations.

The European Union's AI Act, which entered into force in August 2024, classifies AI systems by risk tier and imposes strict requirements on "high-risk" applications, a category that explicitly includes AI used in public services like benefits administration and law enforcement [29]. The Act requires conformity assessments, human oversight, transparency to affected individuals, and registration in a public EU database before deployment. The U.S. has no equivalent binding framework.

China takes a centralized approach, mandating pre-approval of algorithms and enforcing alignment with state ideology. The model enables rapid government AI deployment but limits public transparency and independent oversight [30].

The United Kingdom has adopted a "principles-first" regulatory approach, delegating oversight to existing sector regulators — the Financial Conduct Authority for finance, the Information Commissioner's Office for data protection, and so on — coordinated through the Digital Regulation Cooperation Forum [31].

Under EU standards, deploying Mythos for any federal decision-making that affects individuals' rights or access to services would require a formal conformity assessment, explainability mechanisms, and a right to human review. The current U.S. deployment plan, as described in the OMB email, includes none of these requirements explicitly, though the email did reference "protections" that have not been detailed [1].

Who Is Liable When AI Gets It Wrong?

If Mythos produces a consequential error in a federal decision — a wrongly denied benefit, a flawed regulatory ruling, a misidentified security threat — the question of legal liability under current U.S. law is unresolved.

Automated public benefits decisions have already incorrectly rejected eligible applicants under existing, less capable AI systems. These errors have spurred improper fraud allegations and overpayment collection proceedings, costing state governments millions while leaving affected individuals without critical benefits during lengthy appeals [32]. The use of AI in government decision-making is "almost entirely unregulated and largely opaque," according to the Electronic Privacy Information Center [33].

Under existing U.S. tort law, liability when AI fails could fall on the developer (Anthropic), the deployer (the federal agency), or the operator — a question that remains contested in the courts [34]. The Federal Tort Claims Act permits lawsuits against the government for certain negligent acts, but sovereign immunity doctrines and discretionary function exceptions create significant barriers for individuals harmed by AI-assisted government decisions [35].

No current legislation or executive order establishes a specific appeals mechanism for citizens affected by AI-driven federal decisions, nor does any proposed remedy specifically address errors produced by Mythos or models of comparable capability.

The Proponents' Case

The White House and Anthropic's defenders make several arguments for why the benefits of federal Mythos access outweigh the risks.

First, the cybersecurity argument: if Mythos can find vulnerabilities that have persisted for decades in critical software, the federal government needs to find those vulnerabilities before adversaries do. Defensive access is, in this framing, an urgent national security imperative [9][21].

Second, Anthropic's controlled-release approach through Project Glasswing is itself a safeguard. Rather than releasing the model publicly, Anthropic is providing access only to vetted organizations that can use the findings to patch vulnerabilities. The company's partners — including Microsoft, Amazon, and CrowdStrike — are among the firms best positioned to remediate the issues Mythos identifies [9].

Third, Bessent has framed the strategic competition argument explicitly: keeping Mythos capabilities inside the U.S. government and allied private sector, rather than ceding ground to China or other adversaries, is essential to maintaining national security advantages [21].

Fourth, the GAO's existing oversight infrastructure provides some accountability. Since 2018, GAO has issued nearly 50 products on AI and currently has 20 ongoing projects, including technology assessments of generative AI [36]. The agency has identified 10 executive branch oversight groups with roles in federal AI use and 94 AI-related government-wide requirements [37]. Its Accountability Framework for Federal Agencies (GAO-21-519SP) establishes principles for responsible AI use that, proponents argue, provide guardrails even without new legislation [38].

Oversight Gaps and the Road Ahead

The counterargument is that existing oversight mechanisms were not designed for a model of Mythos's capability. GAO's 35 recommendations to 19 agencies regarding AI implementation remain partially unaddressed [37]. The agency itself has warned that federal AI procurement processes lack adequate mechanisms for collecting and sharing lessons learned [39].

No congressional hearing has been scheduled specifically on the Mythos federal deployment. The OMB email indicates that more details will come "in the coming weeks," but there is no indication that an inspector general review or GAO audit has been triggered by the deployment decision [1]. The lack of a formal procurement process — this is an OMB-coordinated access arrangement, not a competitively bid contract — means that standard acquisition oversight mechanisms may not apply.

The timeline from announcement to deployment remains unclear. The April 15 email is the first formal step. If the administration follows the pattern of prior AI rollouts, a phased approach — starting with cybersecurity applications and expanding to other use cases — is likely. But the political dynamics are unusual: an administration that designated Anthropic a national security threat seven weeks ago is now seeking to embed its most powerful model across the federal government.

The resolution of the ongoing federal litigation over Anthropic's supply-chain-risk designation will shape what happens next. If the courts fully block the designation, agencies would face no legal barrier to expanded Anthropic adoption. If the designation is upheld on appeal, the White House would need either congressional authorization or a formal rescission of the designation to proceed — creating exactly the kind of oversight checkpoint that critics argue is needed.

What is clear is that the decisions being made now — in OMB emails, in classified briefings, in emergency meetings between Treasury secretaries and bank CEOs — will set precedents for how the most capable AI systems are integrated into the machinery of government. The question is whether those precedents will include the accountability structures that match the technology's power.

Sources (39)

[1] White House Moves to Give US Agencies Anthropic Mythos Access (bloomberg.com)
    Gregory Barbaccia, federal CIO at OMB, emailed officials at Cabinet departments stating OMB is setting up protections to allow agencies to begin using the Mythos AI tool.

[2] Trump officials negotiating access to Anthropic's Mythos despite blacklist (axios.com)
    White House officials are negotiating access to Anthropic's Mythos AI for federal agencies despite the administration's prior designation of Anthropic as a supply chain threat.

[3] Powell, Bessent discussed Anthropic's Mythos AI cyber threat with major U.S. banks (cnbc.com)
    Treasury Secretary Bessent and Fed Chair Powell summoned Wall Street leaders to discuss concerns that Mythos will usher in an era of heightened cyber risk.

[4] Why Anthropic's new Mythos AI model has Washington and Wall Street worked up (euronews.com)
    Mythos was rated substantially more capable at cyber offense than any model previously tested by the government's AI Security Institute.

[5] Finance ministers and top bankers raise serious concerns about Mythos AI model (ca.finance.yahoo.com)
    Finance ministers and top bankers raise serious concerns about Anthropic's Mythos AI model and its potential cybersecurity implications.

[6] Anthropic 'Mythos' AI model representing 'step change' leaked via CMS misconfiguration (fortune.com)
    A draft blog post revealed Anthropic believes Mythos poses unprecedented cybersecurity risks. The model represents a step change in AI performance.

[7] How Anthropic Discovered Mythos AI Was Too Dangerous For Release (bloomberg.com)
    Mythos identified vulnerabilities dating back 27 years that had survived decades of human review and millions of automated security tests.

[8] Anthropic debuts preview of powerful new AI model Mythos (techcrunch.com)
    Anthropic officially announced Claude Mythos Preview on April 7, 2026, as part of its Project Glasswing controlled release program.

[9] Project Glasswing: Securing critical software for the AI era (anthropic.com)
    Partner organizations include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks.

[10] Anthropic labeled a supply chain risk, banned from federal government contracts (reason.com)
    On February 27, 2026, Trump directed all federal agencies to immediately cease use of Anthropic's AI technology.

[11] Agencies begin to shed Anthropic contracts following Trump's directive (nextgov.com)
    Defense Secretary Hegseth designated Anthropic a supply chain risk to national security, using authority normally reserved for foreign adversaries.

[12] Agencies begin to shed Anthropic contracts following Trump's directive (nextgov.com)
    Federal agencies began shedding Anthropic contracts following Trump's February 2026 directive.

[13] Judge blocks Trump administration from limiting Anthropic's contracts (nbcnews.com)
    A federal judge in California blocked the Trump administration from designating Anthropic as a supply chain risk and cutting off its federal work.

[14] White House Ready to Drop Anthropic Beef (gizmodo.com)
    White House is reportedly ready to drop its Anthropic beef and embrace the spooky new model.

[15] GSA Strikes OneGov Deal with Anthropic to Offer Claude AI for $1 (gsa.gov)
    GSA struck a OneGov deal with Anthropic in August 2025 to offer Claude AI access to all branches of government for $1.

[16] Assessing AI adoption across federal government (brookings.edu)
    OMB AI inventories show federal AI use cases grew from roughly 700 in 2023 to 2,133 in 2024.

[17] US Urges Wall Street Banks to Test Anthropic's Mythos AI Model (bloomberg.com)
    The government urged Wall Street firms to test Mythos to identify and patch vulnerabilities in their own systems before adversaries could use them offensively.

[18] ECB to Quiz Bankers About Risks of Anthropic's New AI Model (insurancejournal.com)
    European Central Bank supervisors are set to quiz bankers about the risks that Mythos might supercharge cyberattacks.

[19] German banks assess risks from Anthropic AI model amid cyber concerns (invezz.com)
    German banks are assessing risks from the Anthropic AI model amid cyber concerns.

[20] Mythos Poses Risk to SEC Market-Tracking Database, Group Says (bloomberg.com)
    The American Securities Association warns Mythos poses a risk to the SEC's Consolidated Audit Trail, which records every equity and options trade.

[21] Bessent Calls Anthropic's Mythos a Breakthrough in AI Race Against China (bloomberg.com)
    Treasury Secretary Bessent called Mythos a breakthrough that will keep America ahead of China in AI capability.

[22] Joint Enterprise Defense Infrastructure (en.wikipedia.org)
    JEDI was a $10 billion, 10-year DoD cloud computing contract awarded to Microsoft in 2019, later canceled after legal challenges.

[23] DoD picks Amazon, Microsoft, Google and Oracle for JWCC (federalnewsnetwork.com)
    The Pentagon selected Amazon, Microsoft, Google and Oracle for the multibillion-dollar JWCC project to replace JEDI.

[24] Pentagon awards multiple companies $200M contracts for AI tools (nextgov.com)
    The DoD Chief Digital and AI Office awarded $200 million contracts each to Anthropic, Google, OpenAI, and xAI for advanced AI capabilities.

[25] Unpacking the December 11, 2025 Executive Order on AI (sidley.com)
    President Trump issued an Executive Order directing a whole-of-government effort to establish a federal AI policy framework.

[26] Ensuring a National Policy Framework for Artificial Intelligence (whitehouse.gov)
    Executive Order directs federal AI policy framework and preemption of state AI laws, with AI Litigation Task Force and conditional federal funding provisions.

[27] Trump's AI Executive Order 14365 - Preventing Woke AI in the Federal Government (lawandtheworkplace.com)
    July 2025 Executive Order sets requirements for AI systems that the federal government purchases.

[28] White House National Policy Framework for AI Legislative Recommendations (ropesgray.com)
    The March 2026 White House National Policy Framework for AI outlines legislative recommendations for Congress.

[29] AI Regulations in 2025: US, EU, UK, Japan, China & More (anecdotes.ai)
    The EU AI Act employs a four-tier risk structure with strict requirements for high-risk applications in public services.

[30] AI policy frameworks in China, EU and US (sciencedirect.com)
    China mandates pre-approval of algorithms with centralized state-led AI governance model.

[31] AI Governance Global Cheat Sheet (gradientflow.com)
    UK takes a principles-first approach with sector regulators handling AI oversight, coordinated through the Digital Regulation Cooperation Forum.

[32] Government Use of AI (epic.org)
    Automated public benefits decisions have incorrectly rejected eligible applicants. Government AI use is almost entirely unregulated and largely opaque.

[33] Government Use of AI - EPIC (epic.org)
    Individuals who have had public benefits claims wrongly rejected face a challenging battle to correct AI errors.

[34] Legal Liability for AI Decisions: Who Is Responsible When AI Fails? (hfw.com)
    Under existing tort law, liability when AI fails could fall on the developer, deployer, or operator — the question remains contested.

[35] Liability for Harms from AI Systems (rand.org)
    RAND analysis of liability frameworks for AI-caused harms, including sovereign immunity and Federal Tort Claims Act limitations.

[36] Artificial Intelligence: Federal Efforts Guided by Requirements and Advisory Groups (gao.gov)
    GAO has issued nearly 50 products on AI with 20 ongoing projects, identified 10 oversight groups and 94 AI-related requirements.

[37] AI: Agencies Are Implementing Management and Personnel Requirements (gao.gov)
    GAO reviewed AI implementation at major agencies and made 35 recommendations to 19 agencies to fully implement federal AI requirements.

[38] An Accountability Framework for Federal Agencies and Other Entities (gao.gov)
    GAO Accountability Framework establishes principles for responsible AI use in federal government.

[39] Artificial Intelligence Acquisitions: Agencies Should Collect and Apply Lessons Learned (gao.gov)
    GAO warned that agencies may miss opportunities to identify best practices, urging collection and sharing of AI acquisition insights.