The Pentagon Branded Anthropic a National Security Threat. A Federal Judge Just Said That's Unconstitutional.
On March 26, 2026, U.S. District Judge Rita F. Lin issued a 43-page preliminary injunction that halted one of the most unusual national security actions in recent memory: the Pentagon's designation of Anthropic, a San Francisco-based artificial intelligence company, as a "supply chain risk" [1]. The label—previously applied only to entities connected to foreign adversaries—was leveled against an American company whose primary offense, according to the court, was refusing to strip safety restrictions from its AI technology [2].
The ruling blocks both the supply chain risk designation and a separate executive order from President Donald Trump directing all federal agencies to stop using Anthropic's Claude AI system [3]. Lin stayed her order for one week to give the government time to appeal [1].
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Lin wrote [1].
How a $200 Million Contract Became a Constitutional Crisis
The dispute traces back to a $200 million Pentagon contract signed in July 2025 that deployed Anthropic's Claude models on the military's classified network—making it the first AI system to achieve that level of integration [4]. Under the original terms, the Pentagon agreed to abide by Anthropic's acceptable use policy, which barred use of Claude for mass surveillance of American citizens or fully autonomous weapons systems [5].
By early 2026, the relationship had soured. The Pentagon, now under Defense Secretary Pete Hegseth, demanded that Anthropic grant the Department of Defense "full, unrestricted access to Anthropic's models for every lawful purpose in defense of the Republic" [6]. Anthropic CEO Dario Amodei rejected the demand. In a public statement, the company said it "cannot in good conscience accede to their request," arguing that frontier AI models are not yet reliable enough for autonomous weapons and that mass domestic surveillance "constitutes a violation of fundamental rights" [7].
On February 24, Hegseth issued an ultimatum: comply by Friday or face consequences [8]. Three days later, after Anthropic declined, the Trump administration ordered all federal agencies and military contractors to cease doing business with the company [9]. On March 6, the Pentagon formally notified Anthropic that it had been designated a "supply chain risk" under 10 U.S.C. § 3252, effective immediately [10].
An Unprecedented Use of Supply Chain Authority
The Federal Acquisition Supply Chain Security Act (FASCSA), enacted in 2018, created legal mechanisms to protect government procurement from foreign adversaries. The statute's legislative history references threats from Kaspersky Lab, Huawei, and ZTE—companies with documented ties to hostile governments [11].
Before Anthropic, the designation had been used publicly exactly once: against Acronis AG, a Swiss cybersecurity firm with reported ties to Russia, in September 2025—and that order was limited to intelligence community contracts [12]. Anthropic became the first American company ever to receive the label [2].
The designation carries severe consequences. Under FAR 52.204-30, defense contractors must certify they do not use products from designated entities, report any existing usage within three business days, and submit mitigation plans within ten [12]. The obligations flow down to all subcontractor tiers, meaning companies like Amazon, Microsoft, and Palantir—major defense contractors that also use Claude—would need to sever ties [13].
The Court's Legal Reasoning
Judge Lin's ruling found the designation "likely both contrary to law and arbitrary and capricious" on multiple grounds [1].
First Amendment retaliation. The central holding was that the Pentagon punished Anthropic for protected speech. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote [14]. The court traced a direct line from Anthropic's public statements about AI safety to the escalating government response, finding the timing and severity of the designation inconsistent with genuine security concerns.
Statutory overreach. The supply chain risk statute targets threats from "an adversary" who may "sabotage, maliciously introduce unwanted function, or subvert" covered systems [11]. Lin found no basis for applying this framework to a domestic company engaged in a contract dispute. "The supply chain risk designation is usually reserved for foreign intelligence agencies and terrorists, not for American companies," she wrote [3].
Procedural deficiencies. The FASCSA framework requires interagency council review, a 30-day notice period, and an opportunity for the targeted company to respond before a designation takes effect [12]. The Pentagon bypassed these procedures entirely, moving from Hegseth's meeting with Amodei to formal designation in approximately three days [11]. The court also noted the government used § 3252—a narrower Pentagon-specific authority—apparently to avoid the procedural requirements of the broader FASCSA process [11].
Logical incoherence. The government simultaneously argued that Claude posed acute national security dangers requiring immediate elimination from military systems while military operations—including those related to the Iran conflict—continued using the technology [15]. Government attorney Eric Hamilton conceded during oral arguments that the designation does not actually prevent contractors from using Anthropic's technology for non-military work, undermining the stated urgency [16].
During the March 24 hearing, Lin pressed Hamilton on the government's reasoning: "That seems a pretty low bar," she said when told the designation was based on concerns about hypothetical future sabotage [17]. She characterized the Pentagon's approach as designating companies as supply chain risks for being "stubborn" and asking "annoying questions" [16].
The Pentagon's National Security Arguments
The government's case rested on several claims. Justice Department attorneys argued that Anthropic's refusal to grant unrestricted access created an "unacceptable risk to national security" because the company could theoretically modify its models in ways that harm military operations [18]. In a March 17 Pentagon declaration, officials also raised concerns about Anthropic's workforce, stating the company "employs a large number of foreign nationals to build and support its LLM products, including many from the People's Republic of China" [19].
Defense Secretary Hegseth framed the dispute in ideological terms, publicly accusing Anthropic of "corporate virtue-signaling" and "arrogance" [11]. President Trump called it a "RADICAL LEFT, WOKE COMPANY" in social media posts [11]. These statements became evidence against the government in court, as Judge Lin cited them as indicators of political retaliation rather than genuine security analysis [14].
Anthropic's attorney Michael Mongan countered that an actual saboteur would not publicly dispute contract terms. He argued that Claude cannot be remotely altered after government deployment and approval, and that the company's transparent advocacy is fundamentally incompatible with malicious intent [16].
Financial and Competitive Fallout
The financial stakes extend well beyond the original $200 million contract. Anthropic insiders have discussed potential losses in the "tens of billions of dollars" when accounting for direct government contracts and indirect effects on the company's contractor relationships [20]. The supply chain risk label, if sustained, would effectively bar Anthropic from the entire defense industrial base.
OpenAI moved quickly to fill the vacuum. Within days of the Anthropic designation, OpenAI announced its own Pentagon deal, publicly posting its agreement with the Department of Defense [21]. The timing raised questions about whether competitive dynamics influenced the government's actions. A Jacobin investigation reported that a senior Pentagon AI official holds a financial stake in one of Anthropic's rivals [22].
Defense One reported that replacing Anthropic's AI tools across Pentagon systems would take months, with some federal officials calling Claude "indispensable" to maintaining what they estimated as a 6-to-12-month U.S. lead over China in military AI applications [15].
Broader Implications: Precedent, Allies, and Adversaries
Legal analysts and national security experts have raised alarms about the ruling's broader significance.
Domestic precedent. The Council on Foreign Relations warned that the designation risks chilling innovation across the technology sector [23]. Defense contractors including Boeing, Lockheed Martin, and Amazon have remained publicly silent, even though, in CFR's words, they understand "the difference between a genuine security finding and an abuse of executive power" [23]. The R Street Institute argued the action could deter technology companies from engaging with military contracts altogether [24].
Foreign adversary designations. A Lawfare analysis noted that prior court challenges to Pentagon designations—including Luokung Technology Corp. v. Department of Defense (2021) and Xiaomi Corp. v. Department of Defense (2021)—resulted in courts granting relief when they found designations to be "arbitrary and capricious" [11]. However, those cases involved Chinese companies challenging their inclusion on military blacklists, a different legal posture from Anthropic's situation. The Anthropic ruling's emphasis on First Amendment protections and the domestic nature of the company may limit its direct applicability to foreign entity designations.
International competition. During the weeks surrounding the Anthropic dispute, five major Chinese AI models launched: Alibaba's Qwen 3.5, Zhipu's GLM-5, MiniMax's M2.5, ByteDance's Doubao 2.0, and Moonshot's Kimi K2.5 [23]. The CFR analysis highlighted the irony that "no Chinese AI firm has been designated a supply chain risk by the U.S. government. Only Anthropic, arguably America's most safety-conscious frontier company, enjoys that distinction" [23].
Amicus support. Multiple former federal judges filed amicus briefs supporting Anthropic, raising concerns about the precedent set by weaponizing procurement authorities against domestic companies [25]. The ACLU and Microsoft also filed supporting briefs [1].
What Happens Next
The preliminary injunction is not a final ruling. Several paths remain open.
Government appeal. Judge Lin granted a one-week stay to allow the Trump administration to appeal to the Ninth Circuit Court of Appeals [1]. The government could seek an emergency stay of the injunction while pursuing that appeal.
Revised designation. The Pentagon could attempt to issue a new supply chain risk determination with proper procedural safeguards—following the FASCSA framework with interagency review, a 30-day notice period, and an opportunity for Anthropic to respond [12]. Whether the underlying rationale would survive scrutiny remains questionable given the court's finding that the action was retaliatory.
Parallel litigation. Anthropic has filed a separate lawsuit in the U.S. Court of Appeals for the D.C. Circuit seeking formal review of the Defense Department's determination under FASCSA's judicial review provisions [26]. That case proceeds on an independent track.
Negotiations. Axios reported that Anthropic insiders believe the two sides were "within inches of a deal" before the relationship collapsed, and are pushing CEO Amodei and Pentagon officials to revive talks [27]. Under Secretary of Defense for Research and Engineering Emil Michael has been involved in back-channel discussions [4].
The Underlying Question
At its core, this case poses a question that extends beyond one company's contract dispute: can the executive branch use national security procurement authorities to punish domestic companies for policy disagreements?
Judge Lin's answer, for now, is no. The supply chain risk framework was built to address threats from foreign adversaries—companies controlled by hostile governments that might embed backdoors or sabotage critical systems. Applying that same framework to an American company that publicly disagrees with how the military should use its technology represents, in the court's view, a fundamental misuse of the authority.
The case also surfaces an unresolved tension in AI governance. The Pentagon argues that military decisions about lawful AI deployment should not be constrained by private companies' ethical preferences [16]. Anthropic contends that AI safety guardrails serve both ethical and operational purposes—that models not yet reliable enough for autonomous weapons pose genuine risks when deployed without restrictions [7].
Both positions have merit, and the resolution will shape the terms on which the U.S. government and its most capable AI companies work together at a moment when that partnership is widely seen as essential to national competitiveness.
Sources (26)
- [1] Judge blocks Pentagon's effort to 'punish' Anthropic by labeling it a supply chain risk (cnn.com)
  A federal judge in California has indefinitely blocked the Pentagon's effort to punish Anthropic by labeling it a supply chain risk, in a stinging 43-page ruling.
- [2] Judge temporarily blocks Trump administration's Anthropic ban (npr.org)
  Judge Rita Lin found the designation likely both contrary to law and arbitrary and capricious, noting it is usually reserved for foreign intelligence agencies and terrorists.
- [3] Pentagon labels AI company Anthropic a supply chain risk 'effective immediately' (npr.org)
  The Defense Department officially informed Anthropic that it has been designated a supply chain risk, effective immediately.
- [4] Behind the Curtain: How Anthropic's Pentagon deal could get revived (axios.com)
  Anthropic insiders believe they were within inches of a deal with the Pentagon and are pushing to quietly revive the talks.
- [5] Anthropic rejects latest Pentagon offer: 'We cannot in good conscience accede to their request' (cnn.com)
  Anthropic rejected the Pentagon's latest offer, saying it cannot in good conscience allow its AI to be used for mass surveillance or autonomous weapons.
- [6] OpenAI sweeps in to snag Pentagon contract after Anthropic labeled 'supply chain risk' (fortune.com)
  The Trump administration ordered military contractors and federal agencies to cease business with Anthropic; Hegseth declared the decision final.
- [7] Pentagon threatens to make Anthropic a pariah if it refuses to drop AI guardrails (cnn.com)
  Hegseth told Amodei that if Anthropic does not allow its AI model to be used for all lawful purposes, the Pentagon would cancel the $200 million contract.
- [8] Trump orders US government to cut ties with Anthropic; Hegseth declares supply chain 'risk' (abcnews.com)
  Trump administration orders all federal agencies and military contractors to cease doing business with Anthropic.
- [9] Anthropic sues Pentagon over rare 'supply chain risk' label (axios.com)
  Anthropic sued the Pentagon, alleging its designation as a supply chain risk violates the company's First Amendment rights and exceeds government authority.
- [10] Pentagon's Anthropic Designation Won't Survive First Contact with Legal System (lawfaremedia.org)
  Legal analysis of the FASCSA framework, procedural requirements, and why the designation faces serious legal vulnerabilities including ultra vires and First Amendment claims.
- [11] Pentagon Designates Anthropic a Supply Chain Risk — What Government Contractors Need to Know (mayerbrown.com)
  Analysis of FAR 52.204-30 compliance obligations, FASCSA framework requirements, and contractor impact of the Anthropic designation.
- [12] Pentagon's supply chain risk label for Anthropic narrower than initially implied, company says (cnn.com)
  The designation requires defense vendors and contractors to certify they don't use Anthropic's models in Pentagon work.
- [13] Anthropic wins preliminary injunction in DOD fight as judge cites 'First Amendment retaliation' (cnbc.com)
  Judge Lin ruled that punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation.
- [14] It would take the Pentagon months to replace Anthropic's AI tools: sources (defenseone.com)
  Federal officials say Claude is indispensable and estimate the US is 6-12 months ahead of China in military AI partly because of Anthropic's technology.
- [15] Judge calls Pentagon's moves against AI firm Anthropic 'troubling' (cbsnews.com)
  Judge Lin questioned whether the Pentagon designated Anthropic for being 'stubborn' and asking 'annoying questions,' and called the actions an attempt to cripple the company.
- [16] Judge presses DOD on why Anthropic was blacklisted: 'That seems a pretty low bar' (cnbc.com)
  Judge Lin pressed the government on its reasoning, finding the justification for the supply chain risk designation insufficient.
- [17] DOD says Anthropic's 'red lines' make it an 'unacceptable risk to national security' (techcrunch.com)
  The Pentagon argued that Anthropic's safety guardrails and refusal to grant unrestricted access constituted an unacceptable national security risk.
- [18] Pentagon: Anthropic's foreign workforce poses security risks (axios.com)
  Pentagon declaration states Anthropic employs a large number of foreign nationals including from the People's Republic of China.
- [19] Anthropic officially told by DOD that it's a supply chain risk even as Claude used in Iran (cnbc.com)
  The Pentagon designated Anthropic a supply chain risk while its Claude AI continued to be used in military operations including the Iran conflict.
- [20] OpenAI announces Pentagon deal after Trump bans Anthropic (npr.org)
  OpenAI moved to fill the vacuum left by Anthropic's blacklisting, announcing its own Pentagon deal within days.
- [21] A Top Pentagon AI Gatekeeper Has a Stake in Anthropic's Rival (jacobin.com)
  Investigation found a senior Pentagon AI official holds a financial stake in one of Anthropic's competitors.
- [22] Anthropic's Standoff With the Pentagon Is a Test of U.S. Credibility (cfr.org)
  CFR analysis warns the designation damages U.S. international credibility and notes no Chinese AI firm has received the supply chain risk label.
- [23] Anthropic, the Pentagon, and the AI Innovation Ecosystem (rstreet.org)
  R Street Institute warns the action could deter technology companies from engaging with military contracts.
- [24] Former judges side with Anthropic and raise concerns about Pentagon's use of supply chain risk label (cnn.com)
  Multiple former federal judges filed amicus briefs supporting Anthropic and raising concerns about weaponizing procurement authorities.
- [25] Anthropic sues the Trump administration after it was designated a supply chain risk (cnn.com)
  Anthropic filed lawsuits in both federal district court in California and the D.C. Circuit Court of Appeals.
- [26] Anthropic's case against the Pentagon could open space for AI regulation (aljazeera.com)
  Analysis of how the legal battle could shape AI governance and the relationship between tech companies and government.