The Pentagon's $13.4 Billion Bet: Inside the AI-First Military Doctrine and the Companies Building It

On January 9, 2026, Secretary of Defense Pete Hegseth signed a memorandum directing the Department of War to become an "AI-first" warfighting institution — not as a future aspiration, but as an immediate operational mandate [1]. Four months later, on May 1, the Pentagon announced agreements with eight major technology companies to deploy their AI models on classified military networks, including environments rated at Impact Level 6 and Impact Level 7 — the Pentagon's most sensitive classifications [2][3].

The companies — Amazon Web Services, Google, Microsoft, NVIDIA, OpenAI, SpaceX, Reflection, and Oracle — will provide large language models and other AI tools "for lawful operational use" across defense networks [3]. One company, Anthropic, was conspicuously absent. Its exclusion, and the legal war that followed, has become the defining fault line in the debate over what guardrails should govern AI in warfare.

The Money

The FY2026 defense budget request, totaling $1.01 trillion, includes $13.4 billion specifically earmarked for AI and autonomy — a 131% increase over FY2025's $5.8 billion and more than twelve times the $1.1 billion allocated in FY2022 [4][5].

[Chart: Pentagon AI & Autonomy Budget, FY2022–FY2026. Source: DoD budget requests / Breaking Defense. Data as of April 21, 2026.]

The $13.4 billion breaks down into several categories: $9.4 billion for unmanned and remotely operated aerial vehicles, $1.7 billion for maritime autonomous systems, $734 million for underwater capabilities, $210 million for autonomous ground vehicles, $1.2 billion for software and cross-domain integration, and $200 million specifically for AI and automation technology [4]. The Defense Innovation Unit's budget was raised to $2 billion, up from $1.3 billion the prior year [5].
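
As a quick sanity check, the reported line items sum to roughly the headline figure, and the year-over-year comparisons hold up. A minimal sketch, using only the numbers cited in this section (the small residual against $13.4 billion reflects rounding in the headline number):

```python
# FY2026 AI & autonomy line items, in $ billions (figures from [4]).
line_items = {
    "unmanned/remotely operated aerial vehicles": 9.4,
    "maritime autonomous systems": 1.7,
    "underwater capabilities": 0.734,
    "autonomous ground vehicles": 0.210,
    "software and cross-domain integration": 1.2,
    "AI and automation technology": 0.200,
}

total = sum(line_items.values())
print(f"Sum of line items:  ${total:.3f}B")          # ~13.444, vs. the $13.4B headline
print(f"Growth over FY2025: {13.4 / 5.8 - 1:.0%}")   # ~131% increase over $5.8B
print(f"Multiple of FY2022: {13.4 / 1.1:.1f}x")      # ~12.2x the $1.1B of FY2022
```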

Individual contract values for the newly signed agreements have not been fully disclosed, though Google's deal has been reported at $200 million [6]. The agreements grant these companies access to deploy their models on air-gapped classified networks, where the Pentagon's 3 million-plus workforce can build applications using them.

What's Already Running

The Pentagon is not starting from scratch. GenAI.mil, a secure platform launched in December 2025 with Google Cloud's Gemini as its first frontier model, already has more than 1.3 million active users out of 3 million with access [7][8]. Users have built over 100,000 AI agents on the platform, authorized to operate at Impact Level 5 — the highest level for unclassified but sensitive data [7].

Current applications are weighted toward administrative and logistical tasks. A Navy Recruiting Command user cut database-building time from years to three months. A Defense Logistics Agency lab director reduced the time to draft statements of work "from weeks to hours," helping secure $1 million in last-minute funding [8]. Pentagon officials say the platform supports intelligence analysis — processing satellite imagery and sensor data at scale — and warfighting applications including logistics planning and modeling [8].

The new classified-network agreements go further, pushing AI into Impact Level 6 and 7 environments to "streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments" [3]. Hegseth's January memo specifies that AI will be part of "AI-enabled battle management and decision support, from campaign planning to kill chain execution" [1].

On April 29, Hegseth told the House Armed Services Committee that the Pentagon will "shortly announce a sub-unified command for autonomous warfare" [9]. U.S. Southern Command has already begun establishing the Southcom Autonomous Warfare Command (SAWC), which will field autonomous, semi-autonomous, and unmanned platforms [10].

The Anthropic Standoff

The most consequential absence from the May 1 announcement is Anthropic, maker of the Claude AI model. In July 2025, Anthropic became the first frontier AI company with a model approved for classified Pentagon networks [11]. The contract included adherence to Anthropic's acceptable use policy, which prohibited two specific applications: use in fully autonomous weapons systems capable of selecting and engaging targets without human intervention, and mass domestic surveillance of Americans [11][12].

The Pentagon sought to renegotiate, demanding that Anthropic allow the military to use Claude "for all lawful purposes" without limitation [11]. Weeks of negotiations ended with the Pentagon setting a deadline of 5:01 p.m. on February 27, 2026, for Anthropic to accept the government's terms. Anthropic did not [11].

President Trump then ordered agencies to cease using Anthropic, and the Pentagon designated the company a "supply chain risk" — a classification normally reserved for foreign adversaries like Huawei and Kaspersky Lab [12][13]. Anthropic became the first American company to receive that designation [12].

A federal judge in California temporarily blocked the designation, ruling that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government" [14]. An appeals court later reversed that injunction, and the case remains in litigation [15]. A Congressional Research Service report has flagged the dispute as raising issues for Congress regarding the government's authority to punish companies for policy disagreements [16].

The Google Employee Revolt

Before signing its classified AI agreement, Google faced internal opposition that echoed one of the most consequential employee revolts in Silicon Valley history. More than 580 Google employees — among them more than 20 directors and vice presidents, as well as senior DeepMind researchers — signed a letter to CEO Sundar Pichai urging him to refuse the classified Pentagon deal [6][17].

The letter argued that on air-gapped classified networks, Google cannot monitor how its AI is used, making "'trust us' the only guardrail against autonomous weapons and mass surveillance" [17]. The employees cited concerns that Google's AI systems could be used in "inhumane or extremely harmful ways" [17].

Google signed the deal anyway, reportedly with safety filters and a provision barring use "for the development of lethal autonomous weapons systems or domestic surveillance without human oversight and control" [6]. Whether those provisions are enforceable on classified networks where the company has no monitoring access is an open question.

The protest drew direct comparisons to 2018, when roughly 4,000 Google employees signed a petition opposing Project Maven, leading Google not to renew that contract [6]. This time, the company reached the opposite conclusion.

The Legal and Oversight Framework

The primary U.S. policy governing autonomous weapons is DoD Directive 3000.09, last updated in January 2023 [18]. It requires that autonomous and semi-autonomous weapon systems be designed to "allow commanders and operators to exercise appropriate levels of human judgment over the use of force" [18]. The word "appropriate" is deliberately flexible — the directive acknowledges that the required level of human judgment "can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system" [18].

Section 1061 of the FY2026 National Defense Authorization Act added a requirement for congressional notification whenever a waiver is issued under the directive [19]. Human Rights Watch and the Harvard Law School International Human Rights Clinic have noted that the directive's flexible language falls short of requiring "meaningful human control" — a standard that many international law scholars and civil society groups argue is required under the Geneva Conventions' principles of distinction, proportionality, and precaution [20][18].

No treaty of international humanitarian law explicitly bans autonomous weapons. But the 1949 Geneva Conventions and their 1977 Additional Protocol I impose obligations that many legal scholars argue implicitly require meaningful human control, particularly over target selection and engagement [21]. The UN Secretary-General has called on states to conclude, by 2026, a treaty prohibiting lethal autonomous weapon systems that function without human control, but major military powers — including the United States, China, and Russia — have opposed binding regulation [21].

Within classified environments, oversight becomes structurally different. Companies deploying AI models on air-gapped IL6 and IL7 networks cannot audit how those models are used. External safety testing, red-teaming, and the transparency mechanisms that characterize commercial AI deployment do not transfer to classified settings in the same form [17].

The Case For — and Against — Civilian AI in the Military

Proponents of the new agreements argue that bringing commercial AI companies into defense work improves safety rather than undermining it. The reasoning: major tech companies have invested billions in AI safety research, employ dedicated red teams, and operate under public scrutiny that traditional defense contractors — Lockheed Martin, Raytheon, Northrop Grumman — historically have not faced [22].

Google's agreement reportedly includes explicit prohibitions on lethal autonomous weapons and unsupervised surveillance [6]. The argument is that Silicon Valley's safety culture, imperfect as it is, introduces checks that would not exist if the Pentagon relied solely on defense-industry incumbents or built systems in-house.

Critics counter that these safeguards are only as strong as the enforcement mechanism — and on classified networks, there is none that the companies control. Once a model is deployed behind an air gap, the Pentagon determines its applications [17]. The Anthropic case demonstrates what happens when a company tries to maintain restrictions: it gets replaced [11][12].

Sarah Holewinski, who ran the Pentagon's Civilian Protection Center of Excellence until it was disbanded, has warned that the current trajectory prioritizes speed and lethality over civilian protection [9]. Hegseth has publicly expressed disdain for "stupid rules of engagement" designed to minimize civilian harm [9].

Clearances, Workers, and Dissent

The classified-network agreements will require significant numbers of cleared personnel at each participating company. Exact figures have not been disclosed, but work at IL6 and IL7 requires Secret and Top Secret clearances respectively [3].

For employees who develop concerns about specific classified applications, the legal landscape is constrained. Federal law protects contractor employees from retaliation for disclosing waste, fraud, or abuse — but only if the disclosure follows rules governing classified information [23]. A disclosure that includes classified material outside proper channels is not protected [23]. Employees with security clearances who blow the whistle face heightened risks: loss of clearance often means loss of employment [23].

The Intelligence Community Whistleblower Protection Act provides a channel for reporting concerns to inspectors general and congressional intelligence committees, but employees at commercial AI companies working under defense contracts exist in an ambiguous zone — they are not intelligence community employees, and the protections available to them are narrower than those for government personnel [23].

The Adversary Landscape

The Pentagon's AI-first doctrine is explicitly framed as a response to competition from China and Russia. Hegseth's January memo stated that "Military AI is going to be a race for the foreseeable future" and that "speed wins" [1].

China's People's Liberation Army has adopted a doctrine of "multi-domain precision warfare" (MDPW), which integrates AI into command-and-control, intelligence surveillance and reconnaissance, and multi-theater strike coordination [24]. A December 2025 Pentagon report noted that China's commercial and academic AI sectors had made significant progress on large language models, narrowing the performance gap with leading U.S. models [25]. Military analysts have warned that China may be ahead in certain AI weapon developments [24].

Russia has deployed AI primarily for ISR processing and autonomous drone operations, using AI-guided FPV drones capable of independent target selection in Ukraine [24]. Russian military AI spending is lower than that of the U.S. or China, though Moscow approved a 30% increase in military outlays for 2025 [24]. U.S., Chinese, and Russian forces are considered the "tier one" military AI powers [24].

Lessons from Previous Pentagon AI Programs

The current push follows a mixed track record. Project Maven, launched in April 2017 with $70 million in initial funding, was designed to apply computer vision to surveillance footage [26]. Its budget grew to $221 million by FY2020 and $250 million in FY2021 [26]. In May 2024, the Pentagon awarded Palantir an initial $480 million, five-year contract for Maven, later increasing the contract ceiling by $795 million [27]. The program has been through multiple organizational homes — the Algorithmic Warfare Cross-Functional Team, the Joint Artificial Intelligence Center (established 2018), and finally the Chief Digital and Artificial Intelligence Office (CDAO, established 2022) [26].

The JEDI (Joint Enterprise Defense Infrastructure) cloud contract, originally a single-award $10 billion program, was scrapped in 2021 after years of legal challenges from Amazon, which alleged political interference by the Trump administration in awarding it to Microsoft. Its replacement, the Joint Warfighting Cloud Capability (JWCC), split the work among multiple vendors [28].

[Chart: Research publications on "military artificial intelligence autonomous weapons," by year. Source: OpenAlex. Data as of January 1, 2026.]

Academic research on military AI and autonomous weapons has surged, peaking at 2,930 papers in 2024 — a reflection of growing scholarly attention to the questions the Pentagon is now operationalizing at scale [29].
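
The underlying counts can be checked against OpenAlex's public API. The sketch below is an assumed reconstruction rather than the chart's actual query, which is not disclosed; it groups works matching the search phrase by publication year:

```python
import requests

# Hypothetical reconstruction: count works matching the chart's search
# phrase, grouped by publication year, via the public OpenAlex API.
resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "search": "military artificial intelligence autonomous weapons",
        "group_by": "publication_year",
    },
    timeout=30,
)
resp.raise_for_status()

# Each bucket carries a "key" (the year) and a "count" of matching works.
for bucket in sorted(resp.json()["group_by"], key=lambda b: b["key"]):
    print(bucket["key"], bucket["count"])
```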

The current initiative is structured differently from its predecessors in one key respect: rather than building bespoke systems through traditional defense procurement, it brings commercial off-the-shelf AI models directly onto military networks. Hegseth's memo mandates that "the latest and greatest AI models" be available to military users within 30 days of their public release [1]. This speed requirement is a direct response to the years-long procurement cycles that delayed earlier programs.

Whether this approach avoids the cost overruns and delivery failures of programs like JEDI and early Maven remains to be seen. The structural incentive is clear: commercial models are already built and tested. The structural risk is equally clear: commercial models were not built for warfare, and the safety testing they underwent was not designed for military application.

What Comes Next

The Pentagon's AI-first doctrine is no longer a planning document. With $13.4 billion allocated, eight companies on classified networks, over a million active users on GenAI.mil, and an autonomous warfare command being stood up, the infrastructure is being built in real time.

The unresolved questions are about governance. Who audits AI use on air-gapped networks? What happens when a model produces a recommendation in a kill chain — and it's wrong? How do "appropriate levels of human judgment" get defined when the entire doctrine is premised on speed as the decisive variable?

The Anthropic litigation will test whether the government can punish companies that insist on safety restrictions. The Google employee letter will test whether internal dissent changes corporate behavior or merely documents it. And the new classified agreements will test whether the safety cultures of Silicon Valley can survive contact with the operational demands of the U.S. military — or whether, once behind the air gap, those cultures cease to exist.

Sources (29)

[1] "Artificial Intelligence Strategy for the Department of War and the Institutionalization of an AI-First Military," fedcontractpros.com. Secretary of War Pete Hegseth issued a memorandum directing the Pentagon to become an "AI-first" war-fighting institution, with AI integrated from campaign planning to kill chain execution.

[2] "Top AI companies agree to work with Pentagon on secret data," washingtonpost.com. Seven leading AI companies reached deals to deploy their technology in classified Pentagon computer networks, with an eighth company, Oracle, added hours later.

[3] "Pentagon clears 8 tech firms to deploy their AI on its classified networks," breakingdefense.com. The AI systems will be integrated into the DOD's Impact Level 6 and Impact Level 7 networks to streamline data synthesis and augment warfighter decision-making.

[4] "Pentagon's $13.4B AI Budget: Every Dollar in 2026," labla.org. The FY2026 budget dedicates $13.4 billion to AI and autonomy, covering unmanned aerial vehicles, maritime autonomous systems, and AI software integration.

[5] "Pentagon Unveils $1.01T FY2026 Budget with Cyber, Space, AI Focus," meritalk.com. The FY2026 DoD budget request totals $1.01 trillion, a 13% increase over FY2025, including major investments in AI and a Defense Innovation Unit budget raised to $2 billion.

[6] "600+ Workers Protest as Google Signs $200 Million Secret Pentagon AI Warfare Deal," commondreams.org. Despite over 580 employee signatures, including 20+ directors and senior DeepMind researchers, Google signed a $200 million classified AI deal with the Pentagon.

[7] "Pentagon uses GenAI.mil to create 100K agents," defensescoop.com. Users have built more than 100,000 AI agents on GenAI.mil, which has 1.3 million active users out of 3 million with access, authorized at Impact Level 5.

[8] "Pentagon adds Google's latest model to GenAI.mil as usage soars," defenseone.com. GenAI.mil supports intelligence analysis, logistics planning, and administrative efficiency, with real-world examples of dramatic time savings across military functions.

[9] "Hegseth: Autonomous warfare sub-unified command coming soon," defensescoop.com. Secretary Hegseth announced at a House Armed Services Committee hearing that the Pentagon will shortly establish a sub-unified command for autonomous warfare.

[10] "SOUTHCOM to set up command focusing on autonomous warfare," stripes.com. U.S. Southern Command initiated the launch of a Southcom Autonomous Warfare Command to employ autonomous and unmanned platforms for increased lethality and domain awareness.

[11] "Pentagon strikes deals with 7 Big Tech companies after shunning Anthropic," cnn.com. Anthropic was excluded after refusing to allow unrestricted military use of Claude, insisting on guardrails against autonomous weapons and mass surveillance.

[12] "Pentagon Designates Anthropic a Supply Chain Risk — What Government Contractors Need to Know," mayerbrown.com. Anthropic became the first American company designated a supply chain risk, a classification normally reserved for foreign adversaries, after refusing to remove AI safety restrictions.

[13] "Anthropic sues Pentagon over rare 'supply chain risk' label," axios.com. Anthropic filed suit against the Pentagon's supply chain risk designation after the company refused to allow unrestricted military use of its Claude AI model.

[14] "Judge blocks Pentagon's effort to 'punish' Anthropic by labeling it a supply chain risk," cnn.com. A federal judge ruled that the designation ran "roughshod over Anthropic's constitutional rights" and temporarily blocked it.

[15] "Anthropic loses appeals court bid to temporarily block Pentagon blacklisting," cnbc.com. An appeals court reversed the lower court injunction, allowing the Pentagon's supply chain risk designation of Anthropic to stand while litigation continues.

[16] "Pentagon-Anthropic Dispute over Autonomous Weapon Systems: Potential Issues for Congress," congress.gov. Congressional Research Service report flagging the dispute as raising significant issues for Congress regarding government authority and AI company policy disagreements.

[17] "Google employees ask CEO Sundar Pichai to bar classified military AI work," washingtonpost.com. More than 580 Google employees signed a letter arguing that on classified networks, Google cannot monitor AI use, making "trust us" the only guardrail.

[18] "DoD Directive 3000.09: Autonomy in Weapon Systems," esd.whs.mil. The directive requires autonomous weapons to allow "appropriate levels of human judgment over the use of force," with flexible standards varying by context.

[19] "Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems," congress.gov. Section 1061 of the FY2026 NDAA requires congressional notification when waivers are issued under DoD Directive 3000.09 governing autonomous weapons.

[20] "Review of the 2023 US Policy on Autonomy in Weapons Systems," hrw.org. Human Rights Watch analysis finding that the directive's flexible language falls short of requiring "meaningful human control" over autonomous weapons systems.

[21] "The Pentagon/Anthropic Clash Over Military AI Guardrails," opiniojuris.org. Analysis of IHL obligations that many scholars argue implicitly require meaningful human control over target selection and engagement under the Geneva Conventions.

[22] "How the Pentagon Can Manage the Risks of AI Warfare," foreignpolicy.com. Analysis of how commercial AI companies may bring safety cultures and red-teaming practices that improve military AI safety compared with traditional defense contractors.

[23] "Whistleblowing with a Security Clearance," workplacefairness.org. Federal law protects contractor whistleblowers, but disclosures involving classified information must follow proper channels; loss of clearance often means loss of employment.

[24] "Militarizing AI: How to Catch the Digital Dragon?" cigionline.org. The US, China, and Russia are considered "tier one" military AI powers, with China pursuing a multi-domain precision warfare doctrine integrating AI across all military functions.

[25] "New Pentagon report on China's military notes Beijing's progress on LLMs," defensescoop.com. December 2025 Pentagon report finding that China's commercial AI sectors made significant progress on large language models, narrowing the gap with US models.

[26] "Project Maven," wikipedia.org. Project Maven was established in April 2017 with $70 million in initial funding, growing to $250 million by FY2021, applying computer vision to military surveillance.

[27] "'Growing demand' sparks DOD to raise Palantir's Maven contract to more than $1B," defensescoop.com. The Pentagon awarded Palantir a $480 million contract for Project Maven in 2024, later increasing the ceiling by $795 million due to growing demand.

[28] "DOD components face 'aggressive' timeline for Maven Smart System transition," defensescoop.com. Pentagon components face aggressive timelines for transitioning to the Maven Smart System, reflecting ongoing organizational challenges in defense AI programs.

[29] "OpenAlex: Research publications on military AI and autonomous weapons," openalex.org. Academic research on military AI and autonomous weapons peaked at 2,930 papers in 2024, with over 13,500 total papers published since 2011.