Inside the Pentagon's $13.4 Billion Bet on AI-First Warfare — and the Risks It's Not Talking About

On May 1, 2026, Secretary of Defense Pete Hegseth announced agreements with eight major technology companies — Amazon Web Services, Google, Microsoft, Nvidia, OpenAI, Oracle, Reflection, and SpaceX — to deploy artificial intelligence on the Pentagon's most sensitive classified networks [1]. "We will become an 'AI-first' warfighting force across all domains," Hegseth declared [2]. The deals authorize these companies to run their AI systems in Impact Level 6 and Impact Level 7 environments — the highest tiers of Defense Department cloud security architecture, where actual war-fighting decision support operates [3].

The announcement caps months of policy fights, corporate maneuvering, and a record budget request that together represent the most aggressive push to embed AI into military operations in U.S. history. Whether that push delivers on its promises — or repeats the Pentagon's long record of overpromising on technology transformations — depends on answers to questions the Defense Department has not yet fully addressed.

The Money: A 330% Budget Spike in One Year

The FY2026 defense budget request includes $13.4 billion specifically for artificial intelligence and autonomy, the first time the DoD has created a dedicated budget line for these technologies [4]. That figure is more than four times the $3.1 billion allocated in FY2025 and roughly twelve times the $1.1 billion spent in FY2022 [5].
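
As a quick sanity check on those multiples (a minimal sketch; the dollar figures come from the budget documents cited above, and the variable names are purely illustrative):

    # Back-of-the-envelope check of the budget growth figures.
    budgets = {"FY2022": 1.1, "FY2025": 3.1, "FY2026": 13.4}  # billions USD

    fy26_vs_fy25 = budgets["FY2026"] / budgets["FY2025"]  # ~4.3x ("more than fourfold")
    fy26_vs_fy22 = budgets["FY2026"] / budgets["FY2022"]  # ~12.2x ("roughly twelvefold")
    pct_spike = (fy26_vs_fy25 - 1) * 100                  # ~332%, the "330%" headline

    print(f"FY2026 is {fy26_vs_fy25:.1f}x FY2025 (a {pct_spike:.0f}% increase)")
    print(f"FY2026 is {fy26_vs_fy22:.1f}x FY2022")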

[Chart: U.S. Defense AI & Autonomy Budget, billions USD. Source: DoD budget materials / Breaking Defense. Data as of May 1, 2026.]

Of the $13.4 billion, $9.4 billion is earmarked for aerial autonomous systems alone. Additional allocations include $250 million for an AI "ecosystem," $250 million for Cyber Command AI initiatives, $200 million for AI and automation technology, and $124 million for AI testing capabilities [4]. These figures sit within a total defense budget request of $1.01 trillion — a 13% increase over FY2025 enacted levels, with $961.6 billion allocated to the DoD [5].

How Does U.S. Spending Compare?

Comparing AI-specific military spending across nations is difficult because neither China nor Russia publishes granular breakdowns equivalent to the Pentagon's new AI line item. What is known: China's total defense spending reached an estimated $336 billion in 2025, while Russia spent approximately $190 billion, with the Kremlin increasing defense R&D spending by 33% [6]. Both countries are investing in autonomous systems, but neither has publicly announced anything comparable to the Pentagon's dedicated $13.4 billion AI allocation.

[Chart: 2025 military spending, top three nations, billions USD. Source: SIPRI Military Expenditure Database. Data as of April 27, 2026.]

The Stockholm International Peace Research Institute's April 2026 report found global military spending surging to levels not seen in 16 years, with European and Asian expenditures climbing sharply [6]. Within this environment, the AI-first push makes the United States both the clear spending leader in military AI and a potential catalyst for an AI arms race.

What's Already Running — and What's Gone Wrong

The Pentagon's existing AI footprint extends beyond back-office tools. Over 1.3 million Defense Department personnel have used the unclassified GenAI.mil platform since its launch, generating tens of millions of prompts and deploying hundreds of thousands of AI agents in five months [3]. The platform, initially built on Google's Gemini model, handles tasks from intelligence sorting to simulation planning [7].

The most significant operational AI system is Project Maven, originally an experimental imagery analysis program that has evolved into permanent infrastructure. Palantir Technologies holds the Maven contract, which has expanded through multiple awards: a five-year, $480 million Army contract in 2024, a $100 million expansion later that year, and a 2025 modification valued at up to $795 million [8].

But the real-world record of AI-assisted military targeting carries warnings. In the U.S. conflict with Iran in early 2026, AI systems including Maven were used to rapidly process intelligence and identify targets. Reports indicate that algorithmic errors contributed to strikes on civilian sites, including an elementary school in Minab, Iran, that killed at least 175 people, most of them children [9][10].

The Gaza Precedent

The most extensively documented case of AI-assisted military targeting comes from Israel's operations in Gaza. The Israel Defense Forces used two AI systems — "Gospel," which identifies buildings, and "Lavender," which identifies individuals and places them on kill lists [11]. Lavender at one point listed as many as 37,000 Palestinian men as targets linked to Hamas or Palestinian Islamic Jihad.

According to an investigation by +972 Magazine, an Israeli intelligence officer described the human review process as "not feasible at all," with another source saying they spent roughly 20 seconds per target — serving as a "rubber stamp" for AI-generated recommendations [11]. The IDF reportedly authorized killing up to 15-20 civilians per junior Hamas operative and over 100 civilians per senior commander [11]. The system's developers acknowledged a 10% error rate — meaning roughly one in ten people flagged for killing were not legitimate military targets [11]. Applied to a 37,000-name list, that error rate implies roughly 3,700 people wrongly flagged.

These cases form the most concrete evidence available on what happens when AI systems are integrated into targeting decisions at scale.

The Legal Framework: 'Meaningful Human Control' Under Strain

DoD Directive 3000.09, first issued in 2012 and updated in January 2023, governs autonomy in weapon systems. It requires that weapons be "designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force" [12].

Legal scholars and human rights organizations have identified several weaknesses in the directive. The Harvard Law School International Human Rights Clinic found that the 2023 update replaced the word "shall" (indicating legal obligation) with "will" (indicating intent) — a subtle but significant weakening of enforcement language [13]. The directive does not define what constitutes an "appropriate level" of human judgment, and the term "control" was removed from the updated version entirely [12].

A 2019 Congressional Research Service report found that "no weapon system is known to have gone through the senior level review process" required by the directive [14]. U.S. policy does not prohibit lethal autonomous weapon systems; it only requires a senior-level review before development, involving the Under Secretary of Defense for Policy, the Vice Chairman of the Joint Chiefs of Staff, and the Under Secretary of Defense for Research and Engineering [14].

The tension between human oversight and AI speed is structural. As the ICRC's Law and Policy Blog noted, "the speed of AI decision-making compresses the window for human intervention, while the opacity of algorithms complicates post hoc accountability" [15]. If the purpose of AI-first warfare is to outpace adversaries, meaningful human review becomes a bottleneck — one that operational pressures will push commanders to eliminate.

The Anthropic Dispute: A Window Into What 'Lawful Use' Means

The most revealing conflict in the AI-first push involved Anthropic, the AI company conspicuously absent from the Pentagon's eight-company agreement. The dispute centered on contract terms: the Pentagon, under Hegseth and Pentagon CTO Emil Michael, demanded unrestricted access to Anthropic's technology for "all lawful purposes." Anthropic insisted on specific contractual prohibitions against use in mass surveillance of American citizens and autonomous weapon systems [16].

When negotiations broke down on February 27, Hegseth designated Anthropic a "supply chain risk" — a classification normally reserved for foreign adversaries like Huawei [3]. President Trump ordered a six-month phaseout of all Anthropic products across the federal government [16].

Hours after the ban, OpenAI announced its own Pentagon deal. OpenAI stated three red lines: no mass domestic surveillance, no directing autonomous weapons systems, and compliance with applicable law [17]. The Electronic Frontier Foundation criticized these as "weasel words," arguing that the language provided no enforcement mechanism beyond existing legal requirements [18]. MIT Technology Review reported that the distinction was significant: Anthropic sought contractual prohibitions enforceable in court, while OpenAI cited existing law — which the company has no independent power to enforce [19].

A federal judge in California subsequently blocked the government's supply-chain-risk designation against Anthropic, and the White House reportedly reopened discussions with the company in recent weeks [16].

Who Profits: Contractors and Conflicts of Interest

The AI-first transition is reshaping the defense industrial base. Venture capital-backed firms like Palantir, Anduril, and Scale AI are increasingly displacing traditional "Big Five" contractors (Lockheed Martin, Boeing, RTX, General Dynamics, Northrop Grumman) for AI-related work [20].

Palantir alone holds Maven contracts worth over $1.3 billion. Anduril, founded by Palmer Luckey (creator of the Oculus VR headset), has emerged as a leading supplier of autonomous systems. The two companies, along with others, formed a defense-tech consortium in late 2024 to jointly bid on Pentagon contracts [21].

Conflicts of interest span personnel, policy, and investment. Founders Fund partner Trae Stephens, a former Palantir employee, chairs Anduril's board. Former Director of National Intelligence Avril Haines was a highly paid Palantir consultant before overseeing intelligence agencies that contracted with the company [20]. Reports have indicated that Palantir CTO Shyam Sankar has been considered for senior Pentagon R&D roles [20], a move that would make a vendor's executive a buyer's decision-maker.

The Responsible Statecraft project has raised concerns about whether this consolidation is replacing one monopolistic defense industrial base with another — trading Boeing's dominance for Palantir's, with the added complication that software companies' products are harder for Congress to audit than physical weapons systems [22].

Workforce Displacement: Who Loses Their Job?

The Pentagon has not published a comprehensive projection of how many uniformed military jobs will be automated, restructured, or eliminated under AI-first doctrine. What exists is piecemeal.

The Army created a new Military Occupational Specialty (49B) for AI and machine learning specialists in late 2025, signaling institutional acknowledgment that AI requires dedicated career fields [23]. The Air Force Academy has adapted its curriculum as AI shrinks entry-level jobs across the force [24]. The Pentagon's Cyber Registered Apprenticeship Program (Cyber RAP) offers a 12-month paid apprenticeship for cybersecurity roles [25].

But the scale of potential displacement dwarfs these programs. Routine administrative, data-entry, and middle-skill technical tasks are most exposed to automation, while roles requiring complex judgment and interpersonal skills are more resilient [24]. A Federal News Network analysis noted that "even in human-machine teaming experiments, the Air Force message centered on combining operator judgment with AI speed, not replacing people" [26] — but that framing sidesteps whether aggregate personnel requirements will shrink as AI handles tasks that once required dozens of analysts, logisticians, or intelligence specialists.

No funded separation or retraining program at scale has been announced for the hundreds of thousands of service members in potentially affected roles.

Cybersecurity: The Vulnerabilities Nobody Wants to Discuss

The Pentagon's own leadership has acknowledged that AI systems face unique security risks. The Record reported that the Defense Department "is increasingly dependent on privately-developed software systems that were not originally designed for military use — raising concerns about vulnerabilities, supply chain risks and the potential for adversaries to exploit them" [27].

The known attack surface includes:

  • Data poisoning: Adversaries subtly alter training data to implant backdoors that can be exploited later. These alterations can be undetectable during normal quality assurance [27].
  • Adversarial inputs: Small, carefully crafted perturbations to sensor data — images, signals, radar returns — can cause AI systems to misclassify targets. A system that correctly identifies a tank 99% of the time can be fooled by modifications invisible to human operators (see the sketch after this list).
  • Model inversion and extraction: Attackers can reverse-engineer a model's training data or replicate its decision-making by observing its outputs, potentially exposing classified intelligence sources.
  • Prompt injection: For large language model-based tools like GenAI.mil, adversaries could manipulate inputs to extract sensitive information or alter system behavior.
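
To make the adversarial-input risk concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) run against an off-the-shelf image classifier. The ResNet-18 stand-in model, the random placeholder image, and the perturbation budget eps are illustrative assumptions with no relation to any deployed military system; the point is only that a change too small for a human to notice can move a classifier's decision.

    # Minimal FGSM sketch (illustrative; requires torch and torchvision).
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1").eval()  # stand-in classifier

    def fgsm_attack(image: torch.Tensor, label: torch.Tensor,
                    eps: float = 0.01) -> torch.Tensor:
        """Nudge each pixel by at most eps in the direction that raises the loss."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        return (image + eps * image.grad.sign()).detach()

    x = torch.rand(1, 3, 224, 224)     # placeholder "sensor" image
    y = model(x).argmax(dim=1)         # model's own prediction, used as the label
    x_adv = fgsm_attack(x, y)
    print("before:", y.item(), "after:", model(x_adv).argmax(dim=1).item())

On real imagery, perturbations of this scale routinely flip classifications while remaining imperceptible to human reviewers, which is why undisclosed red-team findings matter for oversight.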

The Georgetown Center for Security and Emerging Technology has documented these categories of risk and noted that red-team findings on military AI systems have not been publicly disclosed [28]. Foreign Policy's analysis argued that the Pentagon faces a dilemma: public disclosure of red-team findings would help adversaries, but withholding them prevents Congressional oversight and independent security review [29].

The Track Record: Why Previous Pentagon Tech Revolutions Fell Short

The AI-first declaration is not the Pentagon's first promise of technological transformation. In the 1990s, the "Revolution in Military Affairs" (RMA) promised that information technology and precision weapons would make U.S. forces overwhelmingly dominant. In the 2000s, "network-centric warfare" — coined by Vice Admiral Arthur Cebrowski and John Garstka in 1998 — promised that networked forces would achieve decisive advantage through information superiority [30].

Both fell short of their stated ambitions. The RMA thesis was undermined by the U.S. military's inability to defeat insurgencies in Iraq and Afghanistan — asymmetric conflicts where technological superiority proved insufficient [30]. Network-centric warfare produced expensive systems that often failed to interoperate, and critics argued the doctrine had ignored how adversaries would adapt to networked forces [30].

The pattern is consistent: the Pentagon announces a technology-driven transformation, industry mobilizes to capture contracts, budgets spike, and the resulting systems deliver incremental improvements rather than the promised overhaul. Whether AI breaks this cycle depends on factors that remain unproven: whether current AI systems are reliable enough for high-stakes military decisions, whether the Pentagon can secure them against sophisticated adversaries, and whether Congress will demand measurable benchmarks before approving continued funding.

What Congress Should Demand

Several concrete oversight measures would distinguish this initiative from its predecessors:

  1. Published error rates for every AI system used in targeting or decision support, updated quarterly.
  2. Independent red-team reports submitted to the Armed Services and Intelligence committees, with declassified summaries released publicly.
  3. Workforce impact assessments by Military Occupational Specialty, branch, and rank, with funded retraining programs scaled to projected displacement.
  4. Contractual audit rights over AI systems deployed on classified networks, including source code review and algorithmic bias testing.
  5. Annual progress benchmarks tied to continued funding — specific, measurable criteria for what "AI-first" means in operational terms, not just procurement milestones.

Without these safeguards, the $13.4 billion AI-first push risks becoming the latest in a series of Pentagon technology bets that enrich contractors, consume budgets, and deliver far less than advertised — this time with the added stakes of autonomous systems making life-and-death decisions at machine speed.

Sources (30)

  1. Pentagon announces deal with seven AI companies for classified systems (aljazeera.com). The Pentagon signed agreements with eight major AI companies to deploy technology on classified military networks as part of its AI-first strategy.

  2. Understanding the Pentagon's push to become an 'AI-first fighting force' (washingtonexaminer.com). Secretary of Defense Hegseth declared the U.S. will become an 'AI-first' warfighting force across all domains.

  3. Pentagon strikes deals with 8 Big Tech companies after shunning Anthropic (cnn.com). Eight tech companies signed classified-network AI deals with the Pentagon; Anthropic was excluded after refusing unrestricted use terms.

  4. Pentagon Seeks $13.4B for AI and Autonomy FY 2026 Budget Request (cdomagazine.tech). The Pentagon's FY2026 budget includes a record $13.4 billion for AI and autonomy, with $9.4 billion for aerial autonomous systems.

  5. Pentagon formally unveils $961.6 billion budget for 2026 (breakingdefense.com). The FY2026 defense budget totals $1.01 trillion including reconciliation funding, a 13% increase over FY2025.

  6. Global military spending rise continues as European and Asian expenditures surge (sipri.org). SIPRI reports global military spending has risen to levels not seen in 16 years, with China at $336B and Russia at $190B in 2025.

  7. U.S. military to use Google Gemini for new AI platform (axios.com). The Pentagon launched GenAI.mil built on Google Gemini, now used by over 1.3 million DoD personnel.

  8. Pentagon Expands Use of Palantir AI in New Defense Contract (military.com). Palantir's Maven contract expanded to over $1.3 billion through multiple awards, including a $795 million 2025 modification.

  9. US Military Uses Palantir AI System in Iran War, Leading to Civilian Casualties (oecd.ai). AI systems including Project Maven were used in the Iran conflict; algorithmic errors contributed to strikes on civilian sites.

  10. AI Targeting Needs a Human Eye (foreignpolicy.com). Airstrikes hit an elementary school in Minab, Iran, killing at least 175 people, most of them children, with AI-assisted targeting implicated.

  11. 'Lavender': The AI machine directing Israel's bombing spree in Gaza (972mag.com). Investigation reveals Israel's Lavender AI listed 37,000 Palestinians for targeting with minimal human oversight and a 10% acknowledged error rate.

  12. Department of Defense Directive 3000.09 (wikipedia.org). DoD Directive 3000.09 governs autonomy in weapon systems, requiring 'appropriate levels of human judgment' without defining what that means.

  13. Review of the 2023 US Policy on Autonomy in Weapons Systems (humanrightsclinic.law.harvard.edu). Harvard Law School IHRC found the 2023 directive weakened language from 'shall' to 'will' and removed the term 'control' entirely.

  14. Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems (congress.gov). CRS found no weapon system is known to have gone through the senior-level review required by DoD Directive 3000.09.

  15. The risks and inefficacies of AI systems in military targeting support (icrc.org). ICRC analysis warns AI decision-making compresses windows for human intervention while algorithmic opacity complicates accountability.

  16. OpenAI announces Pentagon deal after Trump bans Anthropic (npr.org). Trump banned Anthropic from government use after the company refused unrestricted military access; OpenAI announced its own deal hours later.

  17. Our agreement with the Department of War (openai.com). OpenAI stated three red lines for its Pentagon deal: no mass domestic surveillance, no directing autonomous weapons, and compliance with law.

  18. Weasel Words: OpenAI's Pentagon Deal Won't Stop AI-Powered Surveillance (eff.org). EFF criticized OpenAI's Pentagon agreement as lacking enforceable safeguards beyond existing legal requirements.

  19. OpenAI's 'compromise' with the Pentagon is what Anthropic feared (technologyreview.com). MIT Technology Review reported Anthropic sought contractual prohibitions enforceable in court, while OpenAI relied on existing legal requirements.

  20. New monopoly? Inside VC tech's overthrow of the primes (responsiblestatecraft.org). Analysis warns VC-backed defense tech firms like Palantir and Anduril may replace traditional defense monopolies with new ones harder for Congress to audit.

  21. Defense tech firms establish AI-focused consortium (defensenews.com). Palantir, Anduril, and other defense-tech firms formed a consortium to jointly bid on Pentagon AI contracts.

  22. Inside VC tech's overthrow of the primes (responsiblestatecraft.org). Conflicts of interest span Palantir, Anduril board members, and former intelligence officials who consulted for the same firms they later oversaw.

  23. Army creates AI career field, pathway for officers to join (defensescoop.com). The Army established MOS 49B for AI and machine learning specialists as part of its data-focused transformation.

  24. AI Is Shrinking Entry-Level Jobs as 'New Ivies' Adapt, Including the Air Force Academy (military.com). AI is reshaping military hiring and shrinking entry-level jobs; the Air Force Academy is adapting its curriculum in response.

  25. Pentagon Launches Cyber Apprenticeship Program (executivegov.com). The Cyber RAP program offers 12-month paid apprenticeships combining instruction, lab work, and mentorship for cybersecurity roles.

  26. How AI could change front-line military jobs (federalnewsnetwork.com). Analysis notes the military message centers on combining human judgment with AI speed, not replacing people, but aggregate staffing may shrink.

  27. Pentagon grapples with securing AI as it moves toward autonomous warfare (therecord.media). The Pentagon faces risks from data poisoning, adversarial inputs, and privately developed AI systems not originally designed for military use.

  28. What Does AI Red-Teaming Actually Mean? (cset.georgetown.edu). Georgetown CSET documents categories of AI security risk, including adversarial attacks and data poisoning in military contexts.

  29. How the Pentagon Can Manage the Risks of AI Warfare (foreignpolicy.com). Analysis argues the Pentagon faces a dilemma between disclosing AI red-team findings and protecting them from adversaries.

  30. Revolution in military affairs (wikipedia.org). The 1990s RMA and 2000s network-centric warfare promises fell short when confronted with asymmetric conflicts in Iraq and Afghanistan.