The White House Wants Congress to Write America's AI Rulebook — But the Framework Is More Blueprint Than Law

On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, a document that aims to define how the federal government — and by extension, Congress — should approach regulating a technology that is reshaping the U.S. economy at extraordinary speed [1]. The framework arrives at a moment when more than a dozen states have already passed their own AI laws, the European Union's AI Act is entering full enforcement, and China is deploying AI across its industrial base with billions in state-directed investment [2].

The document is not a regulation. It carries no legal force on its own. It is a set of legislative recommendations — a wish list from the executive branch to Capitol Hill — and its central message is clear: regulate lightly, regulate federally, and do not let the states go first [3].

What the Framework Actually Says

The framework outlines seven policy areas the administration wants Congress to address [1][4]:

Child safety tops the list. The framework calls for age-assurance requirements on AI platforms accessible to minors, limits on data collection for targeted advertising to children, and mandatory parental control tools. States would retain authority to enforce existing child-protection laws, including prohibitions on AI-generated child sexual abuse material [4].

Community infrastructure addresses the energy demands of AI data centers. The administration wants Congress to streamline federal permitting so data centers can generate power on-site and explicitly states that ratepayers should not bear the cost of AI infrastructure buildouts [4].

Intellectual property is where the framework takes its most contentious position. The administration asserts that training AI models on copyrighted material "does not violate copyright laws" and defers further resolution to the courts [4]. It supports voluntary licensing frameworks that would allow creators to negotiate compensation collectively without antitrust liability [1].

Free speech provisions prohibit government coercion of AI platforms to moderate content based on "partisan or ideological reasons" and establish redress mechanisms for individuals who believe federal agencies influenced their expression [4].

Competitiveness measures include regulatory sandboxes for AI experimentation and expanded access to federal datasets for AI training. The framework explicitly avoids creating any new federal AI regulatory body, instead relying on existing sector-specific agencies like the FTC, FCC, and FDA [1][4].

Workforce development calls for integrating AI literacy into education programs and expanding research on labor market impacts through land-grant universities [4].

Federal preemption is the framework's backbone. Congress should, the administration argues, establish a single national standard that preempts state AI laws imposing "undue burdens" on developers and deployers. States would be barred from regulating AI model development directly or holding developers liable for unlawful third-party uses of their systems [3][4].

No Enforcement, No Prohibitions, No Budget

Unlike the EU AI Act, which categorizes AI systems into risk tiers and outright bans certain applications — real-time biometric surveillance in public spaces, social scoring, manipulative AI targeting vulnerable populations — the White House framework contains no list of prohibited AI uses [5][6]. It designates no high-risk applications. It establishes no penalties for non-compliant companies.

The framework also allocates no funding. It is a set of recommendations, not a budget document. This stands in contrast to both the EU, which has committed resources to its European AI Office for enforcement, and to the administration's own broader R&D record. In fiscal year 2025, the federal government invested roughly $3.3 billion in non-defense AI research and development — far below the National Security Commission on AI's recommendation to reach $32 billion by FY2026 [7]. NIST received $55 million specifically for AI standards and safety research, a modest sum given the scope of the challenge [8].

[Chart: U.S. Unemployment Rate, 2024–2026. Source: FRED / Bureau of Labor Statistics; data as of March 25, 2026.]

The enforcement mechanisms that do exist operate through a December 2025 executive order, not through the framework itself. That order directed the Attorney General to establish an AI litigation task force to challenge state AI laws in court, instructed the FTC to issue guidance on when state laws altering "truthful output of AI models" are preempted by federal law, and conditioned certain broadband infrastructure funding on states avoiding "onerous" AI regulations [9].

The Preemption Battle

The framework's emphasis on federal preemption responds to a genuine problem: regulatory fragmentation. By early 2026, states including California, Colorado, Illinois, and Texas had enacted AI-specific laws covering areas from automated employment decisions to algorithmic transparency [10]. Companies operating nationally face a patchwork of overlapping and sometimes conflicting requirements.

But the preemption push has drawn sharp criticism. Americans for Responsible Innovation argued the framework "shields developers from liability," warning that it would simultaneously override state laws and foreclose "open-ended" liability for harms to children [11]. The law firm Ropes & Gray noted that the executive order's preemption authority faces significant legal questions, particularly whether the administration can override state consumer protection laws without congressional action [10].

The constitutional question is real. Executive orders cannot preempt state law in the way federal statutes can. The framework acknowledges this implicitly by framing its recommendations as guidance for Congress rather than self-executing directives. But passing comprehensive AI legislation through both chambers remains a steep climb, particularly in a midterm election year [12].

Congress: Willing but Divided

House Republican leaders endorsed the framework immediately. Speaker Mike Johnson and Representatives Steve Scalise, Brett Guthrie, and Jim Jordan pledged to work "across the aisle" to enact legislation [11]. Senator Marsha Blackburn called the framework a "roadmap" and said she welcomed the administration to the discussion [12].

Several AI bills are already in the legislative pipeline. The AI PLAN Act addresses planning and logistics requirements. The CREATE AI Act focuses on research infrastructure. The Small AI Innovators Empowerment Act would support smaller AI businesses. And AI provisions are being folded into the must-pass defense authorization bill for fiscal year 2026 [12].

But bipartisan agreement on the substance remains elusive. Democrats have pushed for stronger consumer protections, mandatory algorithmic audits, and explicit prohibitions on discriminatory AI uses — provisions largely absent from the White House framework [12]. And significant blocs in both parties, aligned with creative industries, oppose the administration's assertion that training AI on copyrighted material is non-infringing [4].

How It Compares to the EU AI Act

The contrast with Europe's approach is stark. The EU AI Act, which entered into force in 2024 with full implementation rolling out through 2026, operates on a tiered risk classification system [5][6]:

  • Unacceptable risk AI is banned outright: social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions), and AI that manipulates human behavior to circumvent free will [5].
  • High-risk AI — including systems used in hiring, credit scoring, law enforcement, and critical infrastructure — must undergo conformity assessments, maintain detailed technical documentation, and submit to human oversight requirements [5][6].
  • Limited risk systems face transparency obligations: chatbots must disclose they are AI, and deepfakes must be labeled [5].
  • Minimal risk AI, such as spam filters and video game AI, faces no additional regulation [5].

Penalties under the EU AI Act reach up to €35 million or 7% of global annual turnover, whichever is higher, for deploying banned AI systems [5].

The White House framework adopts none of this architecture. It proposes no risk tiers, no conformity assessments, no mandatory transparency requirements for general-purpose AI, and no penalties. Where the EU framework is prescriptive, the American framework is permissive. Where the EU creates new institutional capacity (the European AI Office), the American framework explicitly avoids it [4][6].

This divergence creates compliance challenges for companies operating in both markets. Firms subject to the EU AI Act will need to maintain those compliance systems regardless of U.S. rules, but the absence of aligned U.S. requirements means American-only companies face a fundamentally different regulatory environment [6].

The Competitiveness Argument

The administration's central economic argument is that light-touch regulation preserves American AI leadership against China. There is data to support the premise that the U.S. leads: American companies dominate in private AI investment ($67.2 billion vs. China's $43.8 billion), the U.S. has 2.4 times more private AI funding, and the best U.S. models outperform Chinese competitors by roughly 20% on software engineering benchmarks [13][14]. Chinese generative AI models are estimated to lag U.S. counterparts by three to six months [14].

[Chart: U.S. vs. China: AI Investment and Research Output. Source: Morgan Stanley / Recorded Future; data as of March 25, 2026.]

But the relationship between regulation and competitiveness is more complex than the framework suggests. Five U.S. companies alone — Meta, Alphabet, Microsoft, Amazon, and Oracle — are expected to spend more than $450 billion in aggregate AI capital expenditures in 2026 [13]. This level of private investment dwarfs any regulatory cost the framework might impose or prevent. Critics argue the real risk to American competitiveness is not overregulation but underinvestment in public AI research: the federal government's $3.3 billion in non-defense AI R&D amounts to less than 1% of what those five companies alone plan to spend in a single year [7].

China's approach differs structurally. Beijing has implemented binding regulations on algorithmic recommendation systems, deepfakes, and generative AI, while simultaneously directing state investment into targeted industrial applications [5]. China produces 41,200 AI research papers annually to America's 28,400, suggesting regulation has not impeded its research output [14].

Workers Left Waiting

The framework's workforce provisions are its thinnest section. It calls for integrating AI into education and expanding research on labor market impacts, but it establishes no retraining programs, no job transition funding, and no requirements that companies provide advance notice before AI-driven layoffs [4].

This gap exists against a backdrop of significant projected displacement. The International Monetary Fund estimates roughly 40% of jobs globally face meaningful AI exposure, with that share rising to 60% in high-income countries [15]. McKinsey projects between 75 million and 375 million workers worldwide may need to switch occupational categories by 2030 [16]. Sectors most vulnerable include back-office processing, paralegal work, accounting, mortgage origination, and data entry [16].

State-level efforts have moved further. Minnesota is considering legislation that would require employers with 50 or more full-time workers to provide 90 days' notice before AI-driven layoffs, along with continued employment or equivalent wages during a transition period and employer-funded retraining [17]. The federal Workforce Innovation and Opportunity Act provides some training vouchers for displaced workers, but the program was designed for manufacturing-era dislocation, not AI-scale disruption [18].

The unemployment rate has ticked upward modestly, from 3.9% in early 2024 to 4.4% as of February 2026 [19], though attributing that movement specifically to AI adoption is difficult given broader economic forces.

Can Voluntary Commitments Hold?

The framework's reliance on existing regulators and industry self-governance raises questions about enforcement credibility. In 2025, twelve major AI companies published or updated Frontier AI Safety Frameworks — voluntary documents describing how they plan to manage risks as models become more capable [20]. But these commitments have no binding force, and critics point to instances where companies released capabilities that appeared to exceed their own stated safety thresholds [20].

Some enforcement action has occurred through existing authorities. The FTC's "Operation AI Comply" targeted deceptive AI marketing practices. Italy's data protection authority fined OpenAI €15 million for GDPR violations in training data processing [20]. These cases demonstrate that existing laws can reach AI harms — but they rely on agencies applying statutes written decades before large language models existed.

The International AI Safety Report, published in 2026, noted that "most risk-management initiatives remain voluntary" and that only a few jurisdictions have begun to formalize safety practices as legal requirements [20]. Third-party auditing of frontier AI systems remains nascent: researchers have proposed frameworks for independent evaluation, but no standardized auditing regime exists in the United States [20].

What Comes Next

The framework's fate depends on Congress. The administration has signaled it wants legislation "this year," but comprehensive AI regulation has never passed either chamber, and the 2026 midterm elections compress the legislative calendar further [1][12].

The most likely path for near-term federal AI policy runs through the defense authorization bill and agency rulemaking. The FCC has been directed to consider federal reporting standards for AI models within 90 days of the December 2025 executive order [9]. The FTC retains authority to bring enforcement actions under Section 5 of the FTC Act. And the executive order's AI litigation task force can challenge state laws in federal court [9].

But for the sweeping legislative vision the framework describes — a unified national standard preempting state law, with sector-specific guardrails for child safety and intellectual property — the pathway requires 60 Senate votes and a House majority willing to agree on specifics that have eluded consensus for years.

The framework is, in the end, a statement of principles from an administration that wants AI governed lightly and governed federally. Whether those principles become law, and whether they prove adequate to the scale of the technology they address, are questions that the next session of Congress — and the next few years of AI deployment — will answer.

Sources (20)

  [1] President Donald J. Trump Unveils National AI Legislative Framework (whitehouse.gov). The White House released a National Policy Framework for AI on March 20, 2026, outlining seven policy areas to guide Congress in developing unified federal AI legislation.

  [2] Where AI Regulation is Heading in 2026: A Global Outlook (onetrust.com). Overview of global AI regulatory approaches in 2026, comparing US, EU, and Chinese frameworks and their divergent strategies.

  [3] White House rolls out national legislative AI framework that looks to trump state level rules (techradar.com). The framework emphasizes federal preemption of state AI laws, arguing that uniform national standards are necessary to prevent regulatory fragmentation.

  [4] Trump Administration Releases National Policy Framework on Artificial Intelligence (sullcrom.com). Legal analysis of the framework's seven policy areas, including child safety, IP rights, workforce development, competitiveness, and federal preemption provisions.

  [5] EU AI Act | Shaping Europe's Digital Future (ec.europa.eu). The EU AI Act establishes a risk-based regulatory framework with prohibited practices, high-risk system requirements, and penalties up to €35 million or 7% of global turnover.

  [6] Comparing the EU AI Act to Proposed AI-Related Legislation in the US (businesslawreview.uchicago.edu). Comparative analysis showing the EU adopts strict, binding rules prioritizing fundamental rights while the US favors market forces and sector-specific regulation.

  [7] Innovation Lightbulb: Federal R&D Funding Matters for U.S. AI Leadership (csis.org). Federal non-defense AI R&D investment was roughly $3.3 billion in FY2025, far below the National Security Commission on AI's recommendation of $32 billion by FY2026.

  [8] NIST Secures $55 Million for AI Standards and Safety Research (grantedai.com). NIST received a 21% budget increase to $1.847 billion, including $55 million specifically for AI standards and safety research in FY2026.

  [9] Trump Administration Issues Executive Order on Federal AI Policy Framework and State Law Pre-emption (williamfry.com). The December 2025 executive order established an AI litigation task force, FTC preemption guidance, FCC reporting standards, and conditions on broadband funding for states.

  [10] Examining the Landscape and Limitations of the Federal Push to Override State AI Regulation (ropesgray.com). Analysis of constitutional and legal questions surrounding federal preemption of state AI laws, including limits of executive authority without congressional legislation.

  [11] White House releases regulatory vision for AI (nextgov.com). Industry groups BSA and NetChoice endorsed the framework; Americans for Responsible Innovation criticized it for shielding developers from liability.

  [12] White House urges Congress to take a light touch on AI regulations in new legislative blueprint (boston.com). House Republican leaders endorsed the framework; passing comprehensive AI legislation faces challenges in a midterm election year with deep partisan divisions.

  [13] Investing in U.S. vs China AI (morganstanley.com). Five U.S. companies are expected to spend over $450 billion in AI capex in 2026; the US dominates with $67.2B in AI investment vs China's $43.8B.

  [14] US-China AI Gap: Analysis of Model Performance, Investment, and Innovation (recordedfuture.com). Best US AI models outperform China's best by 20% on engineering benchmarks; Chinese models lag by 3-6 months. China produces 41,200 AI papers annually vs 28,400 for the US.

  [15] AI Will Transform the Global Economy (imf.org). The IMF estimates 40% of global jobs face meaningful AI exposure, rising to 60% in high-income countries.

  [16] Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages (mckinsey.com). McKinsey projects 75-375 million workers worldwide may need to switch occupational categories by 2030, with back-office processing and data entry most vulnerable.

  [17] Minnesota bill would give AI-displaced workers 90 days notice, paid retraining (hrlawcanada.com). Minnesota proposed legislation requiring employers with 50+ workers to provide 90 days' notice before AI-driven layoffs plus employer-funded retraining.

  [18] AI labor displacement and the limits of worker retraining (brookings.edu). Analysis of retraining program limitations, noting the federal Workforce Innovation and Opportunity Act was designed for manufacturing-era dislocation.

  [19] Unemployment Rate - FRED (fred.stlouisfed.org). U.S. unemployment rate rose from 3.9% in early 2024 to 4.4% in February 2026.

  [20] 2026 International AI Safety Report: Executive Summary (internationalaisafetyreport.org). Most AI risk-management initiatives remain voluntary; 12 companies published Frontier AI Safety Frameworks in 2025, but enforcement and third-party auditing remain nascent.