The Battle Over Who Regulates AI: Inside the Trump Administration's Push to Override State Laws

On March 20, 2026, the White House released a four-page national AI policy framework calling on Congress to establish uniform federal standards for artificial intelligence — and to preempt the growing patchwork of state laws that the administration argues is holding back American competitiveness [1]. The framework, paired with a 291-page draft bill from Senator Marsha Blackburn and backed by an unprecedented lobbying campaign from the technology industry, has ignited a constitutional fight over who gets to set the rules for a technology that is already reshaping the American workforce.

At stake are protections for hundreds of millions of Americans currently covered by state AI laws, the enforcement authority of 36 state attorneys general who have pledged to oppose federal preemption, and the question of whether a "minimally burdensome" federal standard can adequately address the risks of algorithmic bias, privacy violations, and job displacement that states have been racing to regulate [2][3].

What the Framework Actually Says

The White House framework is organized around seven pillars: protecting children, safeguarding communities, respecting intellectual property, preventing censorship, enabling innovation, workforce development, and — most consequentially — "establishing a federal policy framework preempting cumbersome state laws" [1].

On child safety, the framework calls for parental controls over privacy, content, and screen time, proposing "commercially reasonable, privacy protective, age assurance requirements" rather than strict age verification mandates [4]. Critics have characterized this as shifting the burden of online child safety from technology companies to parents [5].

The framework explicitly opposes creating a new AI regulatory agency, instead directing existing bodies like the Federal Trade Commission and the Department of Energy to oversee compliance [4]. It advocates for regulatory "sandboxes" — exemption programs lasting up to 10 years — that would allow companies to operate outside certain federal rules while testing AI systems [4]. The White House Office of Science and Technology Policy would oversee these exemptions.

On liability, the framework advises Congress to avoid "ambiguous standards about permissible content, or open-ended liability" for AI companies [4]. It opposes holding AI developers responsible for third-party misuse of their products — a position that aligns closely with what the technology industry has lobbied for [6].

The Blackburn Bill: 291 Pages of Federal Authority

Two days before the White House framework dropped, Senator Marsha Blackburn introduced the TRUMP AMERICA AI Act — formally, "The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act" — a 291-page discussion draft that would translate the administration's vision into legislation [7][8].

The bill creates several enforcement mechanisms absent from the executive framework. It establishes a "duty of care" requiring AI developers to "prevent and mitigate foreseeable harm to users," mandates third-party bias audits for high-risk AI systems used in employment, healthcare, education, and law enforcement, and authorizes enforcement by both the FTC and state attorneys general [7][9].

In a move that surprised observers on both sides, the bill would sunset Section 230 of the Communications Decency Act, the legal shield that has protected internet platforms from liability for user-generated content since 1996. It also creates a private right of action allowing individuals to sue companies for using personal or copyrighted data in AI training without consent [7][10].

However, the bill does not include specific budget allocations or staffing levels for the federal agencies that would take on new oversight responsibilities — a gap that consumer advocates have flagged as a sign that enforcement may be underfunded [9].

The State Laws at Risk

The federal preemption push arrives at a moment of intense state-level activity. In 2025 alone, state legislatures introduced more than 1,200 AI-related bills, and 27 states enacted 73 new AI laws, according to the Transparency Coalition [11]. As of March 2026, lawmakers in 45 states have introduced another 1,561 AI-related bills [12].

Several state laws that would be directly affected stand out:

Colorado's AI Act, the first comprehensive state law addressing algorithmic discrimination, requires deployers of high-risk AI systems to use "reasonable care" to protect consumers from discriminatory outcomes in hiring, banking, and education. Violations constitute unfair trade practices punishable by fines of up to $20,000 each. Its implementation was delayed to June 30, 2026 — but the federal framework could render it moot before it fully takes effect [13][14].

California enacted 13 AI-related laws in 2025, the most of any state. These include Senate Bill 53, which requires frontier AI developers with annual revenues exceeding $500 million to publish frameworks for managing catastrophic risks, and Assembly Bill 2013, which mandates disclosure of training datasets. California also restricted police use of AI for incident reports and prevented defendants from claiming AI acted autonomously to cause harm [15][16].

Illinois' Biometric Information Privacy Act (BIPA), while predating the current AI wave, has become a key tool for regulating AI-powered facial recognition and biometric data collection. It remains the most protective biometric privacy law in the country, with a private right of action and penalties up to $5,000 per violation [17].

The federal framework would preempt state regulation of frontier AI development and "largely" preempt state regulation of digital replicas, while preserving state authority over "generally applicable" laws, data center zoning, and government procurement [4][9]. The practical boundaries of "generally applicable" remain unclear and are likely to be contested in court.

$92 Million and 3,500 Lobbyists

The framework did not materialize in a policy vacuum. The AI industry mounted an extraordinary lobbying campaign in the months preceding its release.

Lobbying firms reported $92 million in earnings from AI-related influence work in the first three quarters of 2025, followed by a record $37.2 million in the fourth quarter, a 38 percent jump from the same period a year earlier [6][18]. More than 3,500 federal lobbyists, one in four of those working in Washington, reported lobbying on AI issues at least once in 2025, a 170 percent increase over three years [19].

[Chart: Top AI Lobbying Organizations by Number of Lobbyists (2025). Source: Public Citizen; data as of March 21, 2026.]

The U.S. Chamber of Commerce deployed the most AI lobbyists of any organization, at 91, followed by Microsoft with 63, Meta with 55, Intuit with 51, and Amazon with 48 [19]. Meta spent a record $19.7 million on federal lobbying in the first nine months of 2025 and launched a Super PAC called the American Technology Excellence Project to support tech-friendly candidates in state elections [6][18].

The industry's core argument: a fragmented state regulatory landscape is making the U.S. uncompetitive against China. "A light-touch regulatory environment is essential for innovation," said NetChoice director Patrick Hedger in response to the framework [4]. The Center for Data Innovation praised the administration for avoiding "alarmism" on job losses and copyright [4].

But consumer groups challenged this framing. Brad Carson of Americans for Responsible Innovation warned the framework would "offer another chance for tech companies to launch harmful products with no accountability" [4]. Public Citizen, which tracked the lobbying data, recommended that Congress "reject blanket preemption and deregulatory sandboxes" in favor of "enforceable, sector-specific guardrails" [19].

The Competitiveness Question

The administration's central justification for preemption — that state regulations are crippling American AI competitiveness — faces scrutiny from experts who say the evidence is thin.

The U.S. continues to lead China in AI chip production, model quality, and private-sector investment, according to analyses from the Council on Foreign Relations and the Foreign Policy Research Institute [20][21]. White House AI czar David Sacks has said Chinese AI models are "three to six months behind" the U.S. [22]. China has taken a different strategic approach, treating AI more as public infrastructure to maximize adoption rather than optimizing for commercial margins [21].

No major AI company has publicly documented lost investments or cancelled innovations specifically because of state-level regulation. The industry's complaints have focused on compliance costs and legal uncertainty from navigating different state requirements — real operational burdens, but distinct from the claim that state laws are causing the U.S. to fall behind internationally [20].

The European Union, which enacted the AI Act — the world's most comprehensive AI regulatory framework — continues to attract AI investment despite imposing requirements that go well beyond any U.S. state law [20]. This complicates the argument that regulation per se drives capital away from a market.

Who Loses Protections

The framework arrives as AI-driven automation accelerates across the economy. Nearly 55,000 U.S. job cuts were directly attributed to AI in 2025, according to outplacement firm Challenger, Gray & Christmas [23]. The World Economic Forum projects that 92 million jobs globally will be displaced by AI by 2030, partially offset by 170 million new roles created [23].

[Chart: U.S. Total Nonfarm Employment, January 2024 to February 2026. Source: Bureau of Labor Statistics; data as of March 21, 2026.]

A Brookings Institution analysis of Bureau of Labor Statistics data identified approximately 5 to 6 million U.S. workers, 3.9 percent of the workforce, who sit at the intersection of high AI exposure and low capacity to adapt, concentrated in customer service, data entry, and administrative roles [24][25]. An estimated 80 percent of customer service positions, roughly 2.24 million of 2.8 million U.S. jobs, are projected to be automated [23].

State laws like Colorado's AI Act were designed to address these risks by requiring companies to audit AI systems used in consequential decisions about employment, credit, and housing. The federal framework's bias audit requirements cover similar ground but leave enforcement to an FTC that has not received additional funding or staffing for the task [9]. The Blackburn bill mandates audits for high-risk systems but does not specify audit frequency, methodology standards, or consequences for failing an audit beyond the general "duty of care" obligation [7].

Workers in states with existing protections — particularly in Colorado, Illinois, and California — could see their current legal remedies displaced by federal standards that are less specific and, in some cases, less enforceable. Illinois' BIPA, for example, allows individuals to sue directly for biometric privacy violations; the federal framework's enforcement model relies primarily on the FTC and state attorneys general rather than private lawsuits [9][17].

The Legal Fight Ahead

The constitutional battle lines are already drawn. On November 25, 2025 — before the framework was even released — a bipartisan coalition of 36 state attorneys general, led by Connecticut AG William Tong, sent a letter to congressional leaders urging them to reject any ban on state AI laws [3]. The coalition included attorneys general from states as politically diverse as Idaho, Mississippi, California, and New York [3].

The attorneys general argued that states need to maintain their authority to address AI-related harms including fraud, deepfakes, algorithmic rent-setting, and harmful interactions with minors. "AI can be used to perpetrate scams, distort reality, and engage in inappropriate or harmful interactions with users," the coalition wrote [3].

The administration has its own legal weapon: the December 2025 executive order directed the Attorney General to establish an "AI Litigation Task Force" specifically to challenge state AI laws on grounds that they unconstitutionally regulate interstate commerce or are preempted by federal regulation [2].

But legal scholars have noted fundamental limits to this approach. The Supreme Court has consistently held that preemption of state law requires action by Congress under its Article I powers; the executive branch cannot accomplish it on its own [2][26]. An executive order is neither a statute nor a regulation and cannot, by itself, override state legislation. Moreover, Congress has twice rejected preemption provisions in AI-related legislation, which courts could interpret as evidence that Congress does not intend to displace state authority [26].

If the Blackburn bill or similar legislation passes, states would likely challenge it on Tenth Amendment grounds, arguing that consumer protection has historically fallen within state police powers. The success rate of such challenges varies by context, but the Supreme Court has shown increasing skepticism toward broad federal preemption claims in recent terms [26].

What Happens Next

The framework requires congressional action on all seven of its pillars to take effect. The Blackburn bill remains a discussion draft rather than formally introduced legislation, and any final bill must navigate committee markups and floor votes in both chambers [8]. Speaker Mike Johnson and key committee chairs have pledged support, but Blackburn herself acknowledged the need for bipartisan backing, a high bar given Democratic opposition to weakening state protections [4].

In the meantime, existing state laws remain in effect. Colorado's AI Act is set for full implementation by June 30, 2026 [14]. California's suite of AI laws took effect January 1, 2026 [15]. Illinois continues to enforce BIPA. State attorneys general retain their enforcement authority unless and until Congress acts [3].

The transition period creates its own uncertainty. Companies operating nationally must decide whether to comply with state laws that may soon be preempted, or to begin operating under the federal framework's looser standards in anticipation of preemption — a legal gamble that could expose them to state enforcement actions [26].

What is clear is that the outcome of this fight will determine the regulatory architecture for AI in the United States for years to come. The question is no longer whether AI will be regulated, but at what level of government, with what enforcement tools, and whose interests the rules will primarily serve.

Sources (26)

[1] "White House AI framework calls for preemption of state laws," rollcall.com. The White House on Friday proposed its framework for a national artificial intelligence policy, pushing for broad preemption of state AI laws.

[2] "AI Executive Order Targets State Laws and Seeks Uniform Federal Standards," lw.com. The EO directs the Attorney General to form an AI Litigation Task Force to challenge state AI laws inconsistent with U.S. global AI dominance.

[3] "Bipartisan Coalition of 36 State Attorneys General Opposes Federal Ban on State AI Laws," naag.org. A bipartisan coalition of 36 state attorneys general sent a letter to Congress opposing the proposed federal ban on state-level AI regulation.

[4] "AI draft bill would revamp online landscape," rollcall.com. The White House framework addresses seven categories including child safety, copyright, deepfakes, regulatory sandboxes, and state preemption.

[5] "Trump's AI framework targets state laws, shifts child safety burden to parents," techcrunch.com. The framework lays out lighter-touch rules for tech companies and shifts child safety responsibilities toward parents.

[6] "As Big Tech Gears Up for the 2026 Midterms, Its Lobbying Operations Continue Unabated," issueone.org. Seven of the largest tech companies spent a combined $50 million on federal lobbying in the first nine months of 2025.

[7] "The TRUMP AMERICA AI Act: Federal Preemption Meets Comprehensive Regulation," joneswalker.com. The 291-page TRUMP AMERICA AI Act establishes a comprehensive federal framework with duty of care, bias audits, and broad preemption provisions.

[8] "Blackburn rolls out updated AI plan in bid to lead Trump's agenda," axios.com. Senator Blackburn introduced her discussion draft of the TRUMP AMERICA AI Act on March 18, 2026.

[9] "TRUMP AMERICA AI Act Bill Sets Direction for Future US AI Regulation," dataprivacy.foxrothschild.com. The bill shifts enforcement to FTC and DOJ, mandates bias audits for high-risk AI, and creates new private rights of action for data use.

[10] "TRUMP AMERICA AI Act Repeals Section 230, Expands Liability," modernity.news. The bill sunsets Section 230 and creates multiple new avenues for liability, including defective design and failure-to-warn claims.

[11] "Transparency Coalition releases 2025 State AI Legislation Report," transparencycoalition.ai. 73 new AI laws were enacted across 27 states in 2025, with California leading at 13 laws followed by Texas with 8.

[12] "Artificial Intelligence 2025 Legislation," ncsl.org. In 2025, over 1,200 AI-related bills were introduced across all 50 states, with 145 enacted into law.

[13] "SB24-205 Consumer Protections for Artificial Intelligence," leg.colorado.gov. Colorado's AI Act requires deployers of high-risk AI systems to protect consumers from algorithmic discrimination.

[14] "Colorado AI Act Implementation Delayed," bakerbotts.com. Colorado delayed full implementation of its landmark AI Act to June 30, 2026.

[15] "California 2025 legislative wrap-up: More privacy and first-of-its-kind AI laws adopted," iapp.org. California enacted 13 AI-related laws in 2025, including frontier AI transparency and training data disclosure requirements.

[16] "Governor Newsom signs SB 53, advancing California's AI industry," gov.ca.gov. SB 53 requires frontier AI developers with revenues exceeding $500 million to publish frameworks for managing catastrophic risks.

[17] "Biometric Information Privacy Act," en.wikipedia.org. Illinois' BIPA is the most protective biometric privacy law in the US, with a private right of action and penalties up to $5,000 per violation.

[18] "AI Lobbying Soars in Washington, Among Big Firms and Upstarts," news.bgov.com. Lobbying firms reported a record $37.2 million in AI-related earnings in Q4 2025, a 38% jump from the prior year.

[19] "One in Four Federal Lobbyists Now Work on AI," citizen.org. Over 3,500 lobbyists, 25% of all federal lobbyists, reported lobbying on AI in 2025, a 170% increase over three years.

[20] "The US AI Acceleration Plan vs China's Diffusion Model," fpri.org. While the US focuses on commercial AI supremacy, China treats AI as public infrastructure to maximize adoption and social utility.

[21] "How 2026 Could Decide the Future of Artificial Intelligence," cfr.org. The U.S. leads China in AI chip production and model quality, though China is a formidable competitor in workforce scale and energy capacity.

[22] "Trump says the U.S. is winning the AI race with China. Here's what experts say," poynter.org. White House AI czar David Sacks said Chinese AI models are three to six months behind the US.

[23] "77 AI Job Replacement Statistics 2026," demandsage.com. Nearly 55,000 job cuts were attributed to AI in 2025; the WEF projects 92 million jobs displaced globally by 2030.

[24] "Measuring US workers' capacity to adapt to AI-driven job displacement," brookings.edu. Approximately 3.9% of U.S. workers, 5 to 6 million people, face high AI exposure with low adaptive capacity.

[25] "Incorporating AI impacts in BLS employment projections," bls.gov. BLS has begun incorporating AI displacement effects into official employment projections for affected occupational categories.

[26] "AI Regulation at a Crossroads: The Trump Administration's Preemption Push," workforcebulletin.com. The executive order is neither a statute nor a regulation and cannot on its own preempt state laws; Congress has twice rejected AI preemption provisions.