Congress Wants to Kill Section 230. The Fallout Would Reach Far Beyond Big Tech.

Since 1996, Section 230 of the Communications Decency Act has functioned as the legal backbone of the internet economy. Twenty-six words — "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" — have shielded every website that hosts user content from lawsuits over what its users post [1]. Now, a bipartisan coalition of senators wants to sunset that protection entirely, setting a two-year countdown to repeal unless Congress enacts replacement legislation.

The Sunset Section 230 Act (S. 3546), introduced by Senators Dick Durbin (D-IL) and Lindsey Graham (R-SC), would eliminate Section 230 immunity on January 1, 2027 [2]. The bill has gathered an unusually broad roster of co-sponsors: Chuck Grassley (R-IA), Sheldon Whitehouse (D-RI), Josh Hawley (R-MO), Amy Klobuchar (D-MN), Marsha Blackburn (R-TN), Richard Blumenthal (D-CT), Ashley Moody (R-FL), and Peter Welch (D-VT) [2]. The sponsors frame the two-year sunset not as a final answer but as a forcing mechanism — a way to compel Big Tech to negotiate real accountability reforms under the threat of losing legal protection altogether [3].

A Graveyard of Prior Attempts

Congress has been here before. Lawmakers have introduced roughly 80 bills targeting Section 230 since 2020 — 23 in 2020 alone, with additional waves in each subsequent Congress [4][5]. Ten more proposals have appeared in the first months of the current 119th Congress [4].

Section 230 Reform Bills Introduced in Congress
Source: Lawfare / Slate Section 230 Reform Tracker
Data as of May 1, 2026

The passage rate tells its own story. Of all those proposals, only one has meaningfully altered Section 230: the 2018 SESTA/FOSTA legislation, which carved out an exception for sex trafficking-related content [5]. No other Section 230 reform bill has been enacted into law. The EARN IT Act, first introduced in 2020, is now on its fourth version without passage [4]. The SAFE TECH Act has been introduced and reintroduced across multiple sessions [4].

This track record suggests that despite bipartisan rhetorical agreement that "something must be done," the coalition fractures when specifics emerge. Conservatives who want platforms to stop removing right-leaning content and progressives who want platforms to remove more harmful content have fundamentally incompatible visions for reform — a dynamic that has stalled every previous effort [5].

What the Bill Actually Does — and Doesn't Do

The Sunset Section 230 Act takes a blunter approach than its predecessors. Rather than targeting specific categories of content — child exploitation material (EARN IT), paid advertising fraud (SAFE TECH), or algorithmic amplification — it proposes to eliminate the entire liability shield [2][3].

This distinguishes it sharply from more surgical proposals. The EARN IT Act would have removed Section 230 protections specifically for child sexual abuse material cases [4]. The SAFE TECH Act would have created exceptions for paid content and targeted harassment [4]. A Harvard Ash Center proposal has argued for a middle path: maintaining Section 230 protection for direct human speech while stripping it for algorithmic amplification of content — the theory being that a platform's recommendation algorithm is "a product, not a public service" and should be regulable as such [6].

Under the Sunset Section 230 Act, if no replacement legislation is enacted within two years, companies would face lawsuits both for hosting third-party content and for amplifying it. The bill does not distinguish between the two categories of liability [2].

The Courtroom Catalyst

Two jury verdicts in March 2026 have added urgency to the legislative push. On March 24, a New Mexico state court jury found Meta liable under the state's Unfair Practices Act for failing to protect children from sexual predators, awarding approximately $375 million in civil penalties [7]. The next day, a Los Angeles County jury found Meta and Google negligent in app design, awarding $6 million in damages for addiction-related harm to minors [8].

The California verdict was particularly significant because jurors found platforms liable for their own design choices — features like infinite scroll, autoplay, and algorithmic recommendation — rather than for third-party user content [8]. This distinction matters because it sidestepped Section 230 entirely. Plaintiffs' lawyers argued that Instagram and YouTube functioned as a "digital casino," and the jury agreed [8].

Meta has asked the California judge to overturn the verdict [9]. The company has also signaled it will invoke Section 230 to challenge both rulings — a legal strategy that has directly prompted the Senate Judiciary Committee to schedule a hearing on May 13, 2026 [7].

These verdicts may influence roughly 2,000 other pending lawsuits against social media companies [8].

Who Gets Caught in the Crossfire

Sponsors frame the legislation as targeting trillion-dollar companies, but the text applies to every "interactive computer service" — the same statutory language that covers Facebook, but also Wikipedia, Reddit, Yelp, GitHub, Craigslist, and a forum for antique Buick collectors with 38,000 users [10][11].

The Electronic Frontier Foundation has argued that the bill's biggest beneficiaries would not be harmed users but large tech companies themselves. "The biggest beneficiaries [of Section 230] are small platforms and users," the EFF wrote in March 2025, noting that major tech companies have even endorsed bills that would gut the law — precisely because they can absorb the compliance and litigation costs that would destroy smaller competitors [11].

The startup advocacy group Engine has estimated that without Section 230, new platforms would face "inundation with costly litigation," with even frivolous lawsuits costing thousands of dollars to defend — expenses that could be fatal for small operations [11]. The Cato Institute has argued that repeal "would only serve to further consolidate power in the hands of existing leaders in the field," allowing incumbents to erect barriers that prevent challengers from emerging [10].

Wikipedia presents a specific case study. The Wikimedia Foundation has filed amicus briefs in Section 230 cases arguing that the encyclopedia, which relies on volunteer editors to create and moderate content, could not exist in its current form without immunity protections [12].

No bill sponsor has publicly released an analysis of how many small platforms, nonprofits, or open-source projects would face new legal exposure under a full sunset [10][11].

The Lobbying War

The tech industry has mobilized significant resources to fight Section 230 reform. In the first quarter of 2026 alone, Meta spent $7.26 million on lobbying, Amazon spent $4.44 million, Google spent $2.94 million, and TikTok spent $1.13 million [7]. Airbnb's Q1 2026 filing explicitly listed the Sunset Section 230 Act (S. 3546) as a lobbied bill [7].

Tech Industry Lobbying Spending (Q1 2026)
Source: Legis1 / OpenSecrets
Data as of Apr 1, 2026

On the reform side, the Center for Countering Digital Hate has lobbied in favor of the sunset provision [7]. Internet Works, a coalition representing smaller platforms, spent $50,000 in Q1 2026 lobbying against changes to Section 230 [7].

The bill's co-sponsors span both parties and various donor relationships with the tech industry. Senators Klobuchar and Blumenthal have been among the most active tech critics in the Senate, while Graham and Grassley lead from the Judiciary Committee's Republican side [2][3].

The Empirical Gap

One of the least examined aspects of the debate is how thin the empirical evidence actually is. Research shows that approximately one in five U.S. adults has experienced online harassment [13]. Online hate and extremism have grown through viral sharing and algorithmic targeting over the past 25 years [13]. But the causal question — whether Section 230 immunity itself produced these harms, versus algorithmic design choices, advertising incentive structures, or other factors — remains largely unanswered.

The Information Technology and Innovation Foundation conducted a review of common critiques and found that while documented harms exist, causal attribution to Section 230 specifically "remains unproven" [14]. The National Academies of Sciences convened a workshop examining whether legal revisions to Section 230 could limit misinformation and abuse, but the proceedings highlighted the complexity of the relationship rather than establishing clear causal links [13].

Brookings Institution researchers have noted that recommendation engine design is "problematic, especially for high-prevalence harms like misinformation," but cautioned that "the wider research picture is much cloudier than many advocates are keen to admit" [15]. The question of whether the 2016 U.S. presidential election was meaningfully influenced by disinformation — one of the driving narratives behind Section 230 reform — "remains a matter of scholarly debate" [13].

This evidence gap creates a policy problem: lawmakers are proposing to eliminate a legal framework without clear evidence that the framework itself — as opposed to the business models built on top of it — is the primary cause of the harms they seek to address.

How Other Democracies Handle It

Three major democracies have adopted alternative approaches to platform liability, and their experiences offer partial data points.

The European Union's Digital Services Act (DSA) maintains a conditional immunity framework rather than eliminating protections outright. Platforms are not required to make independent judgments about content illegality; instead, they must respond to "notices" submitted by users or authorities with "reasoned justifications" for removal [16]. The DSA creates tiered obligations based on company size, with the largest platforms ("Very Large Online Platforms") facing the strictest transparency and risk-assessment requirements [16]. Smaller companies face lower compliance thresholds — a distinction absent from the Sunset Section 230 Act [16].

The United Kingdom's Online Safety Act takes a more aggressive approach, requiring platforms to actively monitor for illegal content using algorithms or product features when technically feasible [16]. The UK law does not exempt smaller companies from its compliance framework based on size [16]. Civil liberties organizations have raised concerns about the Act's effective general monitoring requirement [16].

Australia's Online Safety Act empowers the eSafety Commissioner to issue removal notices requiring takedowns within 24 hours, with penalties of up to A$825,000 for non-compliant corporations [17]. The Australian approach is largely notice-and-action based rather than requiring proactive monitoring, and it applies to any online service with Australian users regardless of where the company is located [17].

Each model has produced tradeoffs. The EU's size-tiered approach has been praised for protecting smaller platforms but criticized for moving slowly on enforcement. The UK's proactive monitoring mandate has drawn objections from free-speech advocates. Australia's eSafety Commissioner model has created fast response times but raised questions about governmental content removal authority [16][17].

The Free Speech Problem No One Wants to Talk About

The most underweighted argument in the current debate may be the First Amendment countercase. Legal scholars have identified a core paradox: stripping immunity would not necessarily produce better moderation. It could produce worse outcomes for lawful speech.

Without Section 230 protections, platforms facing potential lawsuits for every piece of user content would have strong incentives to over-moderate — removing legal but controversial speech preemptively to reduce litigation risk [10][18]. Research has shown that this dynamic disproportionately affects content from communities of color, women, LGBTQ+ communities, and religious minorities, whose speech is typically the first to be flagged, removed, or demonetized under aggressive moderation policies [18].

The EFF has argued that eliminating Section 230 would create a perverse incentive structure: platforms would face liability for content they moderate (under traditional publishing liability doctrines) but not for content they leave untouched, producing "the exact opposite of their goal of protecting children and adults from harmful content online" [11].

The Harvard Law Review has analyzed Section 230 as effectively functioning as a First Amendment rule, noting that the statute addresses the same concerns about intermediary liability that the First Amendment would otherwise need to resolve [18]. Even if Section 230 were revised, "serious constitutional problems would remain with respect to holding social media platforms liable, either civilly or criminally, for third-party user content" [18].

The Cato Institute's Matthew Feeney has written that repealing Section 230 "would limit Americans' speech" by forcing platforms to adopt "stringent content moderation policies, resulting in a homogenized online space where controversial or dissenting voices are silenced" [10].

The Litigation Roadmap

If the Sunset Section 230 Act passes and no replacement is enacted by January 1, 2027, the litigation landscape would shift rapidly. The roughly 2,000 pending social media lawsuits would be the first to test the new environment [8]. Plaintiffs' attorneys in the Meta and Google cases have already demonstrated viable legal theories for product-liability and negligence claims [8].

The federal circuits most likely to see early cases include the Ninth Circuit (home to most major tech companies) and the Third Circuit (which has heard significant Section 230 cases historically). Based on precedent from other major statutory changes, district court rulings would likely emerge within 12 to 18 months, appellate decisions within two to three years, and a Supreme Court challenge resolving the constitutional questions within four to six years [19].

The Supreme Court has already shown reluctance to wade into Section 230 territory. In Gonzalez v. Google (2023), the justices were asked whether Section 230 protects platforms when algorithms recommend third-party content. They declined to answer, issuing a narrow ruling that left the fundamental questions unresolved [19]. A full repeal would force the constitutional question the Court has so far avoided: whether the First Amendment independently requires some form of intermediary liability protection, with or without Section 230.

What Comes Next

The Senate Judiciary Committee hearing on May 13, 2026, will be the first formal congressional response to the March jury verdicts [7]. Whether the Sunset Section 230 Act advances beyond that hearing depends on whether the bipartisan coalition can hold together when forced to specify what a replacement framework would actually look like — the same question that has killed every previous reform effort.

The bill's sunset mechanism is both its greatest political strength and its greatest policy risk. As a forcing function, it creates urgency. As a policy outcome, a full repeal with no replacement would expose every website hosting user content to the kind of liability that existed before 1996 — an environment that, as legal scholars have noted, the modern internet was specifically designed to escape [1][18].

Sources (19)

[1] "Section 230: An Overview," congress.gov. Congressional Research Service overview of Section 230 of the Communications Decency Act, its history, and the 26 words that provide immunity to interactive computer services.

[2] "Durbin, Graham Introduce Bill To Sunset Section 230 Immunity For Tech Companies," durbin.senate.gov. Press release announcing the Sunset Section 230 Act (S. 3546), which would repeal Section 230 two years after enactment.

[3] "Graham Leads Bill to Sunset Section 230 Immunity, Protect Americans Online," lgraham.senate.gov. Senator Graham's announcement of the bipartisan bill to sunset Section 230 in two years, framing it as a forcing mechanism for reform.

[4] "What Has Congress Been Doing on Section 230?" lawfaremedia.org. Lawfare analysis tracking Section 230 reform proposals, noting ten proposals in the first months of the 119th Congress and categorizing reform approaches.

[5] "All the Ways Congress Wants to Change Section 230," slate.com. Slate and Future Tense legislative tracker documenting all Section 230 reform bills introduced since 2020.

[6] "Sunset and Renew: Section 230 Should Protect Human Speech, Not Algorithmic Virality," ash.harvard.edu. Harvard Ash Center proposal to maintain Section 230 for direct human speech while removing it for algorithmic amplification of content.

[7] "Meta Verdicts Push Congress to Rethink Section 230," legis1.com. Analysis of March 2026 jury verdicts against Meta and Google, lobbying spending data, and the Senate Judiciary Committee's scheduled response.

[8] "Jury finds Meta and Google negligent in social media harms trial," npr.org. NPR reporting on the Los Angeles jury verdict finding Meta liable for $4.2 million and Google for $1.8 million in the youth addiction case.

[9] "Meta asks California judge to overturn social media verdict," thedailyrecord.com. Reporting on Meta's legal motion to overturn the California jury verdict on social media addiction harms.

[10] "Senate Approach to Section 230 Would Eviscerate the Internet and Online Expression," cato.org. Cato Institute analysis arguing that sunsetting Section 230 would consolidate market power among large tech firms and harm small platforms and free expression.

[11] "230 Protects Users, Not Big Tech," eff.org. EFF argument that Section 230's biggest beneficiaries are small platforms and individual users, not large tech companies.

[12] "Don't leave developers behind in the Section 230 debate," techcrunch.com. Analysis of how Section 230 reform could affect open-source developers, startups, and platforms like GitHub and Wikipedia.

[13] "Section 230 Protections: Can Legal Revisions or Novel Technologies Limit Online Misinformation and Abuse?" nationalacademies.org. National Academies workshop proceedings examining the evidence base for Section 230 reform and the relationship between liability protections and online harms.

[14] "Fact-Checking the Critiques of Section 230: What Are the Real Problems?" itif.org. ITIF review finding that while online harms are documented, causal attribution to Section 230 specifically remains unproven.

[15] "Dual-use regulation: Managing hate and terrorism online before and after Section 230 reform," brookings.edu. Brookings analysis noting recommendation engines are problematic for misinformation but that the broader research picture is cloudier than advocates suggest.

[16] "The Global Content Regulation Landscape – Developments in the EU, UK, U.S., and Beyond," kslaw.com. King & Spalding comparison of the EU Digital Services Act, UK Online Safety Act, and U.S. approaches to platform liability regulation.

[17] "Online Safety Act 2021: What Startups And SMEs Need To Know," sprintlaw.com.au. Overview of Australia's Online Safety Act obligations for startups and SMEs, including notice-based removal requirements and penalty structures.

[18] "Section 230 as First Amendment Rule," harvardlawreview.org. Harvard Law Review analysis examining Section 230 as effectively functioning as a First Amendment rule addressing intermediary liability.

[19] "Supreme Court Takes Up Section 230 Challenge," scotusblog.com. SCOTUSblog coverage of the Supreme Court's decision to hear Gonzalez v. Google and its subsequent narrow ruling leaving Section 230 unchanged.