Grammarly's AI "Expert Review" Debacle: How a Writing Tool Sparked a Class Action Over Identity Theft

On March 11, 2026, investigative journalist Julia Angwin — founder of The Markup — filed a class action lawsuit in the Southern District of New York against Superhuman Platform Inc., the parent company of Grammarly [1]. The suit targets Grammarly's now-defunct "Expert Review" feature, which for months had been generating AI writing feedback attributed to real journalists, authors, and academics — none of whom had ever given their consent. The case has rapidly become a flashpoint in the escalating legal battle over AI companies' unauthorized use of human identity, raising fundamental questions about who owns your name, your voice, and your professional reputation in the age of generative AI.

The Feature That Started It All

In August 2025, Grammarly — by then in the process of rebranding under the Superhuman umbrella following its acquisitions of email client Superhuman and workspace platform Coda — quietly launched a feature called "Expert Review" [2]. Available to Pro subscribers paying $12 per month, the tool promised to "take your writing to the next level — with insights from leading professionals, authors, and subject-matter experts" [3].

The premise was straightforward: users could select from a dropdown menu of notable figures, and the AI would analyze their writing through the stylistic lens of that chosen expert. Want feedback in the voice of Stephen King? Neil deGrasse Tyson? Kara Swisher? Grammarly would generate it. The feature even included deceased figures like Carl Sagan and, controversially, British historian David Abulafia, who died in January 2026 — just months after his name was added to the system [4].

But there was a critical problem hiding behind a fine-print disclaimer: "References to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities" [5]. In other words, the "insights" were not actually from leading professionals or any human person at all. They were AI-generated text, loosely inspired by publicly available information, with a famous name attached.

The Exposé That Broke It Open

The controversy erupted on March 4, 2026, when WIRED exposed the full scope of the persona list embedded in Expert Review [6]. The revelation showed that the feature wasn't just borrowing the cachet of literary titans and celebrity scientists. It had also conscripted an extraordinary number of working journalists and editors from major publications.

On March 7, TechCrunch's Anthony Ha published a deep dive confirming the breadth of the list [7]. Then The Verge made a discovery that turned outrage into a media firestorm: their own editorial staff had been included. Editor-in-chief Nilay Patel, senior editors David Pierce, Sean Hollister, and Tom Warren — all were generating AI "feedback" under their names within the Grammarly product [8]. None had any knowledge the feature existed.

The list extended far beyond The Verge. It included former Verge editors Casey Newton and Joanna Stern, WIRED's Lauren Goode, Bloomberg's Mark Gurman and Jason Schreier, The New York Times' Kashmir Hill, The Atlantic's Kaitlyn Tiffany, PC Gamer's Wes Fenlon, and Gizmodo's Raymond Wong, among many others [8].

Author and editor Benjamin Dreyer tested the feature with deliberately meaningless Lorem Ipsum placeholder text and still received writing tips attributed to Stephen King — exposing the system as generating generic AI advice rather than anything meaningfully connected to the named expert's actual craft [9].

A Week of Revolt

What followed was a rapid, public unraveling.

Casey Newton, the influential tech journalist behind Platformer, wrote a blistering account titled "Grammarly turned me into an AI editor against my will and I hate it," describing the surreal experience of discovering that an AI version of himself was dispensing writing advice he never wrote. "I just assumed that someone would tell me when it happened," Newton observed dryly [10].

Kara Swisher's response was characteristically blunt: "You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck" [9].

The backlash wasn't just about individual indignation. It crystallized a deeper anxiety among writers and knowledge workers: if an AI company can commercially exploit your professional identity — attaching your name and reputation to machine-generated content — without asking, what guardrails actually exist?

When The Verge contacted Grammarly during this period, the company initially offered an "opt-out" mechanism — experts who objected could request removal from the dropdown [1]. Critics immediately noted the absurdity of requiring people to discover, and then actively object to, unauthorized use of their own identities — a step the deceased figures on the list could never take.

The Lawsuit

Julia Angwin's class action, filed on March 11, rests on state laws protecting individuals' right of publicity — the well-established legal principle that individuals own their identity and can control how it is used commercially [1]. The complaint alleges that Superhuman violated privacy and publicity rights by using the identities of Angwin and dozens of other journalists, writers, and academics to power AI-generated suggestions for profit without consent or compensation.

The lawsuit does not specify individual damages but estimates that aggregate claims from the plaintiff class exceed $5 million [11]. Angwin's attorneys have characterized it as "a pretty straightforward case" of unauthorized commercial appropriation of professional identities [1].

The legal theory is bolstered by an increasingly robust body of law on right of publicity in the AI context. California's AB 2602, which took effect on January 1, 2025, prohibits contracts that permit the creation of digital replicas of a person's voice or likeness without specific consent [12]. At the federal level, the NO FAKES Act, introduced by Senator Chris Coons, would establish a nationwide "digital replication right" and impose liability on anyone who creates or distributes digital replicas without consent [13]. While neither law was drafted with a feature precisely like Expert Review in mind, the underlying principle — that you cannot commercially exploit someone's identity without permission — maps directly onto Angwin's claims.

The case also arrives against the backdrop of an accelerating wave of AI identity litigation. Voice actors have sued text-to-speech companies. The estate of George Carlin brought suit over an AI-generated comedy special mimicking his voice [14]. Golf legend Jack Nicklaus won a 2025 ruling in New York affirming his control over his name, image, and likeness rights [15]. Each case chips away at any notion that AI companies can freely appropriate human identity for commercial gain.

[Chart: Global media coverage of "Grammarly," 30-day trend. Source: GDELT Project, data as of March 12, 2026.]

The Retreat

On the same day the lawsuit landed, Superhuman CEO Shishir Mehrotra posted a public apology on LinkedIn and announced the feature would be disabled [16].

"Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices," Mehrotra wrote. "This kind of scrutiny improves our products, and we take it seriously." He added: "We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward" [16].

Mehrotra framed the original intent as wanting to "help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans" [5]. But the gap between that aspiration and the reality — where famous names were slapped onto generic AI output without any involvement from the actual humans — proved impossible to defend.

The company stated it would "reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented — or not represented at all" [16].

A Company in Transition

The Expert Review debacle arrives at a particularly sensitive moment for the company. Grammarly, which boasts over 40 million daily users and annual revenue exceeding $700 million, has been undergoing a dramatic transformation [17]. In July 2025, Grammarly acquired the email client Superhuman and workspace platform Coda. By October 2025, the company had rebranded entirely under the Superhuman name, appointing former Coda CEO Shishir Mehrotra as its leader [18].

The rebrand represented an ambitious pivot from a trusted writing tool to a full-spectrum AI productivity platform. Superhuman Go, the company's new AI assistant, was designed to work across more than 100 applications, with agents that could brainstorm, send emails, and schedule meetings [18]. Expert Review was meant to showcase the company's vision of AI-powered expertise at scale.

Instead, it showcased the risks of moving fast and treating human identity as just another input for an algorithm.

The Broader Reckoning

The Grammarly case illustrates a pattern that has become disturbingly familiar in the AI industry: build first, ask permission never. The Federal Trade Commission has already brought at least a dozen "AI-washing" cases targeting companies that overstate what their AI does or mislead consumers [19]. The Bartz v. Anthropic copyright case resulted in a $1.5 billion settlement over unauthorized use of copyrighted material for AI training [19]. Class action litigation targeting AI companies grew substantially throughout 2025, spanning copyright, employment, securities fraud, and privacy claims [20].

What makes the Grammarly case distinctive is the directness of the offense. This wasn't a dispute over whether training data scraped from the internet constitutes fair use — the dominant legal question in most AI copyright litigation. This was a company taking specific, identifiable people's names and professional reputations and attaching them to a commercial product as a selling point, all while a small-print disclaimer tried to disavow the very association the feature was designed to create.

The contradiction at the heart of Expert Review — prominent name attribution paired with a disclaimer denying any actual affiliation — may prove to be its most legally damaging feature. A product that relies on the commercial value of someone's identity while simultaneously disclaiming any connection to that person invites precisely the kind of legal scrutiny Angwin's suit delivers.

What Comes Next

The class action is in its earliest stages, filed in one of the most plaintiff-friendly jurisdictions in the country. Superhuman has disabled the feature but has not settled, and the legal process will likely stretch over months or years. The case could establish significant precedent for how right-of-publicity law applies to AI-generated content that invokes real people's identities — a question that dozens of pending cases are circling but that no court has yet definitively answered.

For the broader AI industry, the lesson is increasingly hard to ignore: the era of treating human identity, expertise, and creative output as freely available raw material for AI products is drawing to a close. Between strengthening state laws, pending federal legislation, and an increasingly aggressive plaintiffs' bar, the legal infrastructure to enforce that boundary is being built in real time.

The irony, of course, is that a company built on helping people write better failed to read the room.

Sources (20)

[1] Grammarly ditches 'Expert Review' after expert rebellion and class action suit (avclub.com)
    Julia Angwin filed a class-action lawsuit against Superhuman alleging the company violated privacy and publicity rights by using her identity for AI features without consent.

[2] Grammarly rebrands to 'Superhuman,' launches a new AI assistant (techcrunch.com)
    Grammarly changed its company name to Superhuman, uniting Grammarly, Coda, and Superhuman Mail under one brand as an AI-native productivity platform.

[3] Expert Review | Writing Feedback by Subject-Matter Experts (grammarly.com)
    Expert Review analyzes your work and draws from relevant subject-matter expertise to provide targeted feedback from leading professionals, authors, and experts.

[4] Grammarly's 'expert review' feature sparks outrage (cybernews.com)
    The feature included deceased scholars such as British historian David Abulafia, who died in January 2026, months after being added to the system.

[5] Grammarly has disabled its tool offering generative-AI feedback credited to real writers (engadget.com)
    Grammarly included a disclaimer stating expert references 'do not indicate any affiliation with Grammarly or endorsement by those individuals.'

[6] A lot of journalism folks are offering editing advice as Grammarly's AI 'experts' (niemanlab.org)
    WIRED exposed the persona list on March 4, 2026, revealing the full scope of journalists and writers included in the Expert Review feature.

[7] Grammarly's 'expert review' is just missing the actual experts (techcrunch.com)
    Anthony Ha's deep dive confirmed the breadth of the expert persona list used in Grammarly's controversial feature.

[8] Grammarly's AI writing tips claim inspiration from experts who never agreed to participate (the-decoder.com)
    The Verge found Grammarly generating comments tied to real newsroom staffers including Nilay Patel, Sean Hollister, Tom Warren, and David Pierce.

[9] Grammarly Is Pulling Down Its Explosively Controversial Feature That Impersonates Writers Without Their Permission (futurism.com)
    Benjamin Dreyer tested the feature with Lorem Ipsum and still received tips attributed to Stephen King. Kara Swisher called the company 'rapacious information and identity thieves.'

[10] Grammarly turned me into an AI editor against my will and I hate it (platformer.news)
    Casey Newton described the surreal experience of discovering an AI version of himself was dispensing writing advice without his knowledge.

[11] Grammarly Hit with Class Action Lawsuit Over AI 'Expert Review' Feature (filmogaz.com)
    The lawsuit does not specify individual damages but estimates aggregate claims from the plaintiff class exceed $5 million.

[12] California Enacts a Suite of New AI and Digital Replica Laws (manatt.com)
    California AB 2602, effective January 1, 2025, prohibits contracts that permit creation of digital replicas of a person without specific consent.

[13] California Passes Digital Replica Legislation as Congress Considers Federal Approach (insideglobaltech.com)
    The federal NO FAKES Act would establish a nationwide 'digital replication right' and impose liability for creating digital replicas without consent.

[14] AI Copyright Lawsuit Developments in 2025: A Year in Review (copyrightalliance.org)
    The George Carlin estate filed suit over an AI-generated comedy special; voice actors sued text-to-speech companies over unauthorized use of their voices.

[15] The Year in AI Law: 2025's Biggest Legal Cases and What They Mean for 2026 (internetlawyer-blog.com)
    Jack Nicklaus won a 2025 ruling in New York affirming his control over his name, image, and likeness rights.

[16] Grammarly Disables AI 'Expert Review' After Backlash From Authors and Journalists (decrypt.co)
    CEO Shishir Mehrotra wrote: 'We fell short on this' and stated the company would 'rethink our approach going forward.'

[17] Grammarly acquires email startup Superhuman to expand beyond grammar and dominate AI work tools (techstartups.com)
    Grammarly boasts over 40 million daily users and annual revenue exceeding $700 million; it acquired Superhuman to build an AI productivity platform.

[18] Grammarly transforms into AI-enabled productivity suite with Superhuman rebrand (siliconangle.com)
    The new Superhuman suite includes Grammarly writing tools, Coda workspace, Mail inbox, and Superhuman Go AI assistant working across 100+ applications.

[19] Up Next in Privacy Litigation: Class Actions Begin to Target Consumer-Facing Companies Using Generative AI Tools (hklaw.com)
    The FTC has brought at least a dozen 'AI-washing' cases. The Bartz v. Anthropic case resulted in a $1.5 billion settlement.

[20] AI Litigation Trends 2025: How to Protect Your Business from Emerging Legal Risks (jimersonfirm.com)
    In 2025, class action lawsuits targeting AI grew substantially, spanning copyright, employment, securities fraud, and privacy claims.