Banned on Paper, Booming in Practice: How 'Nudify' Apps Stayed on Google Play and the App Store

In January 2026, researchers at the Tech Transparency Project (TTP) ran a search inside the two mobile app stores that most of the world uses. They typed "nudify" and similar terms into Google Play and the Apple App Store and counted what came back. Fifty-five apps turned up on Google's storefront. Forty-seven turned up on Apple's. Thirty-eight were listed on both. Each app either advertised or delivered the ability to take a photograph of a clothed person — almost always a woman — and produce a synthetic image of her naked [1][2].

By AppMagic's tally, the apps TTP identified had been installed roughly 705 million times and had generated about $117 million in lifetime revenue [1][3]. A narrower April 2026 follow-up, which filtered the list down to apps that actually produced nude or near-nude images on demand, still counted 483 million downloads and more than $122 million in revenue [2][4]. Thirty-one of the apps carried age ratings that deemed them suitable for minors [2][4].

[Chart: Nudify Apps Identified by Store, January 2026. Source: Tech Transparency Project (Jan 2026 report); data as of Jan 27, 2026.]

The numbers do not describe a loophole. They describe a market — one operating inside stores whose written rules forbid it.

What the policies actually say

Google Play's Developer Program Policy prohibits "depictions of sexual nudity" and bans apps that "degrade or objectify people, such as apps that claim to undress people or see through clothing, even if labeled as prank or entertainment apps" [1][5]. Apple's App Review Guidelines direct developers to exclude "overtly sexual or pornographic material" [1][5]. Google's AI-Generated Content policy, updated in January 2025, adds a layer on top: any generative app must actively prevent its model from producing sexual, exploitative, or deceptive content, must ship with an in-app flagging tool, and is subject to "automated scans, human moderation, and user reporting" before and after listing [6].

On paper, these rules cover nudify apps unambiguously. In practice, TTP's January sample included apps that had been live on the stores long enough to cross seven-figure install thresholds. DreamFace, published by a Redwood City, California entity called New Port LLC, had cleared 10 million installs on Google Play, where it was rated "Ages 13+"; Apple rated it "9+" [1]. Collart had passed 7 million downloads and was rated for all ages on Google Play [1]. RemakeFace — a product of an Indonesian developer called Dirgasena — had more than 5.5 million Google Play downloads and carried no age gate there, while Apple rated it 17+ [1]. AI Dress Up – Try Clothes Design, from the Polish publisher Bizo Mobile, had also passed the 10 million mark and was rated suitable for children [1].

After TTP and CNBC showed Apple and Google the January findings, Apple removed 28 apps and Google removed 31 [1][3]. When TTP ran the same search three months later, it identified 18 functional nudify apps still on the App Store and 20 on Google Play, plus ads and autocomplete suggestions inside both stores that steered users toward them [2][4]. That second round of outreach produced only 15 removals from Apple and seven from Google [2][7].

[Chart: Apps Removed After TTP Contacted Stores. Source: Tech Transparency Project; data as of Apr 15, 2026.]

Who publishes these apps, and who pays for them

Nudify apps are not a cottage industry of anonymous developers. The most heavily advertised line of products is traceable to a small cluster of entities in Hong Kong and mainland China. Joy Timeline HK Limited, the publisher behind the CrushAI family of apps, is registered to a Chinese national, Zhang Xiao, who holds 90 of the company's 100 shares [8]. A sister entity, Soul Friendship HK Limited, is registered to Zhang Shiwei — the minority shareholder in Joy Timeline — with the ownership ratio reversed [8]. The two companies share a Wan Chai service-provider address and together operate more than 158 domains and at least 28 "nudify" apps beyond CrushAI's flagship, Crushmate [8]. A third linked entity, Wuhan Ruisen Zhuoxin Network Technology Co., filed the U.S. federal trademark for Crushmate in August 2024 and obtained it in April 2025 [8].

Other named publishers include MINDSPARK AI LIMITED of Dublin, developer of DeepSwap; 360 Company LLC of Istanbul; Swapify Inc., which lists a San Francisco address; Runtopia Technology Co., the Chengdu-based publisher of Pixnova; and Sichuan Shanghu Network Technology Co., developer of Adult AI Chat [9]. The commercial model, where it has been documented, combines in-app subscriptions with heavy paid marketing. Crushmate charges $9.99 a week for a "VIP" tier; a Stripe dashboard disclosed in court filings showed 4,563 successful payments totaling at least $45,584 to that single subscription plan by August 2024 [8]. Apple and Google take up to 30% of in-app purchases on their stores [1].

Advertising, not the app stores themselves, appears to be the primary acquisition channel. Meta sued Joy Timeline in Hong Kong in June 2025 after the company placed more than 87,000 ads on Facebook and Instagram in the first weeks of 2025 — 90% of CrushAI's inbound traffic, by Meta's accounting — using at least 170 business accounts to evade takedowns [9][10][11]. Meta is seeking to claw back $289,200 it says it spent investigating and removing the ads [9].

Victims, mostly minors, mostly girls

The clearest data on who is targeted by these tools comes from the Center for Democracy & Technology's 2024 report "In Deep Trouble," which surveyed U.S. public high school students, parents, and teachers. Fifteen percent of students said they were aware of at least one sexually explicit deepfake depicting someone associated with their school during the prior school year — roughly 2.3 million students out of 15.3 million enrolled [12]. Both victims and perpetrators in the cases surveyed were overwhelmingly students themselves, and female and LGBTQ+ students reported less confidence in their schools' ability to respond [12].

A joint investigation by WIRED and the research group Indicator identified nearly 90 schools and 600 individual student victims worldwide targeted with AI-generated nudes over the same period [13]. Women and girls are estimated to make up about 90% of the people depicted in nonconsensual intimate imagery (NCII) created with these tools [13]. On November 2, 2023, students at a New Jersey high school used a nudify app to generate sexualized images of more than 30 female classmates — one of the first U.S. incidents to reach law enforcement [13]. The Boston Globe reported in April 2026 on a middle-school case in Massachusetts in which a boy produced a fake nude of a female classmate and, despite a criminal referral, faced no adjudicated consequences [14].

Traffic data collected by the social-network-analysis firm Graphika shows the scale of the broader nudify economy outside the app stores. In September 2023, 24 million unique visitors used undressing websites; one operator alone recorded more than 5 million monthly visits [3][13]. The CDT survey found that very few schools had written policies on NCII — authentic or synthetic — and that available responses were concentrated on punishing student distributors rather than supporting victims [12].

[Chart: Research Publications on 'Deepfake Nonconsensual'. Source: OpenAlex; data as of Apr 16, 2026.]

Academic attention has lagged the harm: OpenAlex indexes 514 peer-reviewed papers using the phrase "deepfake nonconsensual" through April 2026, with output climbing from three papers in 2018 to a peak of 160 in 2025 [15].

A patchwork of laws

The legal instruments covering AI-generated intimate imagery are newer than the apps themselves, and they differ on a basic question: whether creation — not just distribution — is a crime.

In the United States, the TAKE IT DOWN Act was signed by President Donald Trump on May 19, 2025. The statute criminalizes knowingly publishing or threatening to publish nonconsensual intimate imagery, including AI-generated material, and requires "covered platforms" to remove flagged content within 48 hours of a valid notice [16][17]. The DEFIANCE Act, which would create a federal civil cause of action letting survivors sue those who produce or distribute sexual deepfakes of them, cleared the Senate in 2024, failed in the House, and was reintroduced by Representative Alexandria Ocasio-Cortez after TAKE IT DOWN became law [18][19].

In the United Kingdom, sharing non-consensual intimate images has been illegal since 2015, and the Online Safety Act 2023 extended that prohibition to deepfake intimate content [20]. In January 2026, Technology Secretary Liz Kendall announced that creation of nonconsensual intimate images — whether or not the image leaves the creator's device — will become a distinct criminal offense, and said the government would bring forward amendments to the Crime and Policing Bill to ban AI "nudification" tools outright [20][21].

Australia's Criminal Code Amendment (Deepfake Sexual Material) Act 2024 took effect on September 3, 2024. It criminalizes the non-consensual transmission of sexualized deepfakes over a carriage service, with a maximum of six years' imprisonment and an aggravated offense where the person transmitting the material also created it. Notably, the Act does not criminalize creation on its own: a deepfake that is made but never transmitted falls outside the statute [22].

The European Union's AI Act imposes transparency and labeling requirements on deepfakes, including machine-readable marking [23]. Several EU member states have passed or are debating national laws directly criminalizing synthetic intimate imagery. Across the four jurisdictions reviewed here, the common gap is that creation-only offenses — which would reach a teenager who makes a nude of a classmate but never shares it — remain rare. The U.K.'s pending reform is among the first proposals to close that gap explicitly [21].

The case for and against app-store removal

The most substantive objection to app-store takedowns as a remedy is that they do not reach the underlying technology. Open-source image-generation models such as Stable Diffusion, along with community-maintained inpainting tools, can produce the same outputs as a paid nudify app; websites using those models operate on ordinary web hosts and are reachable from any browser [24][13]. Graphika's traffic figures support the view that demand is not constrained by app-store availability: tens of millions of users already rely on web-based tools [3][13]. Critics of sweeping platform bans also point out that image-editing capabilities — including inpainting of clothing — have legitimate uses in fashion, art, and medical imaging.

Platform accountability researchers make the counter case on two grounds. First, app stores are not content-neutral pipes; they operate recommendation, search, and advertising systems. TTP's April 2026 report found that autocomplete suggestions and paid search results in both stores actively surfaced nudify apps to users who began typing related queries, meaning the stores were functionally marketing the apps they had elsewhere promised to block [2][4][7]. Second, mobile distribution lowers the friction that keeps casual users off more technical web tools. A nudify website requires a credit card, a browser, and some search literacy; a one-tap install rated "9+" requires none of those. The CDT findings on school incidents suggest that much of the observed harm is driven by teenagers using mainstream app-store products rather than bespoke model deployments [12].

Meta, which runs neither app store but depends on them, has taken the position that advertising is the load-bearing distribution channel and that litigation against developers can meaningfully degrade supply. Its June 2025 Hong Kong suit against Joy Timeline is the first high-profile test of that theory [10][11].

What advocates are asking for

Child-safety organizations, NCII survivor groups, and academic researchers have converged on a short list of demands. The first is consistent, proactive removal of nudify apps and of the ads, autocomplete entries, and recommendation slots that funnel users to them — a request directed at Apple, Google, and Meta [2][7]. The second is a federal creation offense in the United States that would complement the TAKE IT DOWN Act's distribution-focused language; that gap is the motivation for re-passage of the DEFIANCE Act and for companion state statutes [18][19]. The third is narrower statutory language addressing minors as both victims and perpetrators, since many existing deepfake laws were drafted for adult revenge-porn scenarios and do not map onto schoolyard incidents [12][13][14]. The fourth is funding and training for schools to respond to NCII cases, which CDT researchers describe as the binding constraint on student-level remediation [12].

The timeline for any of these measures is uncertain. The U.K. creation-offense proposal is tied to the Crime and Policing Bill's passage, which the government has not scheduled for final reading. The DEFIANCE Act's revival depends on U.S. House floor time that it did not receive in 2024. The EU AI Act's deepfake-transparency provisions phase in over 2026 and 2027. Platform-level changes, as the January-to-April gap in TTP's removal data shows, have so far proceeded one investigative report at a time.

Sources (24)

  [1] Nudify Apps Widely Available in Apple and Google App Stores (techtransparencyproject.org)

    TTP's January 2026 investigation identifying 55 nudify apps on Google Play and 47 on the Apple App Store, with 705M downloads and $117M revenue.

  [2] Apple and Google Are Steering Users to Nudify Apps (techtransparencyproject.org)

    April 2026 TTP follow-up showing app stores' own search and ad systems surface nudify apps; 483M downloads and $122M revenue; 31 apps rated for minors.

  [3] Deepfake porn apps downloaded 705 million times on Apple, Google stores (upi.com)

    Coverage of the TTP report and aggregate download and revenue numbers.

  [4] Apple, Google under fire as report finds nudify apps with 483 million downloads (startupnews.fyi)

    Summary of the April 2026 TTP follow-up report.

  [5] Developer Program Policy (support.google.com)

    Google Play policy language including prohibitions on sexual nudity and apps claiming to undress people.

  [6] Understanding Google Play's AI-Generated Content policy (support.google.com)

    Google Play's AI-content policy requiring moderation, flagging tools, and automated scans.

  [7] App Store search suggestions reportedly steered users to 'nudify' apps (9to5mac.com)

    Coverage of TTP's April report on autocomplete and ad steering, including removal counts.

  [8] Companies Linked to CrushAI 'Nudify' Apps (bellingcat.com)

    Bellingcat investigation of Joy Timeline HK, Soul Friendship HK, and Wuhan Ruisen corporate structure and subscription revenue.

  [9] Meta sues Hong Kong firm over AI app making non-consensual explicit images (scmp.com)

    Coverage of Meta's June 2025 suit against Joy Timeline including the $289,200 claim and 87,000 ads figure.

  [10] Combating Nudify Apps with Lawsuit & New Technology (about.fb.com)

    Meta's own announcement of the CrushAI lawsuit and enforcement measures.

  [11] Meta files lawsuit against developer of CrushAI 'nudify' app (cnbc.com)

    CNBC report on the CrushAI lawsuit filed in Hong Kong.

  [12] In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools (cdt.org)

    CDT survey finding 15% of US high school students aware of deepfake NCII at their school, ~2.3M students affected.

  [13] Minors Are On the Frontlines of the Sexual Deepfake Epidemic (techpolicy.press)

    Analysis of WIRED/Indicator findings on 90 schools and 600 student victims, and 90% female victim statistic.

  [14] He made a fake nude of his middle school classmate. Nothing happened. (bostonglobe.com)

    Case study from a Massachusetts middle school illustrating the gap between existing law and teenage perpetrators.

  [15] OpenAlex — Deepfake Nonconsensual Publication Trend (openalex.org)

    Academic publication trend on deepfake nonconsensual imagery, 2018–2026.

  [16] TAKE IT DOWN Act (en.wikipedia.org)

    Overview of the TAKE IT DOWN Act, signed May 19, 2025, requiring 48-hour platform takedowns.

  [17] The TAKE IT DOWN Act: A Federal Law Prohibiting the Nonconsensual Publication of Intimate Images (congress.gov)

    CRS overview of the federal TAKE IT DOWN Act's scope and enforcement mechanisms.

  [18] Ocasio-Cortez, Lee, Durbin, Graham Introduce Bipartisan Legislation to Combat Deepfake Imagery (ocasio-cortez.house.gov)

    Official announcement of the DEFIANCE Act, which would give survivors a federal civil cause of action.

  [19] Fabricated Images, Real Harm: The DEFIANCE Act and Federal Civil Remedies (uclawreview.org)

    University of Cincinnati Law Review analysis of DEFIANCE Act provisions and remedies.

  [20] UK Tightens Laws on AI Generated Sexual Deepfakes (dpp-law.com)

    Analysis of UK legal framework on sexual deepfakes, including Online Safety Act 2023 provisions.

  [21] Liz Kendall details government's plans to tackle intimate AI deepfakes (committees.parliament.uk)

    UK Technology Secretary announcing criminalization of creation of intimate AI images and Crime and Policing Bill amendments.

  [22] The Criminal Code Amendment (Deepfake Sexual Material) Act 2024 (codea.com.au)

    Analysis of Australia's federal deepfake criminal statute, effective September 3, 2024.

  [23] The ongoing efforts across Europe to combat explicit deepfakes, CSAM (iapp.org)

    IAPP overview of EU AI Act transparency requirements and European national legislation on deepfakes.

  [24] The ecosystem of nonconsensual intimate deepfake tools online (isdglobal.org)

    Institute for Strategic Dialogue review of open-source and commercial nudify tool ecosystem.