UK PM Starmer Warns Tech Executives That Current Online Safety Approach Is Unsustainable
TL;DR
UK Prime Minister Keir Starmer summoned executives from Meta, X, TikTok, Google, and Snap to Downing Street on April 16, 2026, warning that the current approach to online safety is "unsustainable" and threatening measures including a potential ban on under-16s from social media. The confrontation comes as AI-generated child sexual abuse material has surged from 7 reports in 2022 to 426 in the first ten months of 2025, while Ofcom's enforcement of the Online Safety Act has so far produced only modest fines totalling roughly £1.4 million — a fraction of what the EU has levied under its Digital Services Act.
On April 16, 2026, Prime Minister Keir Starmer and Technology Secretary Liz Kendall sat across the table from representatives of five of the world's largest social media companies and delivered a blunt message: "Things can't go on like this".
The Downing Street summit — attended by Google UK managing director Kate Alessi, Meta public policy principal Markus Reinisch, X global government affairs director Wifredo Fernandez, TikTok northern Europe public policy director Alistair Law, and Snap Europe president Ronan Harris — was framed as a direct confrontation over child safety online. No CEOs were present. The meeting produced no binding commitments and no signed agreements. What it did produce was the clearest signal yet that the UK government considers its own landmark Online Safety Act insufficient — barely two years after it became law.
The Scale of the Problem
The numbers behind Starmer's frustration are stark. Reports of AI-generated child sexual abuse material (CSAM) to the Internet Watch Foundation rose from just 7 in 2022 to 51 in 2023, 245 in 2024, and 426 in the first ten months of 2025 alone. Total AI-generated CSAM cases identified in 2025 exceeded 8,000, with AI-generated videos surging 260-fold year-on-year. Category A material — the most severe, involving penetration, bestiality, or sadism — rose from 2,621 to 3,086 items, now accounting for 56% of all illegal material compared to 41% the previous year. Girls represent 94% of victims depicted in illegal AI-generated content. AI-generated depictions of infants aged 0–2 jumped from 5 to 92.
Beyond CSAM, the broader landscape of online harm is extensive. Amnesty International UK polling of 3,024 UK respondents aged 16–25 found that 73% of Gen Z social media users have witnessed misogynistic content online, with 50% encountering it weekly. Among Gen Z women who experienced online misogyny, 44% received unsolicited explicit images, 43% experienced body-shaming, and 32% faced hate speech. Women from ethnic minority backgrounds who experienced misogyny were more likely to face hate speech than their white counterparts — 38% versus 31%.
Home Office statistics recorded 84,374 racially or religiously aggravated crimes in England and Wales in the year ending March 2025, with 23% of victims identifying as Black and 33% as Asian. The Online Safety Act was supposed to address the online dimension of these harms, but enforcement to date has not matched the scale of the problem.
What Ofcom Has Actually Done
Since the Online Safety Act came into force, Ofcom has launched 5 enforcement programmes and opened 21 investigations as of October 2025, with its investigations later expanding to cover 83 pornography sites alone. The regulator requested and received 104 risk assessment records under its enforcement programmes.
But the fines levied so far have been modest. A file-sharing service was fined £20,000 for failing to respond to information requests. AVS, a nudification site, received a £50,000 fine. Kick Online Entertainment SA was fined £800,000 for failing to comply with age check requirements. 4chan was fined £520,000 for non-compliance. The total: roughly £1.39 million.
The Online Safety Act permits fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. For Meta, 10% of worldwide revenue would exceed $13 billion; for Google, over $30 billion. The fines imposed so far come nowhere near these maximums. No major platform — none of the companies represented at the Downing Street meeting — has been fined.
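The penalty ceiling is a simple "greater of" rule. A minimal sketch, assuming illustrative revenue figures converted to sterling (these are rough approximations for demonstration, not statutory determinations of "qualifying worldwide revenue"):

```python
def osa_max_fine(worldwide_revenue_gbp: float) -> float:
    """Maximum OSA penalty: the greater of £18 million or 10% of
    qualifying worldwide revenue."""
    return max(18_000_000, 0.10 * worldwide_revenue_gbp)

# Illustrative figures only — assumed conversions, not official numbers.
meta_scale_revenue_gbp = 130_000_000_000   # roughly $164bn converted
small_service_revenue_gbp = 50_000_000

# A large platform's cap scales with revenue; a small service whose 10%
# falls below £18m is still exposed to the £18m floor.
print(f"Large-platform cap: £{osa_max_fine(meta_scale_revenue_gbp):,.0f}")
print(f"Small-service cap:  £{osa_max_fine(small_service_revenue_gbp):,.0f}")
```

The "whichever is greater" structure means the £18 million floor binds only for services below £180 million in qualifying revenue; above that, the 10% term dominates.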
By comparison, the European Commission fined X (formerly Twitter) €120 million in December 2025 for breaching transparency obligations under the EU's Digital Services Act. The DSA permits fines of up to 6% of global annual turnover. While the OSA's theoretical 10% cap is higher, the EU has moved faster to impose significant penalties on household-name platforms.
Who Was in the Room — And What They Promised
The executives summoned to Downing Street were mid-level policy and government affairs officials, not C-suite decision-makers. Meta sent a public policy principal, not Mark Zuckerberg. X sent a government affairs director, not owner Elon Musk. Google, TikTok, and Snap sent regional directors.
Starmer told the assembled executives: "I want to know that when my kids pick up their phones, they're not being exposed to things that can harm them, or glued to their screens by addictive design". He stated that a ban on children accessing platforms would be "preferable to a world where harm is the price of participation".
The meeting generated no legally binding obligations. It was, in regulatory terms, a political event. The government's "Growing Up In The Online World" consultation, which has received over 45,000 responses including nearly 6,000 from young people, closes on May 26, 2026. Options under consideration include an Australia-style social media ban for under-16s, curfews, app time limits, and restrictions on addictive design features.
Ellen Roome, founder of the campaign group Jools Law and a bereaved mother, dismissed the meeting as a "stunt," criticising the government for telling MPs to vote against raising the minimum age for social media access while publicly claiming commitment to child protection.
The Compliance Cost Debate
Industry groups argue that Online Safety Act compliance is prohibitively expensive. PwC estimates the Act affects roughly 100,000 companies worldwide. Companies must navigate over 3,000 pages of regulatory guidance, conduct legal risk assessments, and implement age verification systems costing approximately $1.50 per user. Ofcom's own budget for online safety work reached £92 million in FY 2025/26, up from £71 million the previous year — costs ultimately passed to regulated service providers.
The impact on smaller operators has been tangible. Microcosm, a non-profit forum platform, announced it would shut down in March 2025 rather than bear compliance costs. Some smaller sites and forums have blocked UK users entirely rather than invest in compliance infrastructure.
But this picture requires context. The £92 million Ofcom budget, spread across all regulated platforms, represents a fraction of the operating costs of major tech companies. Meta reported $164 billion in revenue in its most recent fiscal year. Google parent Alphabet reported over $340 billion. The ITIF, a US-based technology policy think tank, has argued that the Act's 34-million-monthly-user threshold for enhanced duties captures leading US platforms while exempting smaller competitors, creating an uneven playing field. Whether that is a design flaw or a feature — protecting smaller operators from disproportionate burdens while holding dominant platforms accountable — depends on one's perspective.
Independent economists have not produced a comprehensive, publicly available analysis proving that compliance costs are genuinely prohibitive for major platforms relative to their operating profits. The costs are real for small and mid-sized services but represent a rounding error for the companies whose executives sat in Downing Street.
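A rough back-of-envelope calculation makes the scale mismatch concrete. The figures come from the article; the exchange rate is an assumption for illustration only:

```python
# Compare the entire Ofcom online-safety budget to one large platform's
# revenue. Figures from the article; the GBP/USD rate is an assumption.
ofcom_online_safety_budget_gbp = 92_000_000      # FY 2025/26
meta_revenue_usd = 164_000_000_000               # most recent fiscal year
usd_per_gbp = 1.27                               # assumed exchange rate

budget_usd = ofcom_online_safety_budget_gbp * usd_per_gbp
share = budget_usd / meta_revenue_usd

# The whole regulatory budget — spread across ~100,000 regulated firms —
# amounts to well under a tenth of one percent of Meta's revenue alone.
print(f"Ofcom's full online-safety budget vs Meta revenue: {share:.4%}")
```

Even if a major platform bore the entire budget itself, the cost would sit around 0.07% of its top line — consistent with the article's "rounding error" characterisation.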
The Encryption Standoff
If Starmer moves toward a stricter regime, the most technically contentious measure would be mandatory scanning of encrypted messages. The Online Safety Act already contains provisions allowing Ofcom to require "accredited technology" to detect CSAM in private messages — a power that, if exercised, would require some form of client-side scanning.
The Open Rights Group, coordinating with European Digital Rights (EDRi) and over 80 civil society organisations from 23 countries, has warned that this would make the UK "the first liberal democracy to require the routine scanning of people's private chat messages". The technology, known as client-side scanning, would operate by analysing message content on users' devices before encryption is applied.
Cryptographers and security researchers have consistently argued that client-side scanning fundamentally undermines end-to-end encryption. If a device scans messages before encrypting them, the encryption no longer protects user privacy from the scanning system itself. Any backdoor or scanning mechanism creates a vulnerability that could be exploited by hostile actors. High Court challenges led by civil liberties groups are focusing on compatibility with the Human Rights Act.
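The structural point can be shown in a deliberately simplified sketch. Everything here is hypothetical: real proposals match perceptual hashes of images (PhotoDNA-style), not exact hashes of text, and the "cipher" below is a toy stand-in. What the sketch illustrates is only the architecture critics object to — the scan runs on plaintext, so encryption no longer shields the message from the scanner:

```python
import hashlib

# Hypothetical blocklist of content hashes. Real deployments use
# perceptual image hashing, not exact SHA-256 of message bytes.
BLOCKLIST = {hashlib.sha256(b"known-illegal-sample").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Check a message against the blocklist. Crucially, this runs on
    the plaintext, before any encryption is applied."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def send(plaintext: bytes, encrypt):
    """Scan, then encrypt. Because the scanner sees unencrypted
    content, end-to-end encryption no longer protects the message
    from the scanning system itself."""
    if client_side_scan(plaintext):
        return None  # flagged: blocked or reported before transmission
    return encrypt(plaintext)

# Toy XOR "encryption" as a stand-in for a real E2E cipher.
toy_encrypt = lambda m: bytes(b ^ 0x42 for b in m)

assert send(b"hello", toy_encrypt) is not None          # ordinary message passes
assert send(b"known-illegal-sample", toy_encrypt) is None  # match is intercepted
```

The security argument follows directly from the control flow: whoever controls `BLOCKLIST` and `client_side_scan` has pre-encryption access to every message, which is why researchers describe the scanning component itself as the vulnerability.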
Starmer has stated his commitment to privacy, but his government has confirmed plans to grant Ofcom powers to scan encrypted chats. Ofcom was expected to issue its report on the technical feasibility of such scanning by April 2026. This tension — between the government's stated commitment to privacy and its pursuit of tools that privacy advocates say would destroy it — remains unresolved.
How the UK Compares Internationally
The UK's Online Safety Act sits within a global wave of platform regulation, but it differs from peer frameworks in significant ways.
EU Digital Services Act: The DSA takes a broader approach, covering not just illegal content but also dark patterns, illegal goods, and intellectual property issues. Enforcement is shared between national Digital Services Coordinators and the European Commission, which has direct authority over platforms with more than 45 million monthly EU users. The DSA's penalty cap is 6% of global turnover — lower than the OSA's 10% — but the EU has already imposed a €120 million fine on X, while Ofcom's largest fine is £800,000. The DSA emphasises reactive notice-and-takedown procedures, while the OSA places greater weight on proactive monitoring obligations.
Australia: The eSafety Commissioner enforces mandatory takedown obligations and basic safety expectations. Australia passed the Online Safety Amendment (Social Media Minimum Age) Act 2024, which took effect in December 2025, requiring platforms to take "reasonable steps" to prevent under-16s from having accounts. By mid-January 2026, over 4.7 million accounts judged to belong to individuals under 16 had been deactivated, removed, or restricted. The eSafety Commissioner has expressed "significant concerns" about compliance by Facebook, Instagram, Snapchat, TikTok, and YouTube. Penalties can reach AUD $49.5 million for systemic non-compliance.
Germany: Germany operates a dual system combining federal and state-level protections. The Federal Youth Protection Act (JuSchG), reformed in 2021, requires online platforms to implement age ratings and precautionary measures. Germany was an early mover with its NetzDG law in 2017, which required platforms to remove "manifestly unlawful" content within 24 hours. The NetzDG has since been largely superseded by the DSA but established a model for rapid takedown requirements.
No single jurisdiction has produced definitive evidence of large, measurable reductions in harmful content without provoking significant free-speech litigation. Australia's age-ban approach has generated the most dramatic compliance numbers (4.7 million accounts removed) but faces questions about effectiveness given the ease of circumvention. The EU has levied the largest financial penalties but is still in the early stages of measuring outcomes. The UK has the most detailed regulatory framework on paper but the weakest enforcement record to date.
Is the Online Safety Act Actually Failing?
Starmer's "unsustainable" framing raises a question: is the Act genuinely failing, or is this political positioning ahead of a regulatory tightening?
The evidence suggests both. On the enforcement side, there are documented gaps. Ofcom's initial codes of practice omitted specific measures to mitigate risks to children from livestreaming, location information sharing, and ephemeral messaging. Dating industry analysis revealed major compliance gaps ahead of the April 7, 2026 CSEA reporting deadline, suggesting an industry that "has known the deadline was coming for over two years and has still not adequately prepared". Downloads of VPN services by UK users surged as some sought to circumvent age verification requirements, and users successfully bypassed photo-based age verification using images from video games.
At the same time, the Act is barely operational. Phase 2 children's safety codes only came into force in early 2026. Major platforms have yet to face substantive enforcement action. The question of whether the Act has "failed" may be premature — it has not yet been fully tested.
The political dimension is also clear. The White House has warned Starmer to "stop threatening American tech companies' free speech," with Trump administration officials monitoring the OSA with "great interest and concern". US diplomats from the Bureau of Democracy, Human Rights, and Labor travelled to London in March 2026 to "affirm the importance of freedom of expression" and challenged Ofcom directly. During his meeting with Starmer in Scotland, President Trump warned the Prime Minister not to censor Truth Social. The US lobbying group NetChoice called the OSA an "ID-for-Speech law" being used to "censor and silence lawful expression and political dissent".
This transatlantic pressure creates a political incentive for Starmer to reframe the issue as one of child protection — where public support is overwhelming — rather than content moderation more broadly, where his government faces pushback from Washington and Silicon Valley alike.
Who Gets Protected — And Who Doesn't
The changes Starmer is signalling would most directly address harms to children: CSAM exposure, addictive design features, and contact risks from strangers. An under-16 ban, if implemented, would remove the youngest users from platforms entirely — following Australia's approach.
But several categories of harm would remain largely unaddressed. Image-based sexual abuse of adult women — including deepfake pornography, which has exploded alongside generative AI — falls outside the scope of age-based restrictions. The 44% of Gen Z women who report negative mental health impacts from online misogyny are mostly over 16. Hate speech targeting ethnic minorities, which disproportionately affects Black (23% of hate crime victims) and Asian (33%) communities, is addressed by the OSA's illegal content duties but has not been a focus of enforcement action.
The OSA's "legal but harmful" provisions for adults were removed during the Act's parliamentary passage, meaning content that is harmful but not illegal — including much misogynistic abuse — falls into a regulatory gap for users over 18. A stricter regime focused on children's access does nothing to close that gap.
What Comes Next
The government's consultation closes on May 26. The options range from incremental — tighter enforcement of existing OSA provisions, higher fines, faster timelines — to structural, including the under-16 ban, mandatory algorithmic audits, and the politically explosive question of encrypted message scanning.
Academic research on online safety regulation has grown dramatically, with publications rising from under 9,000 in 2011 to over 125,000 in 2025. The policy infrastructure exists. The technical knowledge exists. What remains contested is political will, the tolerance of trade-offs between privacy and safety, and whether the UK is prepared to follow through on enforcement that matches the severity of its rhetoric.
Starmer's meeting with tech executives was, by his own framing, a warning shot. The question is whether the next phase brings enforcement with teeth — or another round of consultations, summits, and warnings that leave platforms free to treat UK regulation as a cost of doing business rather than a constraint on how they operate.
Sources (26)
- [1] Tech bosses called to Downing Street as PM urges new protections for children on social media (lbc.co.uk)
Starmer and Kendall questioned executives on what they are doing to protect children. Government consultation has received over 45,000 responses.
- [2] Social media bosses warned 'things can't go on like this' at Downing Street child safety meeting (thenationalnews.com)
Google UK MD Kate Alessi, Meta's Markus Reinisch, X's Wifredo Fernandez, TikTok's Alistair Law, and Snap's Ronan Harris attended the meeting.
- [3] Online AI-generated child sexual abuse material increased in 2025 (care.org.uk)
IWF reported AI-generated CSAM reached record high in 2025 with over 8,000 cases. Category A content rose to 56% of all illegal material.
- [4] AI-Generated Child Abuse Content Surges 260-Fold, Watchdog Reports (britbrief.co.uk)
AI-generated child abuse videos saw a 260-fold rise. Girls represent 94% of illegal AI-generated content.
- [5] Toxic tech: New polling exposes widespread online misogyny driving Gen Z away from social media (amnesty.org.uk)
73% of Gen Z social media users have witnessed misogynistic content. 44% of Gen Z women report negative mental health impacts.
- [6] Hate crime, England and Wales, year ending March 2025 (gov.uk)
84,374 racially or religiously aggravated crimes recorded. 23% of victims identified as Black, 33% as Asian.
- [7] 2025 UK Online Safety Act: Key Milestones and Future Steps (cms-lawnow.com)
Ofcom launched 5 enforcement programmes and opened 21 investigations by October 2025. Fines issued against file-sharing service, Kick, and 4chan.
- [8] Online Safety industry bulletin - March 2026 (ofcom.org.uk)
Ofcom expanded investigations to 83 pornography sites. Requested 104 risk assessment records under enforcement programmes.
- [9] Enforcing the Online Safety Act: Ofcom fines file-sharing service £20,000 (ofcom.org.uk)
Ofcom fined a file-sharing service £20,000 for not responding to legally binding information requests.
- [10] Online Safety Act: explainer (gov.uk)
Fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater.
- [11] The UK's Online Safety Act (itif.org)
ITIF analysis: the Act's 34-million user threshold captures leading US platforms while exempting smaller competitors. 10% revenue fines create vast financial exposure.
- [12] Commission fines X €120 million under the Digital Services Act (ec.europa.eu)
European Commission fined X €120 million in December 2025 for breaching transparency obligations under the DSA.
- [13] Enforcement and Penalties under the EU Digital Services Act (edaa.eu)
DSA fines can reach up to 6% of global annual turnover. Periodic penalty payments of up to 5% of average daily worldwide turnover.
- [14] Keir Starmer tells social media firms he is considering a child ban (upi.com)
Starmer warned that a ban on children would be 'preferable to a world where harm is the price of participation.'
- [15] No more meetings and photo ops, Keir Starmer must act now on online safety (lbc.co.uk)
Ellen Roome criticised the government for telling MPs to vote against raising the age limit while claiming commitment to child protection.
- [16] UK online safety deadlines loom, impacting 100K companies worldwide (pwc.com)
PwC estimates the Online Safety Act impacts roughly 100,000 companies worldwide. Companies must navigate over 3,000 pages of regulatory guidance.
- [17] Online Safety Act fees and penalties to be set by reference to revenue (pinsentmasons.com)
Ofcom's budget for online safety reached £92 million in FY 2025/26, up from £71 million in FY 2024/25.
- [18] UK's Online Safety Act Forces Small Tech Companies to Block British Users (biggo.com)
Microcosm, a non-profit forum platform, shut down in March 2025. Some smaller sites blocked UK users due to compliance costs.
- [19] Save Encryption - Open Rights Group (openrightsgroup.org)
Open Rights Group coordinated letter with 80+ organisations warning UK could become first liberal democracy to require routine scanning of private messages.
- [20] UK Government Pushes for Mass Scanning of Encrypted Messages (privacysavvy.com)
UK confirmed plans for Ofcom to scan encrypted chats using client-side scanning. High Court challenges focus on Human Rights Act compatibility.
- [21] The Differences Between the Online Safety Act & the Digital Services Act (trustlab.com)
OSA emphasises proactive monitoring; DSA uses reactive notice-and-takedown. DSA covers broader topics including dark patterns and illegal goods.
- [22] Australia's Social Media Ban and the eSafety Commissioner's Regulatory Guidance (privacymatters.dlapiper.com)
4.7 million accounts deactivated or restricted by mid-January 2026. eSafety has significant concerns about platform compliance. Penalties up to AUD $49.5 million.
- [23] Implementation of the Online Safety Act - House of Commons Library (commonslibrary.parliament.uk)
Ofcom's initial codes omitted measures for livestreaming, location sharing, and ephemeral messaging risks to children.
- [24] Dating Industry Insights Reveals Major Compliance Gaps as CSEA Deadline Approaches (marketersmedia.com)
Industry that has known the deadline was coming for over two years has still not adequately prepared for April 2026 CSEA reporting requirements.
- [25] White House warns Starmer: Stop threatening US tech companies' free speech (yahoo.com)
Trump administration monitoring OSA with 'great interest and concern.' US diplomats travelled to London in March to challenge Ofcom. Trump warned Starmer not to censor Truth Social.
- [26] OpenAlex: Research publications on online safety regulation (openalex.org)
Academic publications on online safety regulation peaked at 125,525 in 2025, up from under 9,000 in 2011.