Starmer Summons Big Tech to Downing Street — But Can the UK Actually Force Platforms to Protect Children?
On April 15, 2026, Prime Minister Keir Starmer summoned senior executives from Meta, TikTok, Google, Snap, and X to 10 Downing Street for a meeting on children's online safety [1]. "Social media shapes how children see themselves, their friendships and the world around them," Starmer told the assembled tech leaders. "When that comes with real risks, looking the other way is not an option" [1].
The summit, which Starmer attended alongside Technology Secretary Liz Kendall, was framed as a warning shot: the government is consulting on whether to impose an Australia-style outright ban on social media for under-16s, curb addictive design features, and introduce stricter controls on AI chatbots [2]. The consultation closes on May 26, 2026, with legislation expected as early as autumn 2026 [3].
But the meeting also drew criticism. Ellen Roome, whose son died attempting an online challenge, dismissed the summit as a "stunt," arguing that meetings without legislative action do not protect children [1].
The Scale of the Problem
The data on children's exposure to online harm in the UK is extensive, though — as the NSPCC notes — incomplete, because not all incidents are reported to authorities [4].
Over 9,000 child sexual abuse offences involved an online element in 2023/24 [4]. Online grooming cases have increased by 70% since 2019, with perpetrators increasingly using gaming platforms and social media to make initial contact, according to the National Crime Agency [5]. The NCA has identified cases in which girls as young as 11 have been coerced into seriously harming themselves or siblings [5].
Ofcom's own research indicates that 62% of children aged 13–17 report encountering online harm over a four-week period, with many treating it as an "unavoidable" part of their online lives [6]. Meanwhile, 72% of children aged 8–12 are accessing platforms despite minimum age policies of 13 [6]. Children with Special Educational Needs are disproportionately affected: 27% regularly view sites promoting self-harm, compared to 17% of their peers [7].
Comparative pre-smartphone baseline data is harder to pin down. The NSPCC's statistics briefing acknowledges that the changing nature of how harm is reported — and the emergence of entirely new categories of harm like sextortion and algorithm-amplified self-harm content — makes direct historical comparison difficult [4]. What is clear is that the volume and variety of threats children face online have expanded substantially since 2015.
As Ofcom's 2025 media literacy report shows, social media use among even the youngest children is accelerating: 37% of 3-to-5-year-olds now use social media, up from 29% the prior year, and 97% of 13-to-17-year-olds are active users [8].
What the Online Safety Act Actually Requires — and Why Enforcement Has Been Slow
The Online Safety Act received Royal Assent in October 2023, but its child-safety provisions did not take effect immediately [9]. The Act required Ofcom to first develop and consult on detailed Codes of Practice before platforms had binding obligations. That process took until July 25, 2025, when the Protection of Children Codes of Practice came into force [10].
Since that date, platforms likely to be accessed by children must take steps to prevent young users from encountering "primary priority" content — including pornography and content promoting suicide or self-harm — and to reduce exposure to other "priority" harms [10]. Platforms that fail to comply face fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is higher [11].
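To make the penalty structure concrete: the cap is the larger of the two figures, so the fixed £18 million floor only binds for smaller services. A worked illustration, using a hypothetical platform with £500 million in qualifying worldwide revenue (a figure chosen purely for the arithmetic):

maximum penalty = max(£18 million, 0.10 × revenue) = max(£18 million, £50 million) = £50 million

It follows that for any service with qualifying revenue above £180 million, the 10% arm, not the fixed sum, is the binding cap.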
But enforcement has been incremental. A Taylor Wessing analysis predicted that while 2025 saw enforcement only for "egregious breaches," regulators would cast the net wider in 2026 [12]. That prediction appears to be materializing. On March 12, 2026, Ofcom and the Information Commissioner's Office jointly wrote to Facebook, Instagram, TikTok, Snapchat, YouTube, and Roblox, giving the companies until April 30, 2026, to explain how they will meet demands in four areas: enforcing minimum age rules with effective age checks, tackling grooming, making algorithmic feeds safer for children, and rigorously testing products for child safety [6][13].
Ofcom plans to publish a Register of Categorised Services around July 2026, which will sort platforms by risk level and impose additional duties on the largest "Category 1" providers, including user empowerment features and identity verification requirements [10].
Algorithmic Harm vs. Peer Contact
A recurring question in the regulatory debate is whether algorithmic recommendation systems or peer-to-peer contact poses the greater threat to children online.
Ofcom's evidence points to algorithms as the primary vector. In a 2024 statement, the regulator said recommender systems are "a key pathway for children to encounter harmful content, including suicide, self-harm and eating disorder content, violent content, and pornographic content" [14]. Algorithms narrow the type of content presented to users over time, leading to "increasingly harmful content recommendations" and "cumulative harm through repeated exposure" [14].
At the same time, peer-to-peer contact remains a significant channel for grooming. Ofcom has called for "failsafe grooming protections" — strict controls to prevent strangers from contacting children they do not know [6]. The NSPCC reports that 19% of children aged 10–15 exchanged messages with someone online they had never met before, and that 70% of online grooming and bullying occurs inside private or encrypted messaging apps including WhatsApp, Snapchat, and Discord [4][5].
The evidence suggests these two vectors compound each other: algorithmic feeds surface content that normalizes harmful behavior, while private messaging enables direct exploitation.
How the UK Compares to Other Jurisdictions
The UK is not acting in isolation. Several major jurisdictions are pursuing child online safety legislation simultaneously, though their approaches differ substantially.
Australia enacted the most restrictive approach: a hard ban on social media for under-16s, effective December 2025, with fines of up to A$50 million for non-compliant platforms [15]. The ban requires platforms to take "reasonable steps" to prevent underage account creation. However, critics have questioned whether enforcement is technically feasible without intrusive identity verification [16].
The EU's Digital Services Act takes a design-based approach, requiring platforms to assess and mitigate systemic risks to minors and prohibiting targeted advertising to children. Maximum penalties reach 6% of global revenue. Enforcement has intensified in 2026, with a particular focus on age assurance [12].
The US Kids Online Safety Act (KOSA) has had a more uneven trajectory. It passed the Senate 91–3 in 2024 but stalled in the House [17]. It was reintroduced in May 2025 in the 119th Congress, with the Senate and House versions now diverging: the Senate version would create a "duty of care" covering eating disorders, depression, and compulsive use patterns, while the House Republican version stripped out the duty-of-care provision, removing what critics call the bill's "sharpest legal edge" [17][18]. Unlike the UK and EU frameworks, the US proposals currently include no revenue-based penalty mechanism.
The UK's Online Safety Act sits between Australia's blunt access ban and the EU's systemic-risk framework. Its 10% revenue penalty is the steepest among the four jurisdictions, giving Ofcom substantial theoretical leverage — provided it follows through on enforcement.
The Case Against Heavy-Handed Intervention
Civil liberties groups and some child-safety researchers have raised substantive objections to government-mandated age verification and strict content controls.
The Electronic Frontier Foundation's Molly Buckley has warned that age verification risks tying users' "most sensitive and immutable data — names, faces, birthdays, home addresses — to their online activity," creating concentrated data troves that are attractive targets for hackers and government demands [19]. A New America report found that as of its publication, "strict age verification — confirming a user's age without requiring additional personal identifiable information — is not technically feasible in a manner that respects users' rights, privacy, and security" [20].
The ACLU has argued that children's online safety legislation raises "immense First Amendment concerns," including "chilling effects on social media posts" and high compliance costs [21]. A CNBC investigation found that age-verification laws designed to protect minors in the US are already "pulling millions of adult Americans into mandatory age-verification gates to access online content" [19].
More directly relevant to child safety, critics argue that banning or restricting access for at-risk teenagers may push them to less-regulated corners of the internet. The UK government's own consultation document acknowledged "risks of unintended consequences, including potential displacement to less-regulated platforms" [3]. Oxford University researchers have questioned whether an under-16 ban is "the right course," noting that the relationship between social media use and mental health is more complex than popular narratives suggest [22].
The UK consultation itself leaves key definitional questions unresolved: whether AI chatbots, gaming sites, and messaging applications fall within the scope of a potential ban remains unclear [3].
Do Platform Reforms Actually Work? Instagram's Teen Accounts as a Test Case
Instagram's Teen Accounts — automatically applied to all users under 18 worldwide from June 2025 — offer the closest thing to a real-world experiment in platform-level child safety reform [23].
The protections include a default "13+ content setting" that limits exposure to violent, sexualized, or sensitive content in recommendation-driven areas like Explore and Reels, along with parental controls and restricted direct messaging from strangers [23]. In April 2026, Instagram expanded the system internationally with movie-style content ratings [24].
However, independent audits have raised questions about effectiveness. A September 2025 report found that 64% of teen safety tools were "ineffective, defunct, or easily bypassed," with 13 of 24 features receiving red ratings indicating significant flaws [25]. All five audited Teen Accounts were exposed to harmful content in their feeds and recommendations, including posts glorifying eating disorders and promoting self-harm [25]. Sixty percent of teens aged 13–15 reported seeing harmful content, and nearly as many said they had received unwanted messages, often from adults [25].
Meta has defended its approach, saying it uses "AI to detect users' age based on their activity, and facial age estimation technology," and arguing that "the most effective way to complement our own age assurance approach is to verify age centrally at the app store level" [11]. This position — pushing age verification responsibility to app stores rather than individual platforms — has been a consistent industry stance.
Lobbying and Industry Influence
Precise UK lobbying spending by social media companies on child-safety regulation is difficult to quantify. The UK's lobbying transparency framework relies on reporting requirements rather than the detailed disclosure mandated in the US [26].
US disclosures provide some indication of scale: in 2024, Alphabet, ByteDance, Meta, Microsoft, Snap, and X combined spent $61.5 million on lobbying — a 13% increase over 2023 [27]. A portion of this spending was directed at influencing child-safety legislation, though US filings do not disaggregate spending by specific regulatory issue.
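The growth figure implies a rough baseline for the prior year: if $61.5 million represents a 13% increase, 2023 spending was approximately $61.5 million ÷ 1.13 ≈ $54.4 million.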
In the UK context, one observable industry success was Parliament's repeated rejection of Lords amendments that would have imposed immediate age restrictions on under-16s — amendments that were defeated before the government launched its broader consultation [1]. Whether industry lobbying played a role in those defeats is not documented in public filings.
Elon Musk has gone further than other tech executives, publicly backing a UK campaign to repeal the Online Safety Act entirely [28]. Ofcom has also opened an investigation into Musk's Grok AI over "deeply concerning" deepfake content [29].
The Growing Academic Focus
Academic interest in children's online safety has expanded rapidly. According to OpenAlex data, over 508,000 research papers have been published on the topic, with annual output peaking at 72,776 papers in 2024 [30]. The volume of research reflects both the scale of the policy challenge and the degree of scientific uncertainty that still surrounds key causal questions — particularly whether social media use directly causes mental health harm or merely correlates with other risk factors.
What Happens Next
The immediate timeline is set. Platforms have until April 30 to respond to the Ofcom/ICO joint demands [13]. The government consultation closes May 26 [3]. Ofcom will publish its Register of Categorised Services around July 2026, imposing additional duties on the largest platforms [10]. The government has indicated that new legislation could follow in autumn 2026 [3].
The gap between political rhetoric and regulatory enforcement remains the central tension. The Online Safety Act gives Ofcom the statutory power to levy fines of up to 10% of global revenue — for Meta, that would amount to roughly $16 billion based on 2025 revenue [11]. Whether Ofcom will use that power, or whether the threat alone will produce meaningful changes in how platforms treat child users, is the question that the Downing Street summit left unanswered.
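For context, the arithmetic behind that $16 billion figure is simply the statutory percentage applied to Meta's annual revenue, assuming a revenue base on the order of $160 billion (the implied basis of the reported estimate): 0.10 × $160 billion ≈ $16 billion.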
Sources (30)
- [1] 'Take responsibility': UK berates social media firms (goulburnpost.com.au)
British PM Keir Starmer called on social media companies to 'step up and take responsibility' over children's online safety, ahead of a meeting with executives from Meta, Snap, Google, TikTok and X on April 15, 2026.
- [2] Beyond the Online Safety Act: UK Proposes Sweeping Social Media Ban and Feature Restrictions (cooley.com)
UK government published a consultation on March 2, 2026 covering a potential under-16 social media ban, feature restrictions, and AI chatbot controls, closing May 26, 2026.
- [3] UK Government Launches Consultation on Children's Online Experiences (insideprivacy.com)
The consultation acknowledges risks of unintended consequences, including displacement to less-regulated platforms, and leaves key definitional questions unresolved regarding AI chatbots and messaging apps.
- [4] Online harm and abuse: statistics briefing (learning.nspcc.org.uk)
Over 9,000 child sexual abuse offences involved an online element in 2023/24. 19% of children aged 10–15 exchanged messages with someone online they had never met before.
- [5] Sadistic online harm groups putting people at unprecedented risk, warns the NCA (nationalcrimeagency.gov.uk)
Online grooming cases have increased by 70% since 2019. The NCA identified cases in which girls as young as 11 have been coerced into seriously harming themselves.
- [6] Keep underage children off your platforms, Ofcom tells tech firms (ofcom.org.uk)
Ofcom demands platforms enforce minimum age rules, tackle grooming, make feeds safer, and test products rigorously. 62% of children aged 13–17 report encountering online harm over a four-week period.
- [7] From cyberbullying to self-harm: how online risks could be anticipated in UK's vulnerable kids (internetmatters.org)
27% of children with Special Educational Needs often view sites promoting self-harm, compared to 17% of peers. Children in care are almost twice as likely to be cyberbullied.
- [8] Children and Parents: Media Use and Attitudes Report 2025 (ofcom.org.uk)
Social media use among 3-5s increased from 29% to 37%. 81% of 10-12 year olds use at least one social media app. 97% of 13-17 year olds are active social media users.
- [9] Online Safety Act 2023 (en.wikipedia.org)
The Online Safety Act 2023 received Royal Assent on 26 October 2023, creating a new regulatory framework for online safety in the UK.
- [10] The Online Safety Act Enters Phase 2 (mayerbrown.com)
From 25 July 2025, Ofcom's Protection of Children Codes of Practice came into effect, requiring services to implement measures against harmful content.
- [11] Social media giants urged to tighten child safety after UK rejects blanket ban for teens (cnbc.com)
Meta said it uses AI to detect age and facial age estimation technology. Platforms face fines up to 10% of global revenue. Combined US lobbying spending by major tech companies reached $61.5 million in 2024.
- [12] Online safety in 2026: enhancement and enforcement in the EU and UK (taylorwessing.com)
Enforcement actions expected to intensify throughout 2026. EU DSA enforcement will focus heavily on age assurance and age verification.
- [13] UK Regulators Tell Social Media Platforms 'Prove You're Protecting Children' (wired-parents.com)
On March 12, 2026, Ofcom and the ICO jointly wrote to six platforms (Facebook, Instagram, TikTok, Snapchat, YouTube, and Roblox) demanding they prove they are protecting children.
- [14] Tech firms must tame toxic algorithms to protect children online (ofcom.org.uk)
Recommender systems are a key pathway for children to encounter harmful content. Algorithms lead to increasingly harmful content recommendations and cumulative harm through repeated exposure.
- [15] Online Safety Amendment (Social Media Minimum Age) Act 2024 – Australia (en.wikipedia.org)
Australia enacted a hard under-16 social media ban with fines up to A$50 million, effective December 2025.
- [16] Children's Privacy in 2026: From Australia's Under-16 Social Media Ban to US Shifts (datamatters.sidley.com)
Global shift away from notice-and-consent frameworks toward access restrictions, design mandates, and age-assurance mechanisms for children's online safety.
- [17] Kids Online Safety Act (en.wikipedia.org)
KOSA passed the Senate 91–3 in 2024 but stalled in the House. Reintroduced in May 2025 in the 119th Congress.
- [18] Wave of Federal 'Online Safety' Legislation Hits Congress (dwt.com)
House Republicans revised KOSA, stripping out the duty-of-care provision that gave the bill its sharpest legal edge, while the Senate version retains broader protections.
- [19] Online age-verification tools spread across U.S. for child safety, but adults are being surveilled (cnbc.com)
Age verification risks tying users' most sensitive data to their online activity. EFF's Molly Buckley warns of concentrated data creating attractive hacking targets.
- [20] Challenges with Age Verification (newamerica.org)
Strict age verification confirming a user's age without requiring additional PII is not technically feasible in a manner that respects users' rights, privacy, and security.
- [21] Lawmakers Renew Push to Regulate Kids' Speech Online Despite Speech Protections (aclu.org)
Social media age-verification laws raise immense First Amendment concerns, including data privacy issues and chilling effects on social media posts.
- [22] Expert Comment: Is an under-16 social media ban the right course? (ox.ac.uk)
Oxford researchers question whether banning under-16s is the right approach, noting the relationship between social media and mental health is more complex than popular narratives suggest.
- [23] Teen Accounts: Protections for Teens, Peace of Mind for Parents (about.instagram.com)
Instagram Teen Accounts automatically apply protections to all under-18 users, including content restrictions and parental controls, rolled out globally in June 2025.
- [24] Instagram expands its movie-inspired content restrictions for teens internationally (techcrunch.com)
In April 2026, Instagram expanded 13+ content ratings and Limited Content settings internationally, building on the rollout in the UK, US, Australia, and Canada.
- [25] Instagram Teen Safety Failures: Case Study on Platform Risks for Youth (blog.snoopreport.com)
64% of teen safety tools were ineffective, defunct, or easily bypassed. All five audited Teen Accounts were exposed to harmful content. 60% of teens aged 13–15 reported seeing harmful content.
- [26] Transparency & donations: A comparative series – UK (hoganlovells.com)
UK lobbying transparency relies on reporting requirements rather than stringent limits, with transparency as a key theme.
- [27] Social media companies spend record sums on lobbying in 2024 (issueone.org)
Alphabet, ByteDance, Meta, Microsoft, Snap, and X combined spent $61.5 million on US lobbying in 2024, a 13% increase over 2023.
- [28] Elon Musk Backs UK Campaign to Repeal Online Safety Act (crowdbyte.ai)
Elon Musk publicly backed a UK campaign to repeal the Online Safety Act.
- [29] UK to investigate Elon Musk's Grok over 'deeply concerning' deepfakes (aljazeera.com)
Ofcom opened an investigation into Musk's Grok AI over deeply concerning deepfake content.
- [30] OpenAlex: Research Publications on Children Online Safety (openalex.org)
Over 508,000 research papers published on children's online safety, with annual output peaking at 72,776 in 2024.