Australia's Under-16 Social Media Ban Is Failing — and the Regulator Is Done Waiting
Three months after Australia became the first country to ban children under 16 from social media, the national regulator has flagged five major platforms for potential non-compliance and signaled it is preparing enforcement action. The question now is whether stricter penalties can accomplish what the original law has not.
The Ban, and What Went Wrong
On 29 November 2024, the Australian Parliament passed the Online Safety Amendment (Social Media Minimum Age) Act 2024, amending the Online Safety Act 2021 to prohibit anyone under 16 from holding a social media account [1]. The law took effect on 10 December 2025, covering ten platforms: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Kick, and Twitch [2]. Parents cannot consent on behalf of their children — the ban is absolute [3].
The law places the enforcement burden entirely on platforms, not families. Social media companies must take "reasonable steps" to prevent under-16s from creating and retaining accounts [2]. Courts can impose fines of up to 150,000 penalty units — currently A$49.5 million (approximately US$33 million) — for systemic non-compliance [3].
Yet within weeks of the ban taking effect, the cracks were visible. In the first week, platforms suspended approximately 4.7 million accounts globally [4]. Meta removed around 500,000 accounts from Facebook and Instagram [4]. But a survey of 898 Australian parents conducted for the eSafety Commissioner found that roughly seven in ten reported their child still had an account on an age-restricted platform [5].
The Regulator's March 2026 Compliance Report
On 31 March 2026, eSafety Commissioner Julie Inman Grant published a compliance update identifying "significant concerns" about five platforms: Facebook, Instagram, Snapchat, TikTok, and YouTube [6]. The regulator had issued 23 information-gathering notices to all ten age-restricted platforms since December [7].
The report documented several specific failures. Some platforms allowed users unlimited attempts to pass age assurance checks [6]. Others prompted users who had declared themselves under 16 to try again with a fresh age check rather than blocking them [5]. Several platforms used age-assurance measures only after a user attempted to change their age — not at account creation — meaning children who simply declared themselves 16 or older at sign-up faced no verification at all [5].
"As a result, we are now moving into an enforcement stance," Inman Grant said [5]. The Commissioner indicated that eSafety aims to conclude its investigations and decide whether to initiate court action against any platform by mid-2026 [6].
Reddit, Kick, Twitch, X, and Threads were not among the five flagged platforms, though the compliance report covered all ten [7]. The regulator has not publicly disclosed which platforms it considers more compliant or why.
Age Assurance Technology: What Platforms Are Actually Doing
The law is deliberately "tech-neutral," meaning it does not mandate specific age verification methods [8]. Instead, the eSafety Commissioner evaluates whether each platform's approach constitutes "reasonable steps." Platforms are expected to use a "successive validation" or "waterfall" approach — layering multiple methods rather than relying on self-declaration alone [8].
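The "waterfall" idea can be made concrete with a short sketch: cheap signals run first, and a user is escalated to stronger checks only when earlier stages are inconclusive. This is an illustration of the layering concept only; the stage functions, confidence values, and threshold are invented for the example and do not reflect any platform's actual implementation.

```python
# Hypothetical sketch of a "successive validation" (waterfall) age-assurance
# pipeline. All stage logic, confidence scores, and the 0.7 threshold are
# invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignal:
    estimated_age: Optional[float]  # None means the stage was inconclusive
    confidence: float               # 0.0 to 1.0

def self_declared(profile: dict) -> AgeSignal:
    # Self-declaration alone does not count as "reasonable steps".
    return AgeSignal(profile.get("declared_age"), 0.2)

def behavioural_inference(profile: dict) -> AgeSignal:
    # Placeholder for inference from activity patterns and metadata.
    return AgeSignal(profile.get("inferred_age"), 0.5)

def facial_estimation(profile: dict) -> AgeSignal:
    # Placeholder for a biometric age estimate from a selfie.
    return AgeSignal(profile.get("facial_age"), 0.8)

def waterfall(profile: dict, stages: list, threshold: float = 0.7) -> str:
    """Run stages in order until a confident-enough estimate emerges."""
    for stage in stages:
        signal = stage(profile)
        if signal.estimated_age is None:
            continue  # inconclusive: escalate to the next, stronger stage
        if signal.estimated_age < 16:
            return "block"  # under-16: deny the account
        if signal.confidence >= threshold:
            return "allow"
        # Plausibly 16+ but low confidence: keep escalating.
    return "require_id_verification"  # strongest fallback check

decision = waterfall(
    {"declared_age": 17, "facial_age": 17.5},
    [self_declared, behavioural_inference, facial_estimation],
)
```

The design point the regulator's guidance makes is visible in the structure: no single stage is decisive, and a declared age of 17 still flows through to a higher-confidence check before the account is allowed.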
Available methods range from facial age estimation (where software estimates a user's age from a selfie) to government ID verification and behavioral inference, where platforms analyze account activity patterns, device metadata, and financial information to estimate a user's age [9].
The technology has improved. According to ongoing testing by the US National Institute of Standards and Technology (NIST), facial age estimation software — including tools from Yoti, which performs checks for TikTok and Meta — had an average error of 4.1 years in 2014, which dropped to 3.1 years by 2024 and currently sits at 2.5 years [9]. But that margin remains significant at the legal threshold: distinguishing a 15-year-old from a 16-year-old is precisely where these systems perform worst [10].
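A back-of-envelope calculation shows why a 2.5-year margin matters at the boundary. Assuming, purely for illustration, that estimation errors are roughly normal with a standard deviation comparable to the reported 2.5-year average error (the NIST figure is a mean error, not a standard deviation, so this is an approximation), a true 15-year-old passes a 16+ threshold surprisingly often:

```python
# Illustrative only: treat the 2.5-year average error as if it were the
# standard deviation of a zero-mean normal error distribution.
from statistics import NormalDist

error = NormalDist(mu=0.0, sigma=2.5)  # assumed error distribution, in years
true_age, threshold = 15.0, 16.0

# P(estimated age >= 16) = P(error >= 1 year)
p_pass = 1.0 - error.cdf(threshold - true_age)
print(f"P(15-year-old estimated as 16+) \u2248 {p_pass:.0%}")  # roughly a third
```

Under these assumptions, roughly one in three 15-year-olds would be estimated as 16 or older by facial estimation alone, which is why the guidance treats it as one layer in a waterfall rather than a standalone gate.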
Facial estimation systems also underperform for people with darker skin tones, women, and those with non-normative facial features due to biased training datasets [10]. Australia's government-commissioned Age Assurance Technology Trial nonetheless concluded that age assurance systems, including biometric age estimation, are "an effective way to protect young people from age-inappropriate content online" [10].
Cost is not the primary barrier. Industry vendors charge well under $1 per age check for automated tools, and as little as single-digit cents at scale [9]. For platforms processing billions of users, the aggregate cost is material but modest relative to the A$49.5 million maximum penalty for systemic non-compliance.
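The arithmetic behind "material but modest" is simple. The figures below are illustrative assumptions, not vendor quotes: a mid-range per-check price and a round number of accounts to re-verify.

```python
# Rough aggregate-cost arithmetic under assumed figures.
cost_per_check = 0.05      # assumed US$0.05 per automated age check
checks = 20_000_000        # e.g. re-verifying every Australian account once
total = cost_per_check * checks
print(f"~US${total:,.0f}")  # ~US$1,000,000 vs. a A$49.5M maximum penalty
```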
The Evidence Gap: Does Social Media Cause Youth Mental Health Harm?
The ban's political justification rests on the premise that social media causes measurable harm to young people. The evidence supporting this claim is real but contested.
Australian Institute of Health and Welfare data shows that the proportion of 16- to 24-year-olds with a diagnosable mental health condition rose from 26% in 2011-12 to 38.8% in 2020-21, before declining somewhat to 33.7% in 2022-23 [11].
A 2025 study in the Australian Economic Review documented a "substantial worsening in the mental well-being of Australians aged 15–24 years" beginning around 2007-2010, measured through surveys, self-harm hospitalizations, and suicide deaths, with effects worse for young women than young men [12]. The timing aligns with the rise of smartphones and social media adoption.
But correlation is not causation. A 2025 paper in the Australian and New Zealand Journal of Psychiatry concluded there is "inadequate evidence at this time to conclude that the rise in youth mental illness is attributable to social media" [13]. Orygen, a leading Australian youth mental health research institute, released data in 2025 indicating that moderation — rather than elimination — is key to healthy social media use among teens [14].
Academic interest in the question has surged. Over 264,000 research papers have been published on social media and adolescent mental health, with output peaking at 42,144 papers in 2025 [15].
A 2026 national cross-sectional survey found that one-third of young Australians had used social media to seek support for suicidal thoughts or self-harm, and half had used it to support another person experiencing such difficulties [16]. This finding complicates the case for a blanket ban: the same platforms associated with harm are also serving as crisis support infrastructure.
International Comparison: How Similar Laws Have Fared
Australia is not acting in isolation, but no comparable jurisdiction has demonstrated that age-based bans produce verified reductions in underage usage.
France adopted a law in 2023 requiring parental consent for social media users under 15. A representative survey of French families found 68% of parents supported the regulation — but 54% of adolescents reported circumventing it by providing false age information [17].
Norway announced plans to raise the minimum social media age from 13 to 15, with exceptions for children whose parents authorize access [17].
The United Kingdom has taken a different approach through its Age Appropriate Design Code (also known as the Children's Code), which imposes design obligations on platforms rather than banning access outright. The UK is now also considering an under-16 ban, though the proposal remains under consultation [18]. In March 2026, the UK's Information Commissioner's Office issued an open letter to tech firms demanding stronger age checks and better protection of children's data [19].
A 2025 European review of digital child protection approaches concluded that "pure bans or access restrictions fall short," finding that children in age-tiered digital environments with safety mechanisms showed lower risk exposure and higher digital self-efficacy than those subject to blanket prohibitions [17].
The Circumvention Problem
The law's effectiveness hinges on whether children can bypass it — and early evidence suggests many can.
NPR reported in February 2026 that a 15-year-old from Melbourne successfully recovered her suspended Instagram account using Face ID [4]. Teenagers had already inflated their birth years on existing profiles before the ban took effect, grandfathering themselves into continued access.
VPN usage among Australian minors has risen. At the time the ban took effect, two free VPN services reached Australia's top 30 most popular free apps on the Apple App Store, while alternative social media apps Lemon8, Yope, and Coverstar occupied the top three spots [20].
The eSafety Commissioner's guidance states that platforms must attempt to prevent under-16s from using VPNs to circumvent the ban, using techniques such as device fingerprinting, time zone and language metadata analysis, and correlation of SIM card and phone number origin [21]. But the detection methods create a "cat-and-mouse" dynamic: as platforms block specific VPN servers, VPN companies deploy new IP addresses [21].
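The signal-correlation techniques the guidance describes can be sketched as a simple scoring heuristic. The weights, field names, and threshold logic here are invented for illustration; real detection systems are far more elaborate, and this sketch also shows why the cat-and-mouse dynamic exists, since each signal can be individually spoofed or rotated.

```python
# Illustrative VPN-detection heuristic combining the signal types named in
# the eSafety guidance. Weights and fields are invented for the example.
def vpn_suspicion_score(session: dict) -> float:
    score = 0.0
    if session.get("ip_country") != session.get("sim_country"):
        score += 0.4  # IP geolocation disagrees with SIM / phone-number origin
    if session.get("ip_country") != session.get("timezone_country"):
        score += 0.3  # device time zone doesn't match the apparent location
    if session.get("ip_in_known_vpn_range"):
        score += 0.3  # address falls in a published VPN provider range
    return min(score, 1.0)

session = {
    "ip_country": "US",           # exit node abroad
    "sim_country": "AU",          # but an Australian SIM
    "timezone_country": "AU",     # and an Australian time zone
    "ip_in_known_vpn_range": True,
}
print(vpn_suspicion_score(session))  # 1.0: treat as likely circumvention
```

The brittleness is evident: as soon as a VPN provider's IP ranges fall off the published lists, one of the three signals disappears, which is exactly the rotation behavior the guidance acknowledges.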
Louis Hourany, a lecturer in Information Technology at Charles Sturt University, warned of a deeper problem: "When mainstream VPN services are blocked or degraded, users often migrate to lesser-known or free providers that may operate outside Australia's privacy expectations" [22]. Free VPN providers often monetize user data through advertising injection and telemetry harvesting. "Parents often assume VPN equals privacy," Hourany said, "but in some cases, the operator of the VPN may see far more than the social media platform ever could" [22].
Who Gets Hurt: Marginalized Youth and Collateral Damage
Critics argue the ban disproportionately affects young people who depend most on social media for community and support.
One in ten young Australians aged 16–24 identify as queer [23]. Research published in InSight+, the Medical Journal of Australia's analysis platform, found that 59% of queer young Australians had accessed mental health support through social media, 44% joined or followed social media groups specifically for LGBTQ+ communities, and 74% of young trans Australians reported that social media use improved their self-image [23].
More broadly, 73% of young Australians aged 16–25 regularly use social media to search for mental health information, and over 50% experiencing mental health challenges use social media as a substitute for professional support [23].
Amnesty International Australia called the ban "an ineffective quick fix that will not prevent online harms," arguing that the government should regulate platform behavior rather than ban children from access [24]. Monash University researchers described the ban as abandoning "LGBTIQA+ and marginalised youth," noting that social media plays a particularly important role for Indigenous children in remote areas, where it serves as a primary tool for social connection and self-expression [25].
The ban's legislative process drew criticism for speed. Advocacy groups reported that the consultation window was too narrow for meaningful input from affected communities, with some stakeholders given as little as a single day to assess the proposal [25].
The Steelman Case Against Stricter Enforcement
The strongest argument against tighter enforcement is not that children should have unrestricted social media access, but that enforcement itself creates harms the ban was meant to prevent.
Researchers at the University of Queensland argued in January 2025 that "banning social media won't fix Australia's youth mental health crisis," pointing to the lack of causal evidence and the risk that the ban diverts attention from systemic factors like underfunded mental health services [26]. Meta itself has stated that the blanket ban "failed to improve young people's safety and well-being" [4].
The displacement concern is supported by observable behavior. When mainstream platforms become inaccessible, young people migrate to less regulated alternatives. The rapid rise of Lemon8, Yope, and Coverstar in Australian app stores — platforms with less robust content moderation and safety infrastructure — illustrates this pattern [20]. Free VPN adoption exposes minors to data harvesting by operators with no obligation to comply with Australian privacy law [22].
The Cato Institute, a US-based libertarian think tank, characterized the ban as "a warning for online speech and security around the world," arguing that age verification infrastructure creates surveillance mechanisms that can be repurposed for broader censorship [27]. Proton, the Swiss privacy technology company, made a similar argument, warning that age assurance systems normalize mass identity verification in ways that threaten adult privacy [28].
Timothy Koskie, a research associate at the University of Sydney's Center for AI Trust and Governance, cautioned against premature judgments in either direction: "We don't have that data. That data takes a really long time to collect" [4]. The mental health effects the ban aims to address "developed over a pretty good amount of time," he noted, and meaningful assessment requires extended observation periods.
What Comes Next
The enforcement timeline is now concrete. eSafety Commissioner Inman Grant has stated that her office will decide by mid-2026 whether to initiate court proceedings against any of the five flagged platforms [6]. The Australian government also faces two legal challenges in the country's highest court over the ban's constitutionality [4].
Communications Minister Anika Wells has defended the policy, stating that "every social media account that we deactivate is an extra opportunity for young Australians to make a connection in real life" [4].
The next several months will test whether Australia's approach — the most aggressive age-based social media restriction any democracy has implemented — can be made to work through enforcement, or whether the gap between legislative intent and technological reality proves too wide to close with fines alone.
Sources (28)
- [1] Online Safety Amendment (Social Media Minimum Age) Act 2024 - Wikipedia (en.wikipedia.org)
Australian act of parliament prohibiting minors under 16 from holding social media accounts, passed 29 November 2024.
- [2] Social media age restrictions - eSafety Commissioner (esafety.gov.au)
Official eSafety Commissioner page on the social media minimum age obligation covering ten age-restricted platforms.
- [3] Australia's Social Media Ban for under 16s - Kennedys Law (kennedyslaw.com)
Legal analysis of the Act including penalty structure of up to A$49.5 million and the absolute nature of the ban without parental consent exceptions.
- [4] 2 months in, how are Australia's age restrictions for social media working? - NPR (npr.org)
Reports 4.7 million accounts suspended in first week, Meta removing 500,000 accounts, and a 15-year-old recovering her account via Face ID.
- [5] Big Tech has been a bit rubbish at enforcing Australia's kids social media ban - The Register (theregister.com)
Coverage of eSafety's March 2026 compliance report finding 70% of children still had accounts and platforms allowed unlimited age check attempts.
- [6] Five social media platforms flagged for compliance issues - eSafety Commissioner (esafety.gov.au)
eSafety flags Facebook, Instagram, Snapchat, TikTok and YouTube for significant compliance concerns and signals potential enforcement action by mid-2026.
- [7] Social Media Minimum Age Compliance Update March 2026 - eSafety Commissioner (esafety.gov.au)
Full compliance report detailing 23 information-gathering notices issued to 10 platforms and findings on age assurance gaps.
- [8] Australia's Social Media Ban and the eSafety Commissioner's Regulatory Guidance - DLA Piper (privacymatters.dlapiper.com)
Analysis of the tech-neutral approach and successive validation requirements for age assurance under the SMMA obligation.
- [9] Amid wave of kids' online safety laws, age-checking tech comes of age - Rappler (rappler.com)
Reports age estimation error dropping from 4.1 years in 2014 to 2.5 years currently per NIST testing, and vendor pricing at single-digit cents per check.
- [10] Opponents of Australia social media law get louder - Biometric Update (biometricupdate.com)
Coverage of facial estimation accuracy concerns at key legal thresholds and bias issues for darker skin tones and non-normative facial features.
- [11] Mental health - Australian Institute of Health and Welfare (aihw.gov.au)
Data showing diagnosable mental health conditions among 16-24 year olds rising from 26% in 2011-12 to 38.8% in 2020-21.
- [12] The Rise of Social Media and the Fall in Mental Well-Being Among Young Australians - Australian Economic Review (onlinelibrary.wiley.com)
Documents worsening mental well-being among Australians aged 15-24 beginning around 2007-2010, worse for young women.
- [13] Will restricting the age of access to social media reduce mental illness in Australian youth? - PubMed (pubmed.ncbi.nlm.nih.gov)
Concludes there is inadequate evidence to attribute the rise in youth mental illness to social media.
- [14] New data indicates moderation is key to social media use for teens - Orygen (orygen.org.au)
Australian youth mental health research institute data indicating moderation rather than elimination is key to healthy teen social media use.
- [15] Research publications on social media and adolescent mental health - OpenAlex (openalex.org)
Over 264,000 papers published on social media and adolescent mental health, peaking at 42,144 in 2025.
- [16] How do Australian social media users experience self-harm and suicide-related content? - BMC Public Health (link.springer.com)
National survey finding one-third of young Australians used social media to seek support for suicidal thoughts or self-harm.
- [17] Digital child protection in social networks: age verification and age-tiered regulation in Europe (pmc.ncbi.nlm.nih.gov)
European review finding 54% of French adolescents circumvent age verification and that pure bans fall short compared to age-tiered approaches.
- [18] Proposals to ban social media for children - UK House of Commons Library (commonslibrary.parliament.uk)
Analysis of UK proposals for under-16 social media ban and comparison with the Age Appropriate Design Code approach.
- [19] Open letter to tech firms to strengthen age checks - UK ICO (ico.org.uk)
UK Information Commissioner demands stronger age checks and better protection of children's data in March 2026.
- [20] VPNs won't save teens from social media ban - Information Age (ia.acs.org.au)
Reports free VPN apps entering Australia's top 30 and alternative platforms Lemon8, Yope, and Coverstar topping app stores.
- [21] Australia expects platforms to stop under-16s from using VPNs - TechRadar (techradar.com)
Details on device fingerprinting, metadata analysis, and SIM-origin correlation methods platforms must use to detect VPN circumvention.
- [22] The biggest risks emerge when people turn to untrusted services - Charles Sturt University (news.csu.edu.au)
Expert warns free VPN operators may see far more user data than social media platforms, and that parents wrongly assume VPN equals privacy.
- [23] Queer youth at risk of losing mental health support access - InSight+ MJA (insightplus.mja.com.au)
Reports 59% of queer young Australians accessed mental health support via social media and 74% of young trans Australians say it improves their self-image.
- [24] Social media ban an ineffective quick fix - Amnesty International Australia (amnesty.org.au)
Amnesty argues government should regulate platform behavior rather than ban children from access.
- [25] Under-16s social media ban abandons LGBTIQA+ and marginalised youth - Monash University (lens.monash.edu)
Argues the ban severs online lifelines for marginalized young people and deepens isolation for Indigenous teens in remote areas.
- [26] Banning social media won't fix Australia's youth mental health crisis - University of Queensland (news.uq.edu.au)
Researchers argue ban diverts attention from systemic factors like underfunded mental health services.
- [27] Australia's Under 16 Social Media Ban: A Warning - Cato Institute (cato.org)
Argues age verification infrastructure creates surveillance mechanisms that can be repurposed for broader censorship.
- [28] Why the Australia social media ban may worsen privacy - Proton (proton.me)
Warns that age assurance systems normalize mass identity verification that threatens adult privacy.