EU Regulators Find Meta Failing to Prevent Underage Access to Facebook and Instagram
TL;DR
The European Commission issued preliminary findings on April 29, 2026 that Meta is violating the Digital Services Act by failing to prevent children under 13 from accessing Facebook and Instagram, with regulators estimating 10–12% of users on those platforms are underage. The case opens the door to fines of up to 6% of Meta's $201 billion annual revenue—roughly $12 billion—while raising broader questions about whether any age verification system can work at scale without creating new privacy and civil liberties harms.
On April 29, 2026, the European Commission issued a formal preliminary finding that Meta Platforms is violating the Digital Services Act by failing to stop children under 13 from creating and using accounts on Facebook and Instagram. The finding marks the most significant child-safety enforcement action under the DSA to date, and it puts Meta in a familiar position: facing potentially massive EU penalties while insisting its existing protections are adequate.
The case raises questions that extend well beyond Meta. Across Europe, governments are racing to ban or restrict children from social media, regulators are building new age verification infrastructure, and researchers are still debating what the evidence actually shows about harm. Meanwhile, civil liberties groups warn that the proposed solutions could create problems worse than the ones they're meant to fix.
What the Commission Found
The Commission's preliminary findings center on three failures:
No effective age-gating at signup. Both Facebook and Instagram set 13 as their minimum age in their terms of service. But when creating an account, a child can simply enter a false birth date, and Meta has "no effective controls in place to check the correctness of the self-declared date of birth."
Inadequate detection of existing underage accounts. The tool for reporting a minor's account is "difficult to use" and requires up to seven clicks to access the reporting form. The Commission found that Meta does not promptly identify or remove children who have already created accounts.
Insufficient risk assessment. The Commission concluded that Meta "disregarded readily available scientific evidence" about younger children's vulnerability to harms on its platforms, and failed to conduct an adequate risk assessment as required by the DSA.
The Commission cited external estimates suggesting roughly 10–12% of Instagram and Facebook users in the EU are under 13, a figure that contradicts Meta's own internal assessments.
How Much Money Is at Stake
Under the DSA, a confirmed violation can result in fines of up to 6% of a company's global annual revenue. Meta reported full-year 2025 revenue of $200.97 billion, meaning the theoretical maximum fine could exceed $12 billion.
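The headline figure is straightforward to verify from the two numbers above (the DSA's 6% ceiling and Meta's reported 2025 revenue):

```python
# Theoretical DSA fine ceiling: up to 6% of global annual revenue.
revenue_2025 = 200.97e9  # Meta's reported full-year 2025 revenue, in USD
dsa_cap_rate = 0.06      # maximum fine rate under the Digital Services Act

max_fine = revenue_2025 * dsa_cap_rate
print(f"${max_fine / 1e9:.2f} billion")  # prints "$12.06 billion"
```

Six percent of $200.97 billion is about $12.06 billion, which is where the "roughly $12 billion" figure comes from.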
That would dwarf anything Meta has paid before—but Meta has paid before. Since 2021, EU regulators have imposed roughly €3.6 billion in fines on the company across multiple cases:
The largest single penalty was €1.2 billion in May 2023 for illegally transferring EU user data to the United States. In 2024, Meta paid €798 million for tying Facebook Marketplace to its social network and €91 million for storing passwords in plaintext. In April 2025, it was fined €200 million under the Digital Markets Act for its "pay or consent" advertising model.
Whether these fines have changed Meta's behavior is debatable. The company has modified specific practices—suspending EU-to-US data transfers, redesigning its ad consent flow—but critics note that billions in fines represent a small fraction of annual revenue and have not fundamentally altered Meta's business model or approach to user data.
What Meta Has Built—and What It Hasn't
Meta argues it has invested significantly in age verification and teen safety. The company uses AI-driven detection systems that analyze behavior, profile data, and birthday messages to flag potential age misrepresentation. It places teen accounts in a restricted mode by default, with private settings, limited messaging from strangers, and content filtering.
In 2026, Meta launched AgeKey, which the company describes as its most serious attempt at age verification at scale. The system offers multiple verification paths: government ID upload, video selfies processed through Yoti's AI age-estimation technology (which claims a 99.3% accuracy rate for correctly identifying 13-to-17-year-olds as under 21), and social vouching.
Meta has also argued for a different structural approach: centralized age verification at the app-store level, which would shift responsibility to Apple and Google and avoid requiring each platform to independently collect identity data.
The Commission was not persuaded. Its preliminary view is that these measures, taken together, remain insufficient.
How Do Competitors Compare?
Meta is not the only platform under scrutiny, but it is the first to face formal DSA charges on child safety.
TikTok announced in January 2026 that it would deploy an internal age-detection tool across Europe that uses behavioral signals—video tone, interaction patterns, engagement style—to identify underage users. TikTok had already faced a €345 million GDPR fine in 2023 for mishandling children's data.
Snapchat prevents users aged 13–17 from changing their birth year to 18+, a design choice intended to reduce circumvention through profile edits.
YouTube uses a combination of age self-declaration, supervised accounts for children, and AI-based content restriction, though it has faced its own regulatory actions, including a $170 million FTC fine in 2019 for collecting children's data.
No platform has yet satisfied EU regulators that its age verification system is fully effective. The Commission has adopted a recommendation for member states to deploy an EU age verification app by the end of 2026, built on zero-knowledge proof cryptography that allows users to prove they meet an age threshold without revealing personal data.
The Privacy Paradox of Age Verification
The most potent critique of age verification comes not from platforms defending their business models, but from civil liberties organizations warning about the consequences for all users.
The Electronic Frontier Foundation has catalogued ten specific dangers of mandatory age verification:
Adults without government ID—approximately 15 million U.S. citizens lack a driver's license, and 2.6 million have no government photo ID at all—risk being locked out of platforms entirely. AI age-estimation algorithms perform worse on people of color, transgender individuals, and people with disabilities. Domestic abuse survivors, journalists, and whistleblowers depend on online anonymity that verification systems undermine. Young people may lose access to health information, LGBTQ+ community resources, and mental health support.
"You can't collect biometrics on a kid," Johnny Ayers, CEO of identity verification company Socure, told Fortune, describing the fundamental contradiction: verifying age without collecting identifiable information is nearly impossible with current technology.
The EU's proposed solution—zero-knowledge proofs integrated into digital identity wallets—attempts to thread this needle. Under this system, a user would prove they are over a certain age using official identification, but the platform would receive only a binary yes/no confirmation, not the underlying data. Whether this can work reliably at scale across 250 million monthly active EU users remains untested.
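The yes/no flow described above can be illustrated with a deliberately simplified sketch. This is not real zero-knowledge cryptography and is not drawn from the EU wallet specification—the function names and the HMAC-based signing are hypothetical stand-ins—but it shows the key property: the age check happens on the issuer's side, and the platform only ever sees a signed boolean, never the birth date.

```python
import hashlib
import hmac
import json
from datetime import date

# Hypothetical shared secret standing in for the issuer's signing key.
# A real system would use asymmetric signatures (or actual ZK proofs).
ISSUER_KEY = b"issuer-private-key"


def issue_attestation(birth_date: date, threshold: int, today: date) -> dict:
    """Issuer (e.g. a national identity wallet) checks the birth date
    locally and signs only the resulting over/under claim."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    claim = {"over_threshold": age >= threshold, "threshold": threshold}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    # The attestation carries no birth date and no identity data.
    return {"claim": claim, "sig": sig}


def platform_verify(attestation: dict) -> bool:
    """Platform checks the issuer's signature; it learns only a boolean."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        return False  # tampered or forged attestation
    return attestation["claim"]["over_threshold"]


# A user born in 2015 fails an over-13 check in April 2026:
att = issue_attestation(date(2015, 6, 1), threshold=13, today=date(2026, 4, 29))
print(platform_verify(att))
```

The hard part, which this sketch omits entirely, is achieving the same property without any trusted channel between issuer and platform—that is what the zero-knowledge construction is for.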
What the Science Says—and Doesn't Say
The debate over whether social media causes measurable psychological harm to young users has generated an enormous body of research. More than 275,000 academic papers have been published on social media and adolescent mental health, with output peaking at over 44,000 papers in 2025.
The 2026 World Happiness Report concluded that "social media is harming adolescents at a scale large enough to cause changes at the population level," linking the rapid adoption of always-available social media in the early 2010s to population-level increases in mental illness by the mid-2010s in many Western nations. The report found "overwhelming evidence of severe and widespread direct harms" such as sextortion and cyberbullying, and "compelling evidence of troubling indirect harms" such as depression and anxiety.
But the picture is not uniform. The same report noted that in parts of the Middle East and South America, the relationship between social media use and well-being is more positive, and youth well-being has not fallen despite heavy usage. The average U.S. teen now spends nearly five hours per day on social media, including roughly two hours on YouTube, 1.5 hours on TikTok, and one hour on Instagram.
Meta's own internal research, disclosed by whistleblower Frances Haugen in 2021, showed the company was aware of specific harms. Internal studies found that one in three teen girls said Instagram made body image issues worse; more than 40% of users who reported feeling "unattractive" said the feeling began on the app; and among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the issue to Instagram.
More recently, additional whistleblowers—two current and two former Meta employees—shared internal documents with Congress alleging the company downplayed child safety research and quietly shifted its internal research policies after Haugen's disclosures.
Researchers who urge caution note that many studies are cross-sectional (capturing a single point in time rather than tracking changes), rely on self-report data, and struggle to establish causation rather than correlation. The relationship between social media and mental health varies significantly by age, gender, socioeconomic status, and type of platform use.
The Patchwork Across Europe
While the Commission pursues Meta at the EU level, individual member states are moving at different speeds and in different directions on child social media access.
France has already passed a social media ban for children under 15 and plans to introduce age verification by September 2026. Spain is considering a stricter under-16 threshold. Denmark has struck a political deal on restricting access for children. Portugal has approved a law banning under-16s from social media, with children aged 13–15 able to access platforms only with verified parental consent. Greece plans to enforce its own ban starting in 2027. Germany's ruling CDU-SPD coalition has expressed support for banning minors from social media, with a decision postponed until mid-2026. The Netherlands is pushing for an under-15 ban at both national and EU levels. Poland aims to launch an age verification tool by the end of 2026.
At least 11 EU member states have formally called for social media age verification for users under 15. But the age thresholds vary—some countries set 13, others 15 or 16—and enforcement mechanisms differ substantially. This fragmentation creates compliance challenges for platforms operating across 27 jurisdictions and raises questions about whether children in some countries will have stronger protections than others.
There is increasing momentum toward harmonizing a minimum age requirement at 16 across the EU, but no binding agreement has been reached.
What Full Compliance Would Look Like
If Meta were required to genuinely prevent under-13 access—rather than relying on self-declared birth dates—several technical approaches exist, each with significant trade-offs:
AI-based age estimation uses facial analysis or behavioral signals to infer a user's age. Meta's partner Yoti claims high accuracy rates, but the technology raises bias concerns and is not reliable enough to serve as a sole gatekeeper.
Government ID verification is the most definitive method but creates the largest privacy risks, excludes users without ID, and faces resistance from users reluctant to upload documents to a social media company.
Parental consent flows require a verified adult to authorize a child's account, but they are easily circumvented and assume all children have accessible, cooperative parents—excluding, for instance, youth in foster care.
The EU's proposed digital wallet integration would use zero-knowledge proofs to confirm age without sharing personal data, but the infrastructure is still being built and depends on member states rolling out digital identity wallets.
At Meta's scale of roughly 250 million EU monthly active users, any of these systems would require significant engineering investment, carry ongoing operational costs, and introduce friction that could reduce user growth—a metric directly tied to Meta's advertising revenue.
What Happens Next
Meta now has the opportunity to respond to the Commission's preliminary findings before a final decision is issued. The company has already signaled disagreement, stating that "Instagram and Facebook are intended for people aged 13 and older" and that it "has measures in place to detect and remove accounts from anyone under that age."
If the Commission confirms its findings, the resulting fine—however large—will likely matter less than the remedial measures Meta could be ordered to implement. A binding order to deploy effective age verification could force architectural changes to how Facebook and Instagram onboard users across the EU, with potential ripple effects for how other platforms approach the same problem.
The outcome will also test whether the DSA's enforcement framework can achieve what years of GDPR fines have not: a structural change in how the world's largest social media company handles its youngest and most vulnerable users.
Sources (24)
- [1] EU says Meta is failing to keep underage users off Facebook and Instagram (npr.org)
  The European Union accused Meta on Wednesday of failing to stop underage users from accessing Facebook and Instagram, in violation of the bloc's digital rules.
- [2] EU finds Meta violates digital rules by not doing enough to keep children off Instagram and Facebook (euronews.com)
  The European Commission has preliminarily found Meta's Instagram and Facebook in breach of the Digital Services Act for failing to prevent minors under 13.
- [3] Commission preliminarily finds Meta in breach of Digital Services Act for failing to prevent minors under 13 from using Instagram and Facebook (ec.europa.eu)
  When creating an account, minors below 13 can enter a false birth date with no effective controls in place to check the correctness of the self-declared date of birth.
- [4] Meta told it's violating EU law by not doing enough to keep children off Facebook and Instagram (cnbc.com)
  Meta disagreed with the preliminary findings, stating Instagram and Facebook are intended for people aged 13 and older and the company has measures to detect and remove underage accounts.
- [5] Meta isn't taking sufficient measures to prevent children from accessing Facebook and Instagram, states the EU (businessstory.org)
  The European Commission said roughly 10–12% of children under 13 are using Instagram and Facebook, contradicting Meta's internal assessments.
- [6] EU catches Meta letting under-13s slip through on Instagram and Facebook (ppc.land)
  The European Commission found Meta's age enforcement measures are largely ineffective, with the company disregarding readily available scientific evidence about children's vulnerability.
- [7] Meta Reports Fourth Quarter and Full Year 2025 Results (prnewswire.com)
  Meta's full year 2025 revenue was $200.97 billion, representing a 22% year-over-year increase. Family daily active people was 3.58 billion on average for December 2025.
- [8] 1.2 billion euro fine for Facebook as a result of EDPB binding decision (edpb.europa.eu)
  The European Data Protection Board fined Meta €1.2 billion—the largest GDPR fine to date—for unlawfully transferring EU Facebook users' personal data to the United States.
- [9] €3.77bn in fines: Last year's Big Tech bill in Europe (euperspectives.eu)
  Overview of Meta's accumulated EU penalties across GDPR, DMA, and competition cases totaling billions of euros since 2021.
- [10] The EU's DMA Fine Against Meta: GDPR in Disguise? (itif.org)
  Meta received a €200M fine in April 2025 under the Digital Markets Act for its pay or consent advertising model that forced users to accept personalized ads or pay a subscription.
- [11] AgeKey Explained: Meta's New Age Verification for Instagram and Facebook (wired-parents.com)
  Meta's AgeKey offers multiple verification paths including government ID upload, video selfies processed through Yoti's AI age estimation, and social vouching.
- [12] TikTok to Roll Out Stronger Age Verification Across the EU (techrepublic.com)
  TikTok announced plans to deploy an internal age-detection tool in Europe in January 2026 using behavioral signals to identify and block underage users.
- [13] How have other platforms responded to age-gate circumvention? (factually.co)
  Snapchat prevents users aged 13–17 from changing their birth year to 18+, a design choice to keep age-appropriate defaults and reduce circumvention.
- [14] EU countries push under-15 social media ban, Brussels presents age verification app (euronews.com)
  Governments across Europe are racing to block children under 15 from social media. France has passed a ban, Denmark struck a deal, Spain weighs an under-16 threshold.
- [15] What to Know About the E.U.'s New Age Verification App for Social Media (time.com)
  The EU's age verification app uses zero-knowledge proof cryptography, allowing users to prove they meet an age threshold without sharing personal data.
- [16] 10 (Not So) Hidden Dangers of Age Verification (eff.org)
  Age verification mandates create barriers along lines of race, disability, gender identity, and socioeconomic class, with AI algorithms showing documented racial bias.
- [17] Online Age Verification Laws Could Do More Harm Than Good (scientificamerican.com)
  Critics argue age verification systems strip anonymity protections from abuse survivors, journalists, and activists while creating attractive targets for hackers.
- [18] Social media companies are fighting the 'age verification trap' (fortune.com)
  Platforms face a fundamental contradiction: enforcing age verification requires collecting extensive personal data, violating the privacy rights they're meant to protect.
- [19] OpenAlex: Research Publications on Social Media and Adolescent Mental Health (openalex.org)
  More than 275,000 academic papers published on social media and adolescent mental health, peaking at 44,062 papers in 2025.
- [20] Social media is harming adolescents at a scale large enough to cause changes at the population level (worldhappiness.report)
  The 2026 World Happiness Report found overwhelming evidence of severe direct harms such as sextortion and cyberbullying, and compelling evidence of indirect harms like depression and anxiety.
- [21] Facebook documents show how toxic Instagram is for teens, Wall Street Journal reports (cnbc.com)
  Internal Facebook research found one in three teen girls said Instagram made body image issues worse; among teens with suicidal thoughts, 13% of British users traced the issue to Instagram.
- [22] Research on Instagram and teens: Summaries from the Facebook Files (fairplayforkids.org)
  More than 40% of Instagram users who reported feeling unattractive said the feeling began on the app, and 25% who felt not good enough said it started on Instagram.
- [23] Whistleblowers Say Meta Buried Research on Child Safety (techstory.in)
  Two current and two former Meta employees shared documents with Congress alleging the company downplayed child safety research and shifted internal policies after Haugen's disclosures.
- [24] The impact of Facebook and Instagram on teens isn't so clear (npr.org)
  Researchers tend to agree the evidence is complex, contradictory and ultimately inconclusive on whether social media directly causes teen mental health problems.