Meta Removes Hundreds of Thousands of Accounts to Comply with Under-16 Ban
Meta Platforms Inc. has blocked and deactivated roughly 544,000 accounts in Australia as part of compliance with the nation’s world-first social media ban for users under the age of 16, the company confirmed in a compliance update. The move comes after Australian legislation took effect on 10 December 2025, requiring major social media companies to ensure that children under 16 do not hold accounts on their services, with substantial penalties for those that fail to do so.
According to Meta’s figures, the deactivations occurred between 4 December and 11 December 2025, covering 330,639 Instagram accounts, 173,497 Facebook accounts and 39,916 Threads accounts believed to belong to users under the age threshold.
Australia’s Ban: A World First and Ambitious Public-Safety Initiative
Australia’s social media minimum age law — which applies to platforms including Facebook, Instagram, YouTube, TikTok, Snapchat, Reddit, X, Twitch and Threads — is aimed at reducing exposure of children to algorithm-driven content, online harms and addictive social media behaviours. Platforms were given about a year to implement age-verification measures before enforcement began.
Under the legislation, companies that fail to take “reasonable steps” to prevent under-16s from holding accounts can face fines of up to A$49.5 million (approximately US $33 million) per breach.
The policy, championed by the Albanese government as a way to protect young people’s wellbeing, represents one of the most stringent approaches globally to regulating youth access to online social networks.
Meta’s Compliance and Its Criticisms of the Law
While Meta has complied with legal requirements, the company has strongly criticised aspects of the ban’s implementation. In a blog post accompanying its account-removal update, Meta said the ban has “not met the Australian government’s objectives of increasing the safety and wellbeing of young Australians.”
Meta’s key criticisms include:
- Challenges with age verification: The tech giant argues there is no industry-wide standard for determining users’ ages online, making it difficult to reliably separate under-16s from adults.
- Migration to less-regulated platforms: Meta warned of a “whack-a-mole” effect, where teens evade restrictions by joining smaller or alternative apps not yet covered by the ban, such as Lemon8 or Yope.
- Algorithm exposure persists: The company contends that teenagers can still access algorithmically recommended content even without holding accounts, particularly in logged-out browsing modes that still tailor content to user interests.
Meta has urged the Australian government to explore alternatives — such as app-store-level age verification and parental-consent mechanisms — rather than blanket restrictions, arguing these could offer more consistent protection across the digital ecosystem.
Practical Impact on Young Australians and Families
The mass deactivations have had immediate effects on many Australian families. Users who believed they were compliant, or who were just above the age threshold, found their accounts removed, highlighting how difficult accurate age assessment online remains without reliable verification tools.
Supporters of the policy argue the move is necessary to protect children from harmful content, cyberbullying and excessive screen time. Critics, including parent groups, educators and some digital-rights advocates, worry that it may isolate vulnerable teens who use social media for community, learning and social connection, a risk they say falls hardest on marginalised youth.
Debate Over Effectiveness and Safety Outcomes
Early data suggests a complex picture. While account removals exceed half a million in Meta’s ecosystem alone, analysts and social researchers caution that:
- Some under-16 users still find workarounds or use platforms through logged-out access.
- Others simply migrate to platforms not initially named in the ban or to apps that lack rigorous moderation and safety features.
- Mental health experts stress that offline risks — such as isolation and reduced access to supportive communities — may accompany online safety gains for some young people.
Communications Minister Anika Wells has defended the ban, saying it is designed to shift cultural norms around youth and technology use, even as the government acknowledges the implementation is evolving and enforcement will vary across platforms.
Global Attention and Potential Influence Abroad
Australia’s pioneering approach has drawn international interest, with other governments — notably in the UK — reportedly under pressure to consider similar restrictions for youth social media access. Policymakers abroad are watching this real-world experiment closely, as questions persist about efficacy, enforcement challenges and potential unintended consequences.
Proponents argue Australia’s policy could inform broader global standards for online age safety, while opponents warn that legal bans alone cannot solve complex issues tied to youth wellbeing and digital engagement.
Looking Ahead: Compliance, Challenges and Revisions
Meta has indicated that ongoing compliance will be a “multi-layered process”, requiring technological refinement, stronger age-assurance mechanisms and likely continuous dialogue with authorities.
The Albanese government has suggested future data releases from the eSafety Commissioner will provide comprehensive statistics on how the ban has impacted youth access across all mandated platforms, including TikTok, Reddit and YouTube — which are also required to comply or face fines.
The social media ban — lauded by some as a bold child-protection measure and criticised by others as impractical or overly restrictive — continues to spark debate about how democracies can safeguard children online without compromising connection, education and digital literacy.