Australian social media ban for under-16s: when it starts, how it works – and which apps are on the list


Australia is about to flip the switch on one of the world’s most radical online safety experiments: a nationwide rule that kids under 16 shouldn’t have social media accounts at all.

Here’s a detailed look at when it begins, how it’s meant to work, and which platforms are caught by the new rules.

When does the ban start – and who does it affect?

The new minimum age of 16 for social media accounts comes into force on Wednesday 10 December 2025.

From that date, “age-restricted social media platforms” must take reasonable steps to stop:

  • Australians under 16 from creating new accounts, and
  • existing under-16 users from keeping their accounts.

It’s important to stress:

  • Kids and parents will not be fined. The law targets platforms, not users.
  • The rules apply to anyone in Australia under 16, including teens travelling overseas – Meta has already confirmed that under-16s on holiday will still be blocked from Facebook and Instagram.

Many platforms have effectively started early: Meta began shutting down under-16 accounts on Instagram, Threads and Facebook from 4 December, almost a week before the formal start date.


Which apps are being banned for under-16s?

The eSafety Commissioner has published a list of services it currently considers age-restricted social media platforms. As of late November 2025, that list includes:

  • Facebook
  • Instagram
  • Snapchat
  • Threads
  • TikTok
  • YouTube
  • X (Twitter)
  • Reddit
  • Twitch
  • Kick

These platforms must remove under-16 accounts and block new sign-ups from Australian users under 16.

What’s not currently restricted?

Some big names are not on the restricted list – at least for now. According to eSafety and media reports, the following services are not currently age-restricted under the law:

  • Roblox
  • Pinterest
  • Discord
  • Messenger
  • WhatsApp
  • Steam and Steam Chat
  • Google Classroom
  • GitHub
  • LEGO Play
  • YouTube Kids

However, there’s an important catch:

  • The list can change. If large numbers of teens migrate to these “safer” apps and start using them as social media, they can be reviewed and potentially added to the restricted list later.

The law doesn’t name specific brands forever; instead, it sets criteria for what counts as a social media platform (a service whose significant purpose is enabling online social interaction, where users can post material and link to or interact with other users), and eSafety applies those criteria to individual services.


How will the ban actually work?

1. Platforms must verify age

Each platform must introduce age-assurance systems for Australian users, but the law doesn’t force a single method. Different companies are testing different tools:

  • Date-of-birth prompts for new sign-ups (already common, but now legally significant).
  • AI-based age estimation, using signals like usage patterns or facial analysis (e.g. TikTok, Snapchat).
  • ID checks, where users upload a driver’s licence or other document to a third-party age-verification provider.
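As an illustration only — not any platform’s actual implementation — the simplest of these methods, the date-of-birth prompt, reduces to a whole-years age calculation against the minimum age:

```python
from datetime import date

MINIMUM_AGE = 16  # Australia's new social media minimum age


def age_on(dob: date, today: date) -> int:
    """Return a person's age in whole years on a given date."""
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return today.year - dob.year - (0 if had_birthday else 1)


def may_hold_account(dob: date, today: date) -> bool:
    """True if the user meets the minimum age on the given date."""
    return age_on(dob, today) >= MINIMUM_AGE


# A user born 11 December 2009 is still 15 on the start date...
print(may_hold_account(date(2009, 12, 11), date(2025, 12, 10)))  # False
# ...and becomes eligible on their 16th birthday.
print(may_hold_account(date(2009, 12, 11), date(2025, 12, 11)))  # True
```

The obvious weakness — and the reason the law pushes platforms toward stronger age assurance — is that a self-declared date of birth can simply be false.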

This is already controversial: cybersecurity and privacy experts are especially worried about selfie-based verification and the storage of ID data, even if it’s handled by third-party services.

2. Under-16 accounts must be deactivated

Platforms are required to:

  • identify likely under-16 accounts,
  • deactivate or “freeze” them, and
  • prevent them from being used until the account holder turns 16.

Meta, TikTok and Snapchat have all indicated that affected users will be able to:

  • download their data (photos, messages, posts), and
  • have their accounts re-enabled automatically once they are old enough.

3. Appeals for those wrongly blocked

If a user is 16 or older but is wrongly flagged as underage by an AI or age-check system, they can appeal the decision and prove their age, usually by providing ID.

This is expected to be a major pain point in the early months, especially for:

  • 16–17-year-olds who look younger than they are,
  • teens with limited ID documents, and
  • families uncomfortable with sharing ID with tech firms.

4. No penalties for kids – but big fines for platforms

Under the Online Safety Act amendments and the new Social Media Minimum Age rules, only platforms face penalties:

  • Courts can impose fines of up to A$49.5 million (150,000 penalty units) on companies that don’t take “reasonable steps” to keep under-16s off their services.

There are no criminal offences or fines for individual teenagers or their parents if they circumvent the system.
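As a quick sanity check of the headline figure, the two numbers quoted above imply a Commonwealth penalty unit worth A$330:

```python
# Figures from the article: fines of up to 150,000 penalty units,
# quoted as A$49.5 million. That implies A$330 per penalty unit.
PENALTY_UNITS = 150_000
PENALTY_UNIT_VALUE_AUD = 330  # implied by the two quoted figures

max_fine = PENALTY_UNITS * PENALTY_UNIT_VALUE_AUD
print(f"A${max_fine:,}")  # A$49,500,000
```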


Why is Australia doing this?

The government frames the law as a mental-health and safety measure for young people.

Officials and supporters point to:

  • research and leaked internal documents suggesting some platforms (notably Meta’s apps) have negative effects on teen mental health, body image and sleep,
  • evidence of exposure to harmful content — self-harm, eating disorders, violent and sexual material — pushed by recommendation algorithms,
  • cyberbullying, grooming and exploitation risks, and
  • the impact of constant “doomscrolling” on attention and wellbeing.

Prime Minister Anthony Albanese has pitched the ban as giving teens “more space to be kids”, even encouraging them to use the time to read books, play sport or learn an instrument instead of being glued to their phones.

The government has also launched a national information campaign, “For The Good Of”, to explain the changes to parents, schools and young people.


What are the main criticisms?

Despite broad public concern about kids’ screen time, the law is far from universally welcomed.

Privacy and data risks

Experts warn that large-scale age verification could:

  • normalise facial scanning and ID-checks across the web,
  • create new databases that might be vulnerable to breaches or misuse, and
  • undermine anonymity for vulnerable users.

Reddit, for example, has agreed to comply but called the scheme “legally erroneous” and “arbitrary”, arguing its pseudonymous, forum-based model isn’t traditional social networking and that over-collection of data conflicts with its privacy-first approach.

Freedom of expression and access to information

Critics, including youth advocates and some media voices, say cutting teens off from major platforms:

  • restricts their access to news and political information,
  • sidelines their voices in public debate, and
  • assumes they’ll simply switch to TV or newspapers, which many don’t use at all.

One young journalist argued the ban will leave teenagers “in the dark on news and politics”, noting that TikTok, Instagram and YouTube are primary news sources for many under-18s.

Effectiveness and “whack-a-mole” concerns

There are also doubts about whether the ban will work in practice:

  • Teens may lie about their age, use VPNs, or move to smaller, less-regulated apps.
  • There are already reports of young users rushing to niche platforms and gaming services to stay connected.
  • Some critics fear a “whack-a-mole” effect as regulators chase migration from one app to another.

Tech companies have warned of unintended consequences, including disruptions to services that rely on social-media log-ins for authentication or communication.


How will enforcement be monitored?

The eSafety Commissioner, Julie Inman Grant, is in charge of overseeing compliance. Platforms must:

  • demonstrate how their age-assurance systems work,
  • show they are taking “reasonable steps” to identify and remove under-16 accounts, and
  • provide data on implementation and impact.

Australia has partnered with Stanford University and other researchers to study the outcomes over at least two years, with the results likely to influence debates in the UK, EU and beyond.

Government officials admit enforcement won’t be perfect on day one; instead, they describe it as a “progressive, risk-based rollout”, with the heaviest scrutiny on the largest platforms where children are most active.

What should families and teens do now?

For families in Australia, the immediate practical steps are:

  • Check which apps your child uses against the restricted list.
  • Talk about what will happen to their accounts from 10 December — including data downloads and deactivation.
  • Discuss alternative ways to stay in touch with friends (e.g. messaging apps that aren’t yet restricted, or in-person meet-ups).
  • Make sure kids know where to get help if they’re struggling with the change (services like Headspace and Kids Helpline are being promoted alongside the ban).

The coming months will show whether Australia’s experiment can genuinely make young people safer — or whether it simply reshapes their online lives in ways no one fully expects yet.

Either way, the under-16 social media ban is poised to become a global test case for how far governments can, and should, intervene in the relationship between teenagers, tech platforms and the algorithms that shape their daily lives.

By Admin
7 years in the field, from local radio to digital newsrooms. Loves chasing the stories that matter to everyday Aussies - whether it’s climate, cost of living or the next big thing in tech.