Australia is set to roll out a world-first legal requirement to restrict children under 16 from using many popular social-media platforms. Here’s a breakdown of how it will work, who it affects, and what parents and teens should prepare for.
What the law says
- The legislation is known as the Online Safety Amendment (Social Media Minimum Age) Act 2024.
- It amends the broader Online Safety Act 2021 to require social-media services to take “reasonable steps” to prevent users under 16 from creating or maintaining accounts.
- The law takes effect on 10 December 2025.
Which platforms are covered
- Platforms categorised as “age-restricted social media services” will be required to comply with the rules.
- As of the announcement, the list includes Facebook, Instagram, TikTok, Snapchat, X (formerly Twitter), YouTube, Reddit and, most recently, the streaming platform Twitch.
- Some services are excluded: platforms whose sole or primary purpose is messaging, education or gaming, along with certain supportive services, may be exempt.
How it will be enforced
- Platforms must take “reasonable steps” to establish users’ ages and prevent under-16s from holding accounts, and they must be able to show the regulator that those steps are in place.
- The national regulator, the eSafety Commissioner, will monitor compliance and can impose fines up to A$49.5 million for systemic non-compliance.
- Platforms will use age-assurance tools such as ID verification, video selfies, facial age estimation or signals from data they already hold about accounts.
What changes for teens and parents
- From 10 December, Australian users under 16 will not be allowed to create new accounts on those age-restricted platforms.
- Platforms will need to deactivate or block existing accounts held by under-16s; some have already started giving affected users notice.
- Teens can still use many online services: apps not covered by the ban, such as certain educational or messaging services, remain accessible.
- Parents and carers are encouraged to discuss the changes with their children: what the ban means, which alternatives remain available, and how to prepare.
Challenges and concerns
- Age-verification technology is imperfect: platforms acknowledge there may be errors (e.g., mistakenly blocking 16- or 17-year-olds).
- Critics say the law may push younger users to less-regulated or underground platforms rather than reduce risk.
- Balancing privacy with safety is tricky: the law discourages blanket age verification of all users yet still requires platforms to take targeted steps.
- Some argue that simply banning access does not replace education, digital literacy and support for young people online.
Why Australia did this
- The government cites concerns about the mental health, wellbeing, attention and sleep of young people exposed to social media’s design features (notifications, algorithms, peer pressure) at an early age.
- By delaying full account access until 16, the policy aims to give younger teens more time to develop digital literacy and resilience before engaging in highly interactive, algorithm-driven environments.
What happens next
- Platforms must finalise their compliance plans ahead of the December deadline; some are already doing so.
- Parents and children should review existing accounts, back up any data they want to keep, and plan for the transition.
- Schools, education services and youth organisations will need to adjust, and alternative digital spaces for children under 16 may need support.
- The eSafety Commissioner will publish guidance, FAQs and updates to help stakeholders understand and implement the changes.
Final thought
Australia’s plan to restrict social-media accounts for under-16s is pioneering — and steeped in risk, both technical and social. If implemented well, it could reshape how young people engage online and how platforms treat younger users. But success will depend on enforcement, technology, education and the support given to teens navigating other digital spaces.