Australia’s Social Media Ban, 3 Months In

Published
30 Mar 2026
From teens reporting feeling free to a legal challenge about the validity of the law, there has been a lot of action following the December implementation of Australia’s social media ban.

We initially published blog posts covering the content of Australia’s social media ban, plus a compliance checklist. But, three months in, we wanted to take stock of what’s happened since Australia banned social media for kids under 16. Here’s our timeline so far:

Month One: Reddit Challenges The Social Media Ban

Reddit has reportedly filed a lawsuit challenging the law’s legality, arguing that it infringes on Australians’ freedom of opinion and expression (implied in the constitution). It has also argued that it is not a social media site and therefore should not be included in the list of platforms covered by the ban.

The challenge will likely take years to move through Australia’s court system, so we’ll keep an eye on it as things progress and report back if it leads to any changes to the social media ban.

Month One: Other Countries Consider Social Media Bans

Naturally, the social media ban has attracted a divided response internationally. Tech giants, predictably, were critical of the move, while many other countries outlined plans to implement their own social media bans. Countries seriously considering bans at this point included Canada, the UK, Malaysia, Denmark, and France. Malaysia and Denmark have signalled that their bans may come in as soon as this year.

Month One: UNICEF and Amnesty Call For Stronger Protections, Not a Ban

UNICEF and Amnesty both argued that the ban won’t fix the systemic issues young people face online. Instead, the organisations think that forcing tech giants to make online platforms safer is a better move. Amnesty went so far as to call the ban an ‘ineffective quick fix’.

“A ban is an ineffective quick fix. What is needed are robust safeguards to ensure social media platforms stop exposing users to harm through their relentless pursuit of user engagement and their exploitation of people’s personal data. These systems are pervasive and require stronger technical and regulatory measures to adequately protect users.” – Amnesty International 

Month Two: eSafety Commissioner Confirms 4.7 Million Social Media Accounts Restricted

The eSafety Commissioner published a report in mid-January covering the first month of the teen social media ‘ban’. The report confirmed that about 4.7 million accounts were restricted in the first half of December after it was confirmed they belonged to children under 16.

“I am very pleased with these preliminary results,” eSafety Commissioner Julie Inman Grant said. “It is clear that eSafety’s regulatory guidance and engagement with platforms is already delivering significant outcomes.”

Month Two: Meta Asks Australia to Rethink Ban

In a post on its ‘Australia Policy Blog’, Meta highlighted its efforts towards compliance with Australia’s social media ban for under-16s while also calling on the government to reconsider its approach.

Meta, the parent company of Facebook, Instagram, and Threads, confirmed it had removed access to almost 550,000 accounts in December 2025. 

However, the remainder of the post was critical of the government’s approach. It pointed out some of the negative or ambivalent responses from teens and their parents to the ban, then suggested that the ban cuts teens off from age-appropriate safeguards, such as Teen Accounts. In other words, Meta argues, teens may actually lose protections as a result of the ban.

Finally, Meta urged the government to consider engaging with the industry instead of outright banning social media for kids:

“we call on the Australian government to engage with industry constructively to find a better way forward, such as incentivising all of industry to raise the standard in providing safe, privacy-preserving, age appropriate experiences online, instead of blanket bans.”

Month Three: Question Marks Over Effectiveness

Three months in, early data and reporting provide a mixed picture of effectiveness, particularly when it comes to removing underage users from platforms.

On one hand, reported enforcement activity has been significant, although the earlier reports of 4.7 million restricted accounts have since been questioned.

In fact, evidence suggests that removing underage users entirely has proven far more challenging. Industry and survey data indicate that around one in five under-16 users continued to access social media within two months of the ban. Other reports are even more critical: a News Corp survey cited in regulatory discussions found that up to 70% of underage users may still be active, often by falsifying their age or using workarounds such as shared accounts or alternative platforms.

These outcomes reflect structural limitations in the “reasonable steps” compliance model. Platforms are not required to verify every user’s age, and no single age assurance method has proven fully reliable or privacy-preserving. As a result, enforcement has largely focused on removing obvious underage accounts rather than eliminating access entirely.

There are also clear displacement effects. Some platforms report that younger users are migrating to less-regulated or excluded services rather than exiting social media altogether. At the same time, early anecdotal evidence from schools suggests positive behavioural shifts, such as improved classroom engagement and reduced distraction, indicating that partial reductions in access may still deliver benefits.

Experts caution that effectiveness should not be measured solely by account removal rates. Broader indicators, such as mental health outcomes, time use, and exposure to harm, will ultimately determine success.

At the three-month mark, Australia’s regime appears effective in forcing platform accountability and reducing some underage access, but not in fully preventing it. The data underscores a key reality: age-based restrictions can raise the barrier to entry, but without robust, privacy-conscious verification, they are unlikely to completely exclude underage users.

Ready to turn insight into action?
Connect with Privacy 108.

"*" indicates required fields

This field is for validation purposes and should be left unchanged.
Privacy 108 collects your name and contact details to respond to your enquiry and communicate with you about it. If you do not provide this information, we may be unable to respond. We do not disclose this information to third parties. For more information about how we handle your personal information, including how to access or correct it or make a complaint, please see our Privacy Policy or contact us at hello@privacy108.com.au.
Subscribe to our Newsletter

"*" indicates required fields

This field is for validation purposes and should be left unchanged.

Privacy 108 collects your name and email to send you our newsletter. If you do not provide this information, we will be unable to send it to you. We may use third-party service providers (such as email marketing platforms) to distribute our communications. Some providers may store information overseas, including in the United States. For more information about how we handle your personal information, including how to access or correct it or make a complaint, please see our Privacy Policy or contact us at hello@privacy108.com.au. You can unsubscribe at any time using the link in our emails or by contacting hello@privacy108.com.au.