Australia’s internet regulator has accused the world’s largest social media companies of failing to adequately implement the country’s ban on under-16s using their platforms, which came into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and inadequate safeguards to stop new account creation. In its first compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, cautioning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Revealed in First Major Review
Australia’s eSafety Commissioner has documented a concerning pattern of non-compliance among the world’s most prominent social media platforms in her inaugural review since the ban took effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to prevent minors from using their services. Julie Inman Grant raised significant concerns about structural gaps in age verification systems, noting that some platforms have allowed children who initially declared themselves under 16 to subsequently claim they were older, effectively circumventing the law’s intent.
The findings represent a significant escalation in the regulatory response, with the eSafety Commissioner moving beyond monitoring to direct enforcement. The regulator has emphasised that merely demonstrating some children still maintain accounts is insufficient; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to stop under-16s from opening accounts in the first place. The shift underscores the government’s determination to hold tech giants responsible, with possible sanctions looming for companies that fail to meet their statutory obligations. Among the poor practices identified in the report were:
- Allowing previously banned users to re-verify their age and regain account access
- Permitting repeated attempts at the same age assurance method without penalty
- Weak mechanisms to prevent under-16s from opening new accounts
- Inadequate complaint mechanisms for families and the wider community
- A lack of publicly available information about compliance actions and account removals
The Magnitude of the Challenge
The considerable scale of social media usage amongst young Australians highlights the regulatory challenge confronting both the authorities and the platforms themselves. With numerous accounts already removed or restricted since the ban’s implementation, the figures point to widespread initial non-compliance. The eSafety Commissioner’s findings suggest that the technical and procedural obstacles to enforcing age restrictions have proved considerably more complex than expected, with platforms struggling to differentiate authentic age confirmations from fraudulent ones. This complexity has left enforcement authorities grappling with the fundamental question of whether current age verification technologies are fit for purpose.
Beyond the technical obstacles lies a wider question about platforms’ willingness to prioritise compliance over user growth. Social media companies have long resisted strict identity verification requirements, citing data protection concerns and the genuine difficulty of confirming age online. However, the regulatory report suggests that some platforms may not be demonstrating adequate commitment to deploying the infrastructure required by law. The shift towards active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance systems, or they risk fines large enough to transform their operations in Australia and possibly reshape compliance frameworks internationally.
What the Figures Indicate
In the first month after the ban’s introduction, Australian officials indicated that 4.7 million accounts had been restricted or taken down. Whilst this figure initially appeared to demonstrate successful compliance, further investigation reveals a more nuanced picture. The substantial number of account takedowns implies that many under-16s had been able to set up accounts in the first place, revealing that preventive controls were inadequate. Moreover, the data raises the question of whether removed profiles represent genuine enforcement or simply users closing their accounts voluntarily in light of the new restrictions.
The limited transparency around these figures has troubled independent observers seeking to assess the ban’s true effectiveness. Platforms have provided minimal information about their compliance procedures, effectiveness metrics, or the nature of suspended accounts. This opacity makes it difficult for regulators and the general public to evaluate whether the ban is operating as planned or whether young people are merely finding other routes onto social media. The Commissioner’s demand for comprehensive proof of systematic compliance measures reflects mounting dissatisfaction with platforms’ unwillingness to share detailed data.
Sector Reaction and Pushback
The social media giants have responded to the regulator’s enforcement action with a combination of assurances of compliance and doubts about the practical feasibility of the ban. Meta, which runs Facebook and Instagram, stressed its commitment to complying with Australian law whilst contending that precise age verification remains a significant industry-wide challenge. The company has advocated an alternative strategy, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects wider concerns across the industry that the current regulatory framework puts an impractical burden on individual platforms.
Snap, the creator of Snapchat, has taken a more proactive public stance, stating that it had locked 450,000 accounts following the ban’s implementation and that it continues to lock more each day. However, industry observers question whether such figures demonstrate genuine compliance or simply reflect reactive account management. The core conflict between platforms’ commercial models, which have historically relied on maximising user engagement and growth, and the statutory obligation to systematically remove an entire age group remains unresolved. Companies have long resisted stringent age verification, citing privacy issues and technical constraints, creating an impasse between regulators and platforms over who bears responsibility for implementation.
- Meta maintains that age verification should take place at the app store level rather than on individual platforms
- Snap says it has locked 450,000 accounts since the ban’s implementation in December
- Industry groups point to privacy concerns and technical obstacles as barriers to effective age verification
- Platforms insist they are doing their best whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban enters its enforcement phase, key questions persist about whether the legislation will achieve its intended goals or merely drive young users towards less regulated platforms. The regulator’s first compliance report reveals that despite months of implementation, significant loopholes remain: children continue finding ways to bypass age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics argue that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely abandon mainstream platforms or simply shift towards alternative services, secure messaging apps, or VPNs that mask their location and help them evade age checks.
The ban’s global implications complicate assessments of its impact. Countries such as the United Kingdom, Canada, and several European nations are watching Australia’s experiment closely as they explore similar laws for their own populations. If the ban fails to reduce children’s digital engagement or to protect them from dangerous online content, it could damage the case for similar measures elsewhere. Conversely, if enforcement proves strict enough to genuinely restrict underage usage, it may inspire other governments to adopt comparable strategies. The outcome could shape worldwide regulatory patterns for years to come, ensuring that Australia’s enforcement efforts are scrutinised far beyond its borders.
Who Gains and Who Loses Out
Mental health advocates and organisations focused on child safety have endorsed the ban as an essential measure against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that removing young Australians from platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and lessen exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people: keeping friendships alive, accessing educational content, and participating in online communities built around shared interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families challenge.
The ban’s real-world effects reach beyond individual users to content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now confront legal barriers to participation. Small Australian businesses that depend on social media marketing lose access to younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban inadvertently favours large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the straightforward goal of child protection.
What Lies Ahead for Regulatory Action
Australia’s eSafety Commissioner has announced a notable transition from hands-off observation to active enforcement, marking a pivotal moment in the implementation of the age restriction. The authority will now gather information to ascertain whether services have failed to take “reasonable steps” to restrict underage participation, a standard that goes beyond simply documenting that children remain on these platforms. It demands concrete evidence that companies have introduced proper safeguards and processes designed to keep minors off their services. The regulator has signalled it will pursue investigations methodically, building cases that could result in significant fines for non-compliance. This move from oversight to enforcement reveals mounting concern about the platforms’ existing measures and indicates that voluntary cooperation alone will not be enough.
The enforcement stage raises significant questions about the adequacy of penalties and the practical mechanisms for ensuring platform accountability. Australia’s statutory provisions provide regulatory tools, but their efficacy depends on the eSafety Commissioner’s willingness to pursue enforcement and the platforms’ capacity to adapt meaningfully. Global regulators, notably in Britain and Europe, will closely track Australia’s approach and its results. A successful enforcement campaign could create a model for other nations considering comparable restrictions, whilst failure might undermine the entire regulatory framework. The next phase will prove decisive in determining whether Australia’s pioneering approach produces real safeguards for adolescents or remains largely symbolic.
