Would an Australian-style youth social media ban work here?

Australia’s world-first ban on social media use by under-16s has taken effect, forcing platforms including TikTok, YouTube, Instagram and Facebook to block underage users or face fines of up to €28 million. From Dublin to Brussels, policymakers are watching closely. The question in Ireland is not only whether such a ban could work here, but whether it should—legally, practically and politically—within the European framework.

In Ireland, the immediate answer from government is cautious distance. Communications Minister Patrick O’Donovan says a blanket, Australia-style prohibition is not on the table “at this time,” though he adds the state “hold[s] an Australia-type [ban] in reserve if we have to.” Instead, ministers are moving ahead with a more targeted initiative: an age-verification system designed to prevent children from accessing adult content online, potentially using an “online wallet” linked to a person’s PPS number.


That divergence captures the fault line running through this debate—between sweeping bans and tighter, risk-based measures; between fast-moving political appetite and the slower grind of legal rights and technical feasibility.

Internet law expert Simon McGarr argues the Australian approach is, in effect, a live-fire trial. “I think the Australian ban has been implemented more as an experiment than anything else,” he said, noting that Australia operates outside the European Convention on Human Rights and is not bound by the EU’s Charter of Fundamental Rights. For Ireland and other EU member states, that difference is not academic. It frames how governments must balance children’s safety with freedom of expression, access to information and proportionality in lawmaking.

“This move involves explicitly banning a group of people from taking actions, which are free for other people to do on the basis of their age,” McGarr said. States do restrict harmful products or services for minors—smoking is the classic example—where evidence of harm is established. “The difference here is that this is being implemented without any evidence presented,” he added.

That assertion goes to the heart of whether a ban would withstand scrutiny in an Irish or EU context. Here, sweeping restrictions tend to live or die on their evidence base, narrow tailoring and demonstrable necessity. A blanket prohibition that captures all platforms and all content may simply be too blunt for Europe’s rights-led system—particularly if regulators and courts see safer-by-design duties and targeted restrictions as less intrusive alternatives.

Platforms themselves are telegraphing the practical challenges. YouTube called the Australian measure “an extreme position,” warning that blanket bans can carry “extremely negative consequences.” Meta has already begun removing under-16s in Australia from Instagram, Threads and Facebook, but says compliance will be “ongoing and multi-layered”—a telling phrase that hints at the difficulty of accurately verifying age at scale, preventing workarounds and managing appeals. TikTok told users the changes “may be upsetting,” but pledged to follow the law.

Ireland’s own push on age verification for adult content exposes a parallel set of concerns. The Irish Council for Civil Liberties warns that state-run digital identity checks can become “disproportionate” and “veer into the realm of authoritarianism.” Digital Rights Ireland chair Dr. TJ McIntyre argues that giving identity data to social media companies—directly or indirectly—“will give them even more information about individuals than they have already.”

Those objections underscore the policy trade-offs. Age checks promise a gatekeeping function, but linking such systems to PPS numbers or other government identifiers risks creating a de facto digital ID infrastructure by the back door, amplifying privacy and security risks. Policymakers must also decide where verification sits in the ecosystem. Meta’s public stance is that app stores should shoulder responsibility for age assurance, not social platforms alone. That model could centralize checks and reduce duplication—but it also concentrates power in app gatekeepers and does little for web access outside app stores.

Campaign groups point to a different path: make platforms safer by default. Switch off addictive or manipulative recommendation algorithms for young users; limit data-driven advertising; harden reporting tools; restrict contact from unknown adults; enforce robust nightly downtime. These measures are incremental, but they aim squarely at reducing risk at the design level rather than chasing perfect enforcement against every underage account.

All of this unfolds against Ireland’s distinctive position in Europe’s digital economy. TikTok, Meta and Google (YouTube’s parent) all base their European headquarters here, employ thousands and contribute billions in corporation tax. That reality does not determine policy, but it shapes the diplomatic and regulatory context in which Ireland operates—especially if the EU begins a formal debate on prohibiting minors’ access to social platforms. The Department of Communications has already signaled its preference for any such decision to be taken at EU level, “with regard to the rights of children and young people.”

That EU-first stance is both pragmatic and strategic. Pan-European rules shore up legal certainty for companies and citizens, minimize fragmentation and reduce the risk of legal challenges that exploit cross-border inconsistencies. It also keeps Ireland aligned with a rights framework that demands demonstrable necessity and proportionality—thresholds that a total ban may struggle to meet without a deep pool of evidence on harms and the effectiveness of the measure compared with less intrusive options.

There is also the question of what success would look like. Would an Irish ban meaningfully reduce exposure to harmful content, or would it displace activity to VPNs, alternative services and encrypted channels? Could enforcement at platform level catch most under-16s without ensnaring older teens and young adults who fail automated checks? Meta’s own description of compliance as “ongoing and multi-layered” suggests the operational reality is messy and prone to false positives and negatives alike.

For now, Ireland’s direction of travel appears to be a layered approach: targeted age verification for adult content, pressure on platforms to adopt safer defaults for teens, and an openness to tougher measures if evidence justifies them. That stance is not indecision; it is a bet on regulation that is specific, rights-compliant and enforceable, rather than sweeping and symbolic.

Australia’s experiment will still matter in Dublin and Brussels. If it demonstrates measurable benefits without excessive collateral harms, it will strengthen advocates of tougher age-based restrictions. If it triggers large-scale evasion, rights challenges or unintended consequences, it will reinforce those urging design-level fixes and EU-level harmonization.

Could a social media ban for under-16s work in Ireland? Legally and politically, not easily. Practically, only with heavy, privacy-sensitive verification infrastructure that civil liberties groups warn against. Strategically, Ireland is likely to keep pushing for EU-wide solutions that blend age assurance for the highest-risk content with platform accountability and safety-by-design benchmarks—while watching Australia closely for lessons, good and bad.

By Abdiwahab Ahmed
Axadle Times International – Monitoring.