Ireland weighs a ban on children’s social media use, department says
Ireland is weighing whether to bar children and young people from social media — a so-called “digital age of majority” — as governments worldwide watch Australia’s landmark under-16 ban for lessons on enforcement, rights and technical feasibility.
Officials in Dublin told reporters the country prefers a coordinated European Union approach, noting the issue should be considered “by the EU and EU member states together” and that any move must respect the rights of children and young people.
The European Commission has signalled it will examine the problem at the bloc level. President Ursula von der Leyen announced in September the creation of an expert panel to study options — including Australia’s new rules — and to advise on the best way forward for Europe on social media regulation.
Australia’s law, the first of its kind, prohibits children under 16 from accessing major social media platforms. Companies face fines of Aus$49.5 million (€28 million) if they fail to take “reasonable steps” to comply. The measure is intended to reduce young users’ exposure to harms linked to social media, from harmful content to manipulative design.
Meta, the parent company of Instagram, Threads and Facebook, said it began removing accounts it believes belong to users under 16 in Australia ahead of the ban’s enforcement date and expects compliance to be “an ongoing and multi-layered process.” The company said younger users could save and download their histories and that an account would be restored “exactly as you left it” once the user turns 16.
Instagram estimates about 350,000 Australian users aged 13 to 15 will be affected. Some widely used apps — including Roblox, Pinterest and WhatsApp — are currently exempt, although regulators say the list remains under review.
Meta has argued that app stores, not individual platforms, should be required to verify ages and obtain parental approval when teens under 16 download apps, so that verified age information can be shared across services. The company also asked regulators to “hold app stores accountable” for verification rather than forcing multiple checks across many apps.
YouTube has raised a different objection, saying the ban could make under-16s “less safe” because they can still visit the site without an account and would lose safety filters that apply to signed-in users. Australia’s Communications Minister Anika Wells rejected that argument as “weird,” saying YouTube should fix unsafe, age-inappropriate content on its platform and that the law is meant to make it easier for kids to “chase a better version of themselves.”
The new rules have already prompted legal and practical pushback. The Digital Freedom Project, an internet rights group, filed a High Court challenge last week calling the laws an “unfair” assault on freedom of speech. Guidance from Australia’s eSafety regulator acknowledges teens will try to circumvent restrictions — uploading fake identification, misrepresenting their age, or using artificial intelligence to alter images — and warns that no solution is likely to be wholly effective.
That admission helps explain why regulators and child-safety experts say Australia’s experiment will be closely watched internationally. Malaysia has signalled plans to bar under-16s from signing up to social media next year, New Zealand is introducing a similar ban, and EU policymakers are debating harmonised approaches to age assurance and platform obligations.
Age assurance — the suite of technologies and processes platforms would use to establish a user’s age — is central to the debate. Critics warn invasive verification systems could create privacy risks and exacerbate digital exclusion, while proponents say robust checks are necessary to protect minors from pervasive, targeted harms online.
Ireland is meanwhile deepening cross-border cooperation. Coimisiún na Meán, Ireland’s media regulator, signed a Memorandum of Understanding with Australia’s eSafety Commissioner to share data, methodologies and best practices on online safety. Niamh Hodnett, Online Safety Commissioner at Coimisiún na Meán, said the MoU will “help us work towards ensuring a safer and more positive online environment for all our citizens.”
Julie Inman Grant, Australia’s eSafety Commissioner, framed the agreement as part of a broader effort to “embed safety into the very architecture of digital products and services,” and highlighted alignment with Ireland on protecting minors and implementing age-assurance technologies.
At the same time, broader regulatory friction is emerging. The European Commission opened an investigation into Meta over its roll-out of artificial intelligence features on WhatsApp and concerns the company may have used its market position to block rival AI chatbots. WhatsApp has called the claims baseless.
Policymakers face a delicate balancing act: designing enforceable rules that protect young people without unduly restricting children’s rights, privacy or access to educational and social resources. Australia’s law will provide real-world evidence about enforcement costs, the effectiveness of age-assurance techniques, and unintended consequences such as increased use of unregulated channels.
For countries such as Ireland considering action, the choice is whether to move alone — risking fragmentation and varied protections across jurisdictions — or to coordinate through the EU, where a single set of rules could ease enforcement and standardize safeguards for minors across member states.
The coming months will test not only technology and legal strategy but also political will: can governments craft regulation that deters harm, limits circumvention, protects rights, and incentivizes platforms and app stores to build safety into services by design? Australia’s experiment will be one of the first answers.
By Abdiwahab Ahmed
Axadle Times international–Monitoring.
