Could Australia’s Social Media Ban Mark the First Step Toward Curbing AI’s Influence on Children?
In December 2025, Australia implemented the world’s first such social media ban in a democracy, barring children under 16 from accessing ten of the world’s most widely used social platforms, including Instagram, Snapchat and X.
"This will make an enormous difference. It is one of the biggest social and cultural changes that our nation has faced," Prime Minister Anthony Albanese told a news conference on December 10. "It's a profound reform which will continue to reverberate around the world."
With Australia’s summer school holidays set to begin later this month, Albanese encouraged children in a video message to "start a new sport, new instrument, or read that book that has been sitting there for some time on your shelf."

Since the legislation was passed in 2024, other countries have moved in the same direction, either adopting or exploring comparable measures.
E.U. lawmakers have formally voiced concern about the “addictive” impact of social media by passing a resolution last November that urges restrictions on under-16s’ access without parental approval, though the measure remains purely symbolic and has no legal effect.
The European Commission’s president, Ursula von der Leyen, said last September she would closely monitor how Australia’s policy is put into practice, condemning the “algorithms that prey on children’s vulnerabilities with the explicit purpose of creating addictions” and warning that parents feel helpless in the face of “the tsunami of big tech flooding their homes.”
Meanwhile, UNICEF, the UN children’s agency, has warned that age-related restrictions alone won’t keep children safe and could “even backfire,” noting that online platforms can be vital lifelines for isolated or marginalized children and that regulation should not replace tech companies’ responsibility to invest in safety measures.
It urged governments, regulators and tech companies to work together with children and families to build a digital space that is safe, inclusive and respects children’s rights. “Regulation should not be a substitute for platforms investing in child safety. Laws introducing age restrictions are not an alternative to companies improving platform design and content moderation,” it said.

Experts Warn of Additional Harms Posing Safety Risks to Children
The Australian government has implemented the ban to protect children from online dangers such as harmful content, cyberbullying, grooming, and “predatory algorithms”, with authorities including the Australian Federal Police warning that chatrooms on these platforms can also serve as breeding grounds for radicalization and exploitation.
Gaming platforms and social media pose similar risks for children: excessive time spent online and potential exposure to predators, harmful content or bullying, Dr Daniela Vecchio, a psychiatrist, told the BBC.
"[The legislation] is excluding platforms where children interact with many others and some of them can be people who harm them," Vecchio said.

The “Digital use and risk: Online platform engagement among children aged 10 to 15” report highlights that 96% of children aged 10 to 15 had used social media, a majority had used a communication platform to chat, message, call or video call others, and 86% had played online video games.

Another concern extends beyond gaming: AI chatbots, another facet of online activity, have come under scrutiny for everything from disseminating false information to allegedly encouraging children to harm themselves.
The Australian government is increasingly alarmed by a troubling trend in which AI chatbots are harassing children and, in some cases, encouraging them to take their own lives.
The Education Minister Jason Clare warned that AI chatbots are “supercharging” bullying. “AI chatbots are now bullying kids. It’s not kids bullying kids, it’s AI bullying kids, humiliating them, hurting them, telling them they’re losers … telling them to kill themselves. I can’t think of anything more terrifying than that,” he said.

Australia Demands AI Chatbot Firms Detail Child Protection Measures
Clare also flagged AI-powered “nudify apps” as a major concern, warning that the technology can be used to generate sexual images without consent.
A study by the eSafety research program, published in September 2023, revealed that most young people first encounter online pornography unintentionally, when it appears online or is shared with them, before the age of 13.

It further noted a link between earlier exposure to pornography and non-heterosexual identity: “for some LGBTIQ+ young people, pornography is an important source of information about sex (Bőthe et al. 2019; British Board of Film Classification (BBFC) 2020). Our survey data indicated that some young people (including those who are LGB+) are more likely to encounter pornography at a younger age compared to other young people.”
In October, Australia directed four AI chatbot companies to detail how they safeguard children from sexual content and self-harm material, as the country’s internet regulator steps up efforts to strengthen online safety in the AI space.
The eSafety Commissioner said in a statement it sought details of safeguards against child sexual exploitation, pornography and material promoting suicide or eating disorders. It sent notices to Character Technologies, owner of the celebrity simulation chatbot tool character.ai, as well as Glimpse.AI, Chai Research and Chub AI. “There can be a darker side to some of these services with many of these chatbots capable of engaging in sexually explicit conversations with minors,” Commissioner Julie Inman Grant said in the statement.
"Concerns have been raised that they may also encourage suicide, self-harm and disordered eating," she added.

Meanwhile, in a statement published last September, the regulator said that “we’ve been concerned about these chatbots for a while now and have heard anecdotal reports of children – some as young as 10 years of age – spending up to 5 hours per day conversing, at times sexually, with AI companions,” warning that minors could develop sexual or emotionally dependent relationships with them, or be driven toward self-harm.
“We know that a high proportion of this accidental exposure happens through search engines, but other services such as app stores play an important role as ‘gatekeepers’ online, too,” Inman Grant added.

Risks of Generative AI for Children
Children are particularly vulnerable to mis/disinformation, as their cognitive skills are still in development. At the same time, generative AI can rapidly produce text-based disinformation that is indistinguishable from, and often more convincing than, content created by humans, while AI-generated images can be impossible to differentiate from real faces and are sometimes seen as even more credible.
Experts warn that chatbots marketed as child-safe may require stricter testing. As children engage with generative AI systems and share personal information during interactions, concerns arise about the implications for their privacy and data protection.
“Whereas current algorithms often promote sensationalist content to maximize attention, generative AI chatbots posing as real people could first gain children’s trust and, over time, influence them in more subtle ways for commercial or political gains. The online battleground could thus ‘shift from attention to intimacy,’” historian and philosopher Yuval Noah Harari told UNICEF.
In October 2024, a mother in Florida filed a lawsuit against AI chatbot startup Character.AI, alleging that her 14-year-old son’s February suicide was linked to his addiction to the company’s service and his attachment to one of its chatbots. She said Character.AI targeted her son with “anthropomorphic, hypersexualized, and frighteningly realistic experiences.”

Furthermore, evidence suggests these systems are designed to keep users engaged for longer periods, and their use has even sparked a phenomenon known as “AI psychosis”, where individuals grow increasingly dependent on AI chatbots and begin to believe that imagined events are real.
"There's zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality," Microsoft's head of artificial intelligence (AI), Mustafa Suleyman, wrote in a post on X.
According to a BBC report, examples of “AI psychosis” include believing one has unlocked a secret aspect of the tool, forming a romantic relationship with it, or concluding that one has god-like superpowers.

Analysts at the Internet Watch Foundation (IWF) have, for the first time, uncovered AI-generated child sexual abuse images associated with AI chatbots. “Accessing the same website via a particular digital pathway allows users to interact with multiple chatbots that will simulate ‘abhorrent’ sexual scenarios with children. In this process, AI child sexual images are shared, some depicting children as young as seven,” the report reveals.

In November, data from the Internet Watch Foundation showed that reports of AI-generated child sexual abuse material more than doubled in the past year, rising from 199 in 2024 to 426 in 2025. The material being created has also become more extreme: the most serious Category A content (which can include imagery involving penetrative sexual activity, sexual activity with an animal, or sadism) rose from 2,621 to 3,086 items over the same period.

In addition, a recent Forbes investigation reported that a previously convicted sex offender was accused of “taking a parent’s photos of their child from Facebook and posting them to a pedophile group chat on encrypted messaging app Teleguard claiming they were his stepchildren,” while other members of the group transformed the images into explicit sexual material.

The Exploitation of Children’s Personal Photos to Fuel AI Tools
Australia’s eSafety Commissioner has stressed the importance of more rigorous oversight of how children’s data is gathered, used, and stored, particularly when it is exploited for commercial purposes. “Perpetrators can exploit the ability of large language models (LLMs) powered by AI to mimic natural human language. This allows them to groom children in automated and more targeted ways, and cases have already been reported where generative AI technologies are being used to facilitate child grooming,” it said.
A 2023 study by the Stanford Internet Observatory and Thorn revealed that generative AI tools are already being used to create realistic computer-generated child sexual abuse material (CG-CSAM). A large public dataset known as LAION-5B, managed by German nonprofit LAION and used to train popular AI models, was found to include instances of child sexual abuse material (CSAM).

The inclusion of these images in the training data could enable AI models to generate new, realistic depictions of child abuse, including “deepfake” images of exploited children, the study revealed. Once created, this content can exist forever in the digital world, constantly resurfacing and causing psychological trauma to victims who may never even be aware of its existence until it’s too late.
In 2024, Human Rights Watch found that LAION-5B used Australian and Brazilian children’s images to create powerful artificial intelligence (AI) tools without the children’s knowledge or consent.


The LAION-5B dataset analyzed by Stanford researchers included billions of images collected from the internet, including from social media and adult websites. Out of the more than five billion images, the researchers identified at least 1,008 cases of child sexual abuse material.

"This is changing all the time. It's one of the reasons why the social media reforms are dynamic," the Australian Education Minister Jason Clare said.
"The job will never, ever finish because there'll always be people coming up with some app or some piece of technology which they think is fun, but hurts our kids," Clare added.