Parliamentary Updates

Zali Steggall MP seconds Kate Chaney's Bill criminalising AI 'Nudify' apps

28 July 2025

Kate Chaney MP

I present the Criminal Code Amendment (Using Technology To Generate Child Abuse Material) Bill 2025 and the Explanatory Memorandum.

I move that this Bill be now read a second time.

This is a Bill to make it an offence to download child sexual abuse material generators. Right now, it is possible to access and download these sickening technologies from websites, app stores and the dark web. The Bill focuses on a particular type of artificial intelligence tool designed for the purpose of creating unlimited, on-demand material depicting the sexual abuse or exploitation of children, often tailored to specific preferences. Images can be deleted before detection, and the proliferation of this material makes it harder for law enforcement to identify actual child victims.

Possessing a single image is already illegal, but the capacity to infinitely produce, delete and reproduce abusive images through AI tools represents a new and urgent threat. This is by no means the only legislation that needs to be passed on this topic, but it would plug an immediate hole in the Criminal Code.

Artificial intelligence has many potential benefits for much-needed productivity, innovation and efficiency, and no doubt benefits that we can't even begin to imagine. Even within the sphere of child safety, AI can enhance monitoring and reporting tools, ease the burden on frontline responders, and help to locate child victims faster. But, like any new technology, those benefits are accompanied by new risks. Some of these risks exacerbate harms we already understand, such as privacy breaches, scams and disinformation, and enable further harm on issues like child sexual exploitation and abuse. Beyond the risks we already understand, AI potentially opens up new categories of harm that we are only just beginning to get our heads around.

Regulating artificial intelligence is very challenging, and we are still working through what should be policed, how, and by whom. Technology is developing at such an accelerating rate that it is hard even to find a workable current definition of AI for regulatory purposes. Assigning responsibility between developers, deployers and users is complex, and AI is a global issue – it crosses jurisdictional borders. In the coming years we will need to legislate more broadly for transparency, safety and responsibility, to ensure we can reap the benefits of AI without blindly accepting the downsides. That requires a holistic approach, and it must be an urgent priority for this parliament.

In the meantime, when so many parents are concerned about what role Government should be playing to protect their children, we must plug the most urgent holes in our existing legislative framework as they emerge. This Bill plugs an urgent and alarming hole.

AI technologies designed specifically to generate child abuse material are available on the dark web and on app stores. As the ABC reported, an intelligence company found late in 2023 that non-consensual explicit generative AI tools had moved from niche internet forums into a scaled, monetised online business. It found there had been more than 24 million unique visits to the websites of 34 of these tools, and that links to access them had risen sharply across platforms like Reddit, X and Telegram.

This Bill simply creates a new offence under the Criminal Code to prevent people from downloading these tools. It also creates an offence of downloading data for the purpose of generating child sexual abuse material using AI tools. These tools need to be specifically addressed for a few reasons. Every AI abuse image starts with photos of a real child: for these tools to be able to create child abuse material, they must be trained on existing images, so a child is always harmed somewhere in the process. These tools allow perpetrators to generate images, including with the image or details of a particular child, and then delete them before detection, meaning they can continually evade possession laws. They can then use the trained tools to generate illegal material with only word prompts.

AI-generated child abuse material also normalises, familiarises and desensitises abusive behaviour, which raises the threshold for satisfaction and places real children at greater future risk. The proliferation of these AI images undermines law enforcement's ability to investigate offences, as it becomes harder to distinguish between synthetic and real material. As well as deleting images before detection, perpetrators can put photos of real children through AI generators so that they appear to be synthetic.

The identification task is occupying valuable law enforcement resources and delaying action. Victim identification analysts are dealing with 65,000 new reports to the Australian Centre to Counter Child Exploitation (ACCCE) each year. There is no good reason for the existence of these AI tools, and plenty of good reasons that they should not be downloaded by Australians. The only defences proposed in the legislation relate to the use of these tools by law enforcement officers and for research purposes.

Similar legislation is being introduced in the United Kingdom and the EU as regulators struggle to keep up with technological developments. This issue has come to my attention through the work of the International Centre for Missing and Exploited Children (or ICMEC Australia), an organisation that strives to end online facilitated child exploitation and abuse.

In the last couple of weeks, ICMEC Australia hosted a national Roundtable on child safety in the age of AI. As well as identifying this legislative gap, the Roundtable set out a range of reform priorities needed to keep children safe.

A whole-of-system response is required, with more work on prevention and education and greater responsibility on technology companies for detection and prevention, backed by safety by design and the promised duty of care. We need an urgent response to this from the Government. While a holistic view is important, we also need to plug the holes in the current legislation to deal with these emerging harms.

The Government has not yet responded to last year's statutory review of the Online Safety Act, and it acknowledged in 2023 that existing laws do not adequately prevent AI-facilitated harms before they occur. Nor is this covered by the current five-year action plan under the National Strategy to Prevent and Respond to Child Sexual Abuse (NSPRCSA), which expires next year.

Mr Speaker, there is plenty of work to be done to make the take-up of AI safe and consistent with our shared values. This Bill addresses a very specific harm that can easily be dealt with within the framework of our existing Criminal Code.

I urge the Government to consider this amendment with urgency to protect Australian children from harm.

Zali Steggall MP

Thank you, Mr Speaker. I second this Bill.

This Bill, the Criminal Code Amendment (Using Technology To Generate Child Abuse Material) Bill 2025, is a really important, specific amendment to the Criminal Code, and I commend the Member for Curtin for addressing this because, all too often, big reform can be difficult in this place.

But when there is a key area where you can clearly see harm occurring, it is incumbent on the Government to act quickly and promptly, and we on the crossbench are providing a road map to a very clear hole that can be filled now. The Bill addresses a deeply disturbing and rapidly evolving threat: the use of AI and other technologies to generate child abuse material.

This is a confronting reality, and there should be multi-partisan agreement to do everything possible to stamp out this kind of child abuse material. Artificial intelligence offers significant benefits for productivity and innovation – we don't dispute that. It is set to reshape the economy, transforming how we work, create and interact, but the tools that offer so much promise are now being misused to exploit and harm children. This Bill provides new offences criminalising the downloading, supplying and enabling of access to technologies whose sole purpose is to create child abuse material.

It targets a very particular type of AI, designed to create ‘on-demand’ material and then delete it to avoid detection. It also targets the collection and distribution of data intended to train such technologies. These provisions acknowledge that AI abuse starts with real children, whose images these tools are trained on. In that way, a child is always harmed in the process.

These provisions are necessary and urgently needed to close a very dangerous gap in our criminal law and to ensure that our justice system keeps pace with technological developments. Regulating AI is challenging, no-one disputes that, and it will take time for Australia to get the right legislative framework in place, one that adequately addresses the complexities of these rapidly evolving technologies.

We know the National Framework for Child Exploitation, for example, drafted in 2021, is already out of date when it comes to the threats of AI. We know that the UK and EU are already introducing similar provisions to protect children from this type of material. Australia will fall behind unless the Government acts, and that is the warning of this Bill today.

There is a growing recognition around the world of the need for this kind of legislation and for these guardrails. We have to act decisively and with urgency to protect children from exploitation in all its forms, and so I commend this Bill and call on the Government to act urgently to close this loophole.