When police searched the computer of 29-year-old IT worker Aaron Pennesi in March, they were looking for the malware he used to steal personal information from his colleagues at The Forest High School on Sydney’s northern beaches.
That wasn’t all they found. In an all-too-common turn of events, police stumbled upon child sexual abuse material on a laptop seized for another reason. But something was different about this content.
The scenes depicted weren’t real.
Instead, Pennesi had used a popular AI-generation website to create the child abuse material using search prompts that are too grotesque to publish.
In an even more severe case, a Melbourne man was sentenced to 13 months in prison in July last year for offences including using an artificial-intelligence program to produce child abuse images.
Police found the man had used an AI image-generation program and inputted text and images to create 793 realistic images.
As cases involving the commercial generation of AI child abuse material that is completely original and sometimes indistinguishable from the real thing become increasingly common, one expert says the phenomenon has opened a “vortex of doom” in law enforcement’s efforts to stamp out the content online.
Naive misconceptions
As the tug of war over the future of AI plays out in the court of public opinion, one of the more terrifying realities suggesting the technology could do more harm than good is the ease with which it enables offenders to produce and possess child sexual abuse material.
The widespread adoption of image-generation models has been a boon for paedophiles seeking to access or profit from the content online.
Interpol’s immediate past director of cybercrime Craig Jones says the use of AI in child sexual abuse material online has “skyrocketed” in the past 12 to 18 months.
“Anybody is able to use an online tool [to access child sexual abuse content], and with the advent of AI, those tools are a lot stronger. It allows offenders to do more,” Jones said.
Craig Jones, the immediate past director of cybercrime at Interpol, is now an independent strategic advisor to cybersecurity firm Group-IB.
The AFP-led Australian Centre to Counter Child Exploitation, or ACCCE, received 63,547 reports of online child exploitation from July 2024 to April 2025. That’s a 30 per cent increase on the previous financial year, with two months remaining.
“We’re seeing quite a significant increase in what’s occurring online,” AFP Acting Commander Ben Moses says, noting that those statistics don’t differentiate between synthetic and real child abuse content.
That’s in line with the legal treatment of the issue; possessing or creating the content in either form is punishable under the same offences.
But a common misconception is that AI-generated material shouldn’t be taken as seriously or is not as harmful as the traditional type because no child is abused in the creation of the material.
Moses says that while identifying real victims will always be the ACCCE’s priority, AI-generated content is being weaponised against real children.
“It can still be very harmful and horrific. [It] can include the ability … to generate abuse in relation to people they know. For those victims, it has significant consequences.”
In 2024, a British man was jailed for 18 years for turning photographs of real children, some younger than 13, into images to sell to other paedophiles online. The sentencing judge called the images “chilling”.
In another British example, a BBC report in 2024 found evidence that an adults-only VR sex simulator game was being used to create child models for use in explicit sex scenes, and that models had been based on photos taken of real girls in public places.
“The other aspect of it, and what may not be well known, is cases where innocent images of children have been edited to appear sexually explicit, and those photos are then used to blackmail children into providing other intimate content,” Moses says.
Moses says this new “abhorrent” form of sextortion, and how it opens up new ways for offenders to victimise minors, is of great concern to the ACCCE.
Professor Michael Salter, the director of Childlight UNSW, the Australasian branch of the Global Child Safety Institute, calls the misconception that AI-generated abuse material is less harmful “really naive”.
“The forensic evidence says that it is a serious risk to children.”
Salter says the demand for synthetic material primarily comes from serious offenders and that, generally, they also possess actual child sexual abuse content.
“It’s also important to understand that a lot of the material that they’re creating is extremely egregious because they can create whatever they want,” he said.
“The sort of material they’re creating is extremely violent, it’s extremely sadistic, and it can include imagery of actual children they want to abuse.”
Tech-savvy paedophiles
AI child sexual abuse material first crossed Michael Salter’s desk around five years ago. In that time, he’s witnessed how offenders adapt to new technologies. As AI advanced, so did the opportunities for paedophiles.
He explains that AI was first used to sharpen older material and later to create new images of existing victims. The practice has since expanded to offenders training their own models or using commercially available image-generation sites to create brand-new material.
This can include deepfake videos featuring real people. But Salter says still-image generation, which is frighteningly easy to access, remains more common.

UNSW Professor Michael Salter is the director of the Australasian division of the Global Child Safety Institute’s Childlight initiative. Credit: UNSW Sydney
“We have commercial image generation sites that you can go to right now, and you don’t even have to look for child sexual abuse material because the generation of [it] is so popular that these sites often have trending pages, and I’ve seen sections where the keyword is ‘pre-teen’, or ‘tween’, or ‘very young’.”
In a 2024 report, the Internet Watch Foundation (IWF) found a 380 per cent increase in reported cases of AI-generated child sexual abuse content online, noting that the material was becoming “significantly more realistic” and that perpetrators were finding “more success generating complex ‘hardcore’ scenarios” involving penetrative sexual activity, bestiality or sadism.
“One user shared an anonymous webpage containing links to fine-tuned models for 128 different named victims of child sexual abuse.”
Internet Watch Foundation’s July 2024 AI child sexual abuse material report
The IWF found evidence that AI models that depict known child abuse victims and famous children were being created and shared online. In some of the most perverse cases, this could include the re-victimisation of “popular” real-life child abuse victims, with AI models allowing perpetrators to generate new images of an abused minor.
The report acknowledged that the usage of these fine-tuned models, known as LoRAs, likely went much deeper than the IWF could assess, thanks to end-to-end encrypted, peer-to-peer networks that were essentially inaccessible.
Moreover, Australia’s eSafety Commission warns that child sexual abuse material produced by AI is “highly scalable”.
“[It requires] little effort to reproduce en masse once a model is capable of generating illegal imagery,” a spokesperson said.
Commercial interests
The rapid escalation in the amount of content available online is partly attributed to the way AI has enabled the commercialisation of child sexual abuse material.
“Offenders who are quite adept at creating material are essentially taking orders to produce content, and this material is increasingly realistic,” Salter says.
Jones says that over the span of his career, he has seen the provision of child sexual abuse content go from physical photocopies shared in small groups to material available online in a couple of clicks.
“Unfortunately, there is a particular audience for child sexual abuse material, and what AI can do is generate that content, so [commercialisation] is serving a demand that is out there.”
In one of the biggest stings involving an AI child abuse enterprise, Danish police, in conjunction with Europol, uncovered a subscription service that commercialised access to the content. The global operation saw two Australian men charged and 23 others apprehended around the world.

A phone seized under Operation Cumberland, a global sting targeting the alleged production and distribution of child abuse material generated by artificial intelligence. Credit: Australian Federal Police
“There were over 237 subscribers to that one matter,” Moses says of Operation Cumberland. “When we talk about proliferation and people profiting from this type of activity, this is of great concern to us.”
Swamped by the growing sea of content, officers now face the difficulty of identifying which situations depict real children being abused, as opposed to an AI-generated child who doesn’t exist.
“It also means that police have to spend quite a lot of time looking at material to determine whether it’s real or not, which is quite a serious trauma risk for police as well,” Salter says.
Moses from the ACCCE agrees that it’s “very difficult work” for officers. “Whilst it is very confronting material, it doesn’t compare to the trauma that child victims endure, and there’s very much a focus on identifying victims.”
The influx of AI-generated content has complicated its mission in many ways, Moses says, including by diverting crucial resources from the ACCCE's primary goal of rescuing children who are being abused.
“It takes a lot of time to identify real victims, and the concern for us … is that the [AI-generated content] is becoming increasingly harder [to detect], and it takes time away from our people who are trying to identify real victims.”
Law enforcement ‘overwhelmed’
While prosecutions for offences involving synthetic abuse material have increased, they haven't kept pace with the growth in the amount of content found online.
Salter says resourcing is one of the biggest challenges facing law enforcement.
“Law enforcement is so overwhelmed with really egregious online sexual exploitation cases … their primary priority is to prevent and rescue the abuse of actual kids.”
He says it’s a struggle he’s heard across all jurisdictions.

Australian Federal Police arrest a Queensland man, charged with four counts of possessing AI-generated child abuse material. Credit: Australian Federal Police
“They’re really struggling in terms of people power, in terms of access to the technology that they need to conduct these investigations and to store that amount of material,” Salter says.
“There needs to be a huge uplift right across the law enforcement space.”
Additionally, AI-generated child sexual abuse content requires a complete rethink of how the content is detected.
Older automated detection methods relied on scanning for verified abuse material, meaning content had to have already been assessed by a human as illegal before it could be detected.
“The obvious challenge we see with AI-generated material is that it’s all new, and so it’s very unlikely, through current detection technologies, that we can proactively screen it,” Salter says.
Unregulated threat let loose
It’s a global issue that crosses jurisdictions and exists on the internet’s severely under-regulated new frontier. But that hasn’t deterred Australia’s eSafety commissioner, Julie Inman Grant, from introducing world-first industry standards to hold tech companies to account for the content they platform.
The standards came into force in December 2024 and require storage services like Apple’s iCloud and Google Drive, messaging services, and online marketplaces that offer generative AI models to prevent their products from being misused to store or distribute child sexual abuse material and pro-terror content.

eSafety Commissioner Julie Inman Grant during a Senate estimates hearing at Parliament House in Canberra in 2024. Credit: The Sydney Morning Herald
“We have engaged with both AI purveyors and the platforms and libraries that host them to ensure they are aware of their obligations under the standards,” an eSafety Commission spokesperson said.
“We believe the standards are a significant step in regulating unlawful and seriously harmful content and align with our broader efforts to ensure that AI tools, such as those used to create deepfakes, are held to the highest safety standards.”
The recent passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 also expanded the criminal offences available for non-consensual, sexually explicit AI-generated material.
While international companies can face multi-million dollar penalties for breaches of the eSafety Commission’s standards in Australia, major tech players like Meta are increasingly adopting end-to-end encryption, which means even the companies themselves can’t see what content they’re hosting, let alone law enforcement.
Interpol works at the forefront of these issues, often acting as a bridge between authorities and the private sector. Jones observes that while interventions like Australia’s new standards play an important role in setting high standards for tech companies, encryption and other privacy policies make it “very hard for law enforcement to get those data sets”.
International cooperation is crucial for successfully prosecuting commercial child sexual abuse content cases, and Jones says that in best practice examples, when a global chain is identified, the tech industry is brought in as part of the investigation.
“I’m seeing more of an involvement in the tech sector around supporting law enforcement. But that’s sometimes at odds with encryption and things like that,” Jones says.
“I think the tech industry has a duty of care to the communities that they serve. So I don’t think it’s good enough to say, ‘Oh, well, it’s encrypted. We don’t know what’s there’.”
Salter takes a more pessimistic view of the tech industry’s actions, arguing that most companies are moving away from, not towards, proactively monitoring the presence of child sexual abuse content.
“The emergence of AI has been something of a vortex of doom in the online child protection space,” Salter says.
Online child protection efforts were already overwhelmed, he says, before the tech sector “created a new threat to children” and “released [it] into the wild with no child protection safeguards”.
“And that’s very typical behaviour.”