UK Tech Companies and Child Protection Agencies to Examine AI's Capability to Generate Exploitation Images
Tech firms and child protection agencies will be given legal permission to test whether artificial intelligence systems can generate child abuse material under new British legislation.
Significant Increase in AI-Generated Illegal Content
The announcement came as a safety watchdog reported that instances of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, designated AI developers and child safety organisations will be allowed to examine AI models – the technology underlying chatbots and image-generation tools – and verify that they have sufficient safeguards in place to prevent them from creating depictions of child sexual abuse.
"Fundamentally, this is about stopping abuse before it happens," said the minister for AI and online safety, adding: "Under rigorous protocols, experts can now identify risks in AI systems promptly."
Tackling Regulatory Challenges
The amendments were introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was published online before they could act against it.
The legislation aims to avert that problem by helping to stop the creation of such material at its source.
Legislative Framework
The amendments will be made to the crime and policing bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Real-World Impact
This week, the minister visited the London base of a children's helpline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The scenario portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about children facing blackmail online, it fills me with extreme anger, and parents are rightly concerned," he said.
Concerning Statistics
A leading online safety foundation reported that instances of AI-generated exploitation content – counted as webpages, each of which may contain multiple files – had more than doubled so far this year.
Instances of category A material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly the victims, accounting for 94% of illegal AI depictions in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to guarantee AI products are secure before they are released," commented the head of the online safety foundation.
"AI tools have made it possible for victims to be victimised again and again with just a few clicks, giving offenders the ability to produce potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Content which further commodifies victims' trauma and makes young people, particularly girls, more vulnerable both online and offline."
Support Interaction Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions included:
- Employing AI to evaluate body size, physique and appearance
- Chatbots discouraging young people from consulting trusted guardians about abuse
- Facing harassment online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.