British Technology Firms and Child Safety Officials to Test AI's Capability to Generate Exploitation Images
Under recently introduced UK legislation, technology companies and child protection agencies will be given the authority to test whether AI systems are capable of generating child exploitation material.
Significant Rise in AI-Generated Harmful Content
The announcement coincided with findings from a child protection watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have risen sharply in the past year, from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the government will allow approved AI developers and child safety groups to examine AI models – the underlying technology behind chatbots and image generators – and verify that they have adequate safeguards in place to prevent them from creating images of child exploitation.
"Ultimately about stopping abuse before it occurs," declared the minister for AI and online safety, noting: "Experts, under strict conditions, can now detect the risk in AI models promptly."
Addressing Legal Challenges
The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such content even as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM had been uploaded online before addressing it.
The law aims to avert that problem by helping to stop the production of such material at source.
Legislative Structure
The government is introducing the changes as amendments to the Crime and Policing Bill, which also bans possessing, creating or distributing AI models designed to generate exploitative content.
Practical Consequences
Recently, the minister visited the London headquarters of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based exploitation. The mock call portrayed a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.
"When I hear about children facing extortion online, it is a cause of extreme frustration in me and justified concern amongst families," he stated.
Alarming Data
A leading internet monitoring foundation said that instances of AI-generated abuse material – where a single webpage may contain hundreds of files – had risen significantly so far this year.
Instances of the most severe content – the gravest category of abuse – increased from 2,621 images or videos to 3,086.
- Female children were overwhelmingly victimised, appearing in 94% of illegal AI depictions in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a crucial step in ensuring AI products are safe before they are launched," said the head of the online safety organisation.
"Artificial intelligence systems have enabled so victims can be targeted all over again with just a simple actions, providing offenders the ability to create possibly limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which further exploits victims' suffering, and makes children, especially girls, less safe both online and offline."
Counselling Session Details
Childline also released details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to rate weight, body and looks
- AI assistants discouraging children from consulting trusted adults about abuse
- Facing harassment online with AI-generated material
- Online blackmail using AI-manipulated images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including children using AI chatbots for support and AI therapy apps.