UK Technology Companies and Child Safety Agencies to Test AI's Ability to Generate Exploitation Images

Technology companies and child safety organizations will be granted permission to assess whether AI tools can generate child exploitation material under recently introduced UK legislation.

Substantial Rise in AI-Generated Harmful Content

The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the authorities will permit designated AI companies and child protection organizations to examine AI models – the foundational technology behind chatbots and image generators – to ensure they have sufficient protective measures to prevent them from creating depictions of child exploitation.

The measures are "ultimately about preventing exploitation before it occurs," said the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now detect the danger in AI systems early."

Addressing Legal Challenges

The changes address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and other parties could not create such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

The law is designed to avert that issue by enabling the creation of such material to be stopped at source.

Legal Structure

The changes are being added by the authorities as revisions to the crime and policing bill, which is also implementing a ban on owning, producing or sharing AI models developed to generate child sexual abuse material.

Real-World Impact

The minister recently visited the London headquarters of a children's helpline, where he listened to a mock-up call to advisors involving a report of AI-based exploitation. The scenario portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.

"When I hear about young people facing blackmail online, it fills me with intense anger, and it causes justified concern amongst families," he said.

Alarming Data

A prominent internet monitoring organization reported that instances of AI-generated exploitation material – such as webpages that may contain multiple files – had significantly increased so far this year.

Cases of category A material – the gravest form of abuse – increased from 2,621 visual files to 3,086.

  • Girls were predominantly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a vital step to guarantee AI products are safe before they are released," commented the head of the online safety foundation.

"Artificial intelligence systems have made it possible for survivors to be victimised all over again with just a few clicks, giving offenders the capability to create potentially endless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further exploits victims' suffering, and renders children, especially girls, less safe both online and offline."

Support Session Information

The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in those conversations include:

  • Using AI to rate body size and appearance
  • Chatbots discouraging young people from talking to trusted adults about abuse
  • Being bullied online with AI-generated material
  • Digital extortion using AI-manipulated images

Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.

Half of the AI references in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

James Hernandez

A seasoned esports analyst and competitive gamer with over a decade of experience in strategy development and community coaching.