British Tech Companies and Child Protection Officials to Test AI's Ability to Create Exploitation Content

Technology companies and child safety organizations will be granted authority to assess whether AI systems can generate child abuse material under new British laws.

Substantial Increase in AI-Generated Illegal Content

The announcement coincided with data from a child protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the government will permit approved AI companies and child protection groups to examine AI models – the underlying technology for conversational AI and image generators – and ensure they have sufficient protective measures to stop them from creating depictions of child sexual abuse.

"This is ultimately about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the risk in AI systems early."

Addressing Legal Obstacles

The changes have been introduced because creating and possessing CSAM is against the law, meaning that AI developers and other parties cannot generate such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

This legislation aims to prevent that problem by enabling authorized testers to stop the creation of those materials at their origin.

Legislative Structure

The government is introducing the changes as amendments to the criminal justice legislation, which also implements a ban on owning, producing or distributing AI models developed to create child sexual abuse material.

Practical Consequences

This week, the official toured the London base of a children's helpline and listened to a mock-up call to advisors featuring an account of AI-based exploitation. The interaction depicted a teenager seeking help after being blackmailed with an explicit deepfake of themselves, constructed using AI.

"When I learn about children experiencing extortion online, it causes intense frustration in me and rightful concern amongst parents," he said.

Concerning Statistics

A prominent internet monitoring organization reported that instances of AI-generated exploitation material – such as online pages that may contain multiple files – had significantly increased so far this year.

Instances of the most severe content – the gravest form of abuse – increased from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI images in 2025
  • Depictions of infants to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a crucial step to ensure AI tools are safe before they are released," commented the head of the online safety organization.

"AI tools have made it possible for survivors to be targeted repeatedly with just a few simple actions, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Content which additionally commodifies victims' trauma, and renders children, especially girls, less safe on and offline."

Support Interaction Data

Childline also released details of support sessions where AI has been mentioned. AI-related risks mentioned in the conversations include:

  • Using AI to evaluate body size, physique and appearance
  • AI assistants dissuading young people from talking to trusted guardians about abuse
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and associated terms were discussed, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Marilyn White