British Technology Firms and Child Protection Officials to Examine AI's Capability to Generate Exploitation Content

Tech firms and child safety agencies will be granted authority to assess whether artificial intelligence systems can generate child abuse images under new UK laws.

Significant Increase in AI-Generated Harmful Material

The announcement came as a protection monitoring body revealed that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the government will permit designated AI companies and child protection groups to inspect AI models – the underlying systems for chatbots and image generators – and verify they have adequate safeguards to prevent them from producing images of child exploitation.

"This is ultimately about preventing abuse before it happens," declared Kanishka Narayan, adding: "Specialists, under strict conditions, can now identify the danger in AI models early."

Tackling Regulatory Obstacles

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI creators and others cannot generate such content even as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before they could act.

This legislation is designed to avert that issue by helping to halt the creation of such material at source.

Legislative Framework

The amendments are being introduced by the government as revisions to the crime and policing bill, which is also establishing a prohibition on possessing, creating or sharing AI systems developed to create exploitative content.

Practical Impact

Recently, the minister toured the London headquarters of a children's helpline and listened in on a simulated conversation with advisors involving a report of AI-based exploitation. The call portrayed a teenager seeking help after being extorted with an explicit AI-generated image of himself.

"When I learn about children facing blackmail online, it is a source of extreme frustration to me and of justified concern amongst parents," he stated.

Concerning Statistics

A leading online safety organization reported that instances of AI-generated exploitation material – each instance potentially an online page containing numerous images – have significantly increased so far this year.

Cases of category A material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

  • Girls were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a vital step to ensure AI products are safe before they are launched," stated the head of the online safety foundation.

"AI tools have made it so that survivors can be victimized repeatedly with just a few clicks, giving offenders the capability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she added. "This material further commodifies victims' trauma and makes children, especially girls, less safe both on and offline."

Support Session Data

The children's helpline also released data on support sessions in which AI was referenced. AI-related harms mentioned in the sessions include:

  • Using AI to evaluate weight, body and looks
  • AI assistants dissuading children from talking to trusted adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline delivered 367 support sessions in which AI, conversational AI and associated topics were mentioned, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Carrie Hunter