British Technology Firms and Child Protection Officials to Test AI's Capability to Create Abuse Content

Tech firms and child safety organizations will receive authority to assess whether AI systems can generate child abuse material under new UK legislation.

Significant Increase in AI-Generated Illegal Material

The announcement coincided with findings from a safety watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the government will allow designated AI developers and child protection organizations to examine AI systems – the foundational technology for conversational AI and image generators – and verify they have sufficient safeguards to prevent them from producing depictions of child sexual abuse.

"This is fundamentally about preventing exploitation before it happens," the minister for AI and online safety said, adding: "Specialists, under rigorous protocols, can now identify the danger in AI systems promptly."

Tackling Regulatory Challenges

The changes have been implemented because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot create such images as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before addressing it.

This legislation is designed to prevent that problem by enabling experts to halt the creation of those materials at source.

Legal Framework

The government is introducing the changes as amendments to criminal justice legislation, which will also ban possessing, creating or sharing AI systems developed to create exploitative content.

Practical Impact

Recently, the minister toured the London base of a children's helpline and listened to a mock call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed using a sexualised deepfake of themselves, created with AI.

"When I hear about children facing blackmail online, it causes me extreme frustration and provokes justified anger amongst families," he said.

Concerning Data

A leading online safety organization stated that cases of AI-generated abuse material – such as online pages that may include multiple images – had significantly increased so far this year.

Cases involving the most severe content – the most serious form of abuse – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, making up 94% of illegal AI depictions in 2025
  • Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a vital step to guarantee AI products are safe before they are released," stated the head of the online safety organization.

"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few clicks, giving offenders the capability to make possibly limitless quantities of advanced, lifelike exploitative content," she continued. "Material which additionally commodifies victims' suffering, and makes young people, particularly female children, less safe both online and offline."

Support Interaction Information

Childline also released details of counselling interactions where AI has been referenced. AI-related harms mentioned in the conversations include:

  • Using AI to evaluate body size, physique and appearance
  • AI assistants discouraging children from consulting trusted guardians about harm
  • Being bullied online with AI-generated content
  • Digital extortion using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 counselling sessions where AI, chatbots and associated topics were mentioned, significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 interactions were connected with mental health and wellbeing, encompassing using chatbots for support and AI therapy apps.

Vincent Jackson
