British Technology Firms and Child Safety Agencies to Examine AI's Ability to Create Exploitation Content

Tech firms and child safety agencies will be granted permission to evaluate whether AI systems can produce child exploitation material under new UK laws.

Significant Rise in AI-Generated Illegal Content

The announcement came alongside findings from a safety watchdog showing that reports of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the authorities will permit designated AI companies and child safety groups to inspect AI systems – the foundational technology for conversational AI and visual AI tools – and verify they have adequate protective measures to prevent them from creating images of child exploitation.

The changes are "ultimately about stopping abuse before it happens," stated Kanishka Narayan, noting: "Experts, under rigorous conditions, can now identify the risk in AI models early."

Tackling Legal Obstacles

The changes have been implemented because it is illegal to produce and own CSAM, meaning that AI creators and others cannot create such images as part of a testing process. Previously, officials could not act until AI-generated CSAM had already been published online.

This legislation is designed to prevent that issue by helping to halt the creation of those materials at their origin.

Legal Framework

The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a ban on owning, producing or sharing AI systems developed to create child sexual abuse material.

Practical Consequences

This week, the official visited the London base of a children's helpline and heard a mock-up call to counsellors featuring an account of AI-based abuse. The interaction depicted an adolescent requesting help after being blackmailed using a sexualised AI-generated image of themselves.

"When I learn about children facing blackmail online, it is a cause of extreme anger in me and rightful anger amongst families," he stated.

Alarming Statistics

A prominent online safety foundation reported that cases of AI-generated exploitation content – such as online pages that may contain multiple images – had significantly increased so far this year.

Cases of category A content – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
  • Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a vital step to guarantee AI products are secure before they are released," commented the head of the internet monitoring organization.

"AI tools have made it so victims can be victimised all over again with just a few simple actions, providing criminals the ability to create potentially endless quantities of advanced, lifelike child sexual abuse material," she continued. "Material which additionally commodifies survivors' trauma, and makes young people, especially female children, more vulnerable both online and offline."

Support Session Data

Childline also released information about support sessions where AI was mentioned. AI-related harms raised in the conversations include:

  • Using AI to rate weight, physique and appearance
  • Chatbots discouraging young people from talking to safe adults about abuse
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-faked pictures

Between April and September this year, Childline delivered 367 support sessions where AI, conversational AI and associated topics were discussed, significantly more than in the same period last year.

Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Beverly Bowen

A poet and storyteller weaving emotions into words, inspired by nature and human experiences.