Creators of AI-generated images of child sexual abuse will face up to five years in prison in a new government crackdown.
Home Secretary Yvette Cooper has announced the UK will be the first country in the world to make it illegal to own artificial intelligence tools designed to make images of child sexual abuse.
The new offence will be punishable by up to five years in prison.
Under measures the government will bring forward in the Crime and Policing Bill, anyone found in possession of AI “paedophile manuals” could be jailed for up to three years.
“Nudeifying” real-life images of children and stitching their faces on to existing images of abuse are among the ways AI is being used by abusers, the Home Office said.
Fake images are also being used to blackmail children and force them to livestream further abuse.
Ministers believe viewing such abuse online can lead offenders to carry out abuse in person.
Ms Cooper said: “We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person.
“This Government will not hesitate to act to ensure the safety of children online by ensuring our laws keep pace with the latest threats.”
The Bill will also introduce a specific offence for paedophiles who run websites designed to share child sexual abuse content, which could carry a 10-year prison sentence.
The Border Force will be given new powers to prevent the spread of child sexual abuse images from abroad, including by allowing officers to require individuals suspected of posing a risk to children to hand over their phones for inspection.
The Home Secretary added: “These four new laws are bold measures designed to keep our children safe online as technologies evolve.”
The law reforms come after warnings from the Internet Watch Foundation (IWF) that growing numbers of sexual abuse images of children are being created.
The charity’s latest data shows reports of AI-generated child sexual abuse images have risen by 380%, with 245 confirmed reports in 2024, compared with 51 in 2023.
Each of these reports can contain thousands of images.
Some of the AI-generated content is so realistic that it can be difficult to tell the difference between real abuse and fake imagery, the charity said.
Derek Ray-Hill, interim chief executive of the IWF, said the steps “will have a concrete impact on online safety”.
He added: “The frightening speed with which AI imagery has become indistinguishable from photographic abuse has shown the need for legislation to keep pace with new technologies.
“Children who have suffered sexual abuse in the past are now being made victims all over again, with images of their abuse being commodified to train AI models.
“It is a nightmare scenario and any child can now be made a victim, with life-like images of them being sexually abused obtainable with only a few prompts, and a few clicks.”