Paedophiles create nude AI images of children to extort them, says charity

Internet Watch Foundation has found a manual on the dark web encouraging criminals to use software tools that remove clothing from images

Tue 23 Apr 2024 01.01 CEST

Paedophiles are being urged to use artificial intelligence to create nude images of children to extort more extreme material from them, according to a child abuse charity.

The Internet Watch Foundation (IWF) said a manual found on the dark web contained a section encouraging criminals to use “nudifying” tools to remove clothing from underwear shots sent by a child. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.

“This is the first evidence we have seen that perpetrators are advising and encouraging each other to use AI technology for these ends,” said the IWF.

The charity, which finds and removes child sexual abuse material online, warned last year of a rise in extortion cases where victims are manipulated into sending graphic images of themselves and are then threatened with the release of those images unless they hand over money. It also flagged the first examples of AI being used to create “astoundingly realistic” abuse content.

The anonymous author of the online manual, which runs to nearly 200 pages, boasts about having “successfully blackmailed” 13-year-old girls into sending nude imagery online. The IWF said the document had been passed to the UK’s National Crime Agency.

Last month the Guardian revealed that the Labour party was considering a ban on nudification tools that allow users to create images of people without their clothes on.

The IWF has also said 2023 was “the most extreme year on record”. Its annual report said the organisation found more than 275,000 webpages containing child sexual abuse last year, the highest number it has ever recorded, along with a record amount of “category A” material, which covers the most severe imagery, including rape, sadism and bestiality. More than 62,000 pages contained category A content, up from 51,000 the previous year.

The IWF found 2,401 images of self-generated child sexual abuse material – where victims are manipulated or threatened into recording abuse of themselves – taken by children aged between three and six. Analysts said they had seen abuse taking place in domestic settings including bedrooms and kitchens.

Susie Hargreaves, the chief executive of the IWF, said opportunistic criminals trying to manipulate children were “not a distant threat”. She said: “If children under six are being targeted like this, we need to be having age-appropriate conversations now to make sure they know how to spot the dangers.”

Hargreaves added that the Online Safety Act, which became law last year and imposes a duty of care on social media companies to protect children, “needs to work”.

Tom Tugendhat, the security minister, said parents should talk to their children about using social media. “The platforms you presume safe may pose a risk,” he said, adding that tech companies should introduce stronger safeguards to prevent abuse.

According to research published last week by the communications regulator, Ofcom, a quarter of three- to four-year-olds own a mobile phone and half of under-13s are on social media. The government is preparing to launch a consultation in the coming weeks that will include proposals to ban the sale of smartphones to under-16s and raise the minimum age for social media sites from 13 to as high as 16.


