Three-year-olds groomed online, Internet Watch Foundation warns


  • Author, Chris Vallance
  • Role, Technology reporter

Sexual predators are grooming children under six into performing “disturbing” acts of sexual abuse via phones or webcams, a charity has warned.

The Internet Watch Foundation (IWF) said it had discovered more than 2,000 remotely filmed child abuse images of three to six-year-olds online in 2023.

Responding to the report, Security Minister Tom Tugendhat urged tech firms to do more to prevent abuse.

He also called on parents “to speak to your children about their use of social media, because the platforms you presume safe may pose a risk”.

The IWF is a charity which helps detect and remove child sexual abuse imagery online.

New analysis published in its latest annual report revealed it had discovered 2,401 images of children aged three to six in so-called “self-generated” images on the open internet in 2023.

“Self-generated” images are where a child is persuaded, coerced or tricked by a predator into carrying out acts via a webcam or handheld device.

Nearly one in seven images were category A – the rating the charity assigns to those featuring the most serious abuse. Six in 10 images showed “sexual posing with nudity”, the report said.

Analysts who reviewed some of the images found many “were taken at home in children’s bedrooms or in family bathrooms when the child is alone or with another child such as a sibling or friend”.

Sometimes children were “completely unaware” they were being recorded.

IWF analysts were confident that someone else was directing what happened, as three to six-year-old children “are sexually naive and would not normally be aware of the possibility of this type of sexual behaviour”, the report said.

The report also notes that online child abuse material is becoming more “extreme”, with the charity reporting a 22% increase in category A imagery.

Artificial intelligence

Ofcom’s research suggested a third of five to seven-year-olds who browse social media are allowed to do so alone.

Ian Critchley, who leads on child protection for the National Police Chiefs’ Council, said protecting young children was not just the responsibility of parents and carers. “The biggest change” needed to come from the tech companies and online platforms, he said.

As part of its work implementing the new Online Safety Act, the communications watchdog has said it will consult on how automated tools, including AI, can be used to “proactively detect” illegal content – including child sexual abuse material.

But the IWF is calling for swift action and argues technology firms should not wait.

“The harms are happening to children now, and our response must be immediate,” its chief executive, Susie Hargreaves, said.

AI is already used by some big tech firms to help identify content that violates their terms, including child abuse material.

This is principally used to help identify material that is then reviewed by human moderators.

But experts warn AI alone is not a panacea. Professor Alan Woodward of the University of Surrey told the BBC: “AI may prove useful in helping with the scale of the data being analysed but at its current state of development it shouldn’t be considered a complete solution.”

