Civil liberty organisations slam Apple child safety tech


Apple’s recently announced client-side scanning of images on users’ devices and in its iCloud storage, intended to catch explicit content and child abuse material, is being labelled “dangerous”.

While lauding the goal of protecting minors as important and worthy, the Centre for Democracy and Technology (CDT), a civil liberties organisation in the United States, said it is deeply concerned that Apple’s changes create new risks to children and to all users.

“Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope-creep not only in the US, but around the world,” says Greg Nojeim, of CDT’s Security and Surveillance Project.

“Apple should abandon these changes and restore its users’ faith in the security and integrity of their data on Apple devices and services,” Nojeim said.

To be rolled out first in the United States, the technology has three main parts.

Apple will add its NeuralHash technology to iOS 15, iPadOS 15, watchOS 8 and macOS Monterey. NeuralHash analyses images and generates unique numbers, known as hashes, for them.

This process takes place on users’ devices, with image hashes being matched against a set of hashes of known child sexual abuse material (CSAM) without revealing the result.
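Apple has not published NeuralHash’s internals, so the sketch below is only an illustrative stand-in: a toy average-hash over a small greyscale image plays the role of the real neural-network-based perceptual hash, and a plain set lookup plays the role of Apple’s blinded on-device matching.

```python
# Illustrative sketch only: NeuralHash itself is a proprietary, neural-network-based
# perceptual hash. This toy "average hash" merely shows the general shape of the
# pipeline: image -> compact hash -> membership test against a set of known hashes.

def average_hash(pixels_8x8):
    """Compute a 64-bit perceptual-style hash from an 8x8 greyscale image (64 ints)."""
    mean = sum(pixels_8x8) / len(pixels_8x8)
    bits = 0
    for p in pixels_8x8:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def matches_known_set(image_hash, known_hashes):
    """On-device membership test (exact match here; real perceptual matching
    tolerates small differences and hides the result from the device)."""
    return image_hash in known_hashes

# Hypothetical data for demonstration.
photo = [10 * (i % 13) for i in range(64)]            # stand-in for an 8x8 greyscale image
known = {average_hash(photo)}                         # pretend this hash is on the known list
print(matches_known_set(average_hash(photo), known))  # True
```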

Using a multiparty cryptographic technique called private set intersection, Apple says it can determine whether a hash matches known CSAM without learning anything about image hashes that do not match.
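Apple’s actual protocol is considerably more elaborate (it also conceals which and how many images matched until a threshold is reached), but a toy Diffie-Hellman-style private set intersection conveys the core idea that matches can be found without either party revealing the rest of its set; the modulus, hashing and set contents below are illustrative stand-ins, not production parameters.

```python
import hashlib
import secrets

# Toy Diffie-Hellman-style private set intersection: each side blinds its own
# elements with a secret exponent, the other side blinds them again, and because
# exponentiation commutes the doubly blinded values are equal exactly when the
# original elements were. Neither side sees the other's raw set.

P = 2**127 - 1  # toy group modulus (a Mersenne prime); not production-grade

def to_group(element: str) -> int:
    return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

def blind(values, key):
    return {pow(v, key, P) for v in values}

client_key = secrets.randbelow(P - 2) + 1
server_key = secrets.randbelow(P - 2) + 1

client_set = {to_group(x) for x in ["photo_a", "photo_b"]}   # device-side hashes
server_set = {to_group(x) for x in ["photo_b", "photo_c"]}   # known-hash database

client_double = blind(blind(client_set, client_key), server_key)
server_double = blind(blind(server_set, server_key), client_key)

# Only the size/content of the intersection is learned, nothing about the rest.
print(len(client_double & server_double))  # 1
```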

Cryptographic safety vouchers that encode the match result, the image’s NeuralHash and a visual derivative are created on-device.

Once a specific threshold of safety vouchers is exceeded, Apple will manually review their content to verify that there is a match.
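The threshold property Apple describes rests on threshold secret sharing: each matching image contributes one share of a per-account key, and the voucher contents only become readable once enough shares have accumulated. The textbook Shamir secret-sharing sketch below illustrates that property; the field size, threshold and stand-in “account key” are invented for the example.

```python
import random

# Sketch of the threshold idea behind the safety vouchers, using textbook Shamir
# secret sharing: any `threshold` shares reconstruct the key, fewer reveal nothing.

PRIME = 2**61 - 1  # prime field modulus for the toy example

def make_shares(secret, threshold, count):
    """Split `secret` into `count` shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, count + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from enough shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

account_key = 123456789                 # stands in for the per-account decryption key
shares = make_shares(account_key, threshold=5, count=8)

print(reconstruct(shares[:5]) == account_key)  # True: threshold reached
print(reconstruct(shares[:4]) == account_key)  # almost certainly False: below threshold
```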


“The threshold is set to provide an extremely high level of accuracy that accounts are not incorrectly flagged,” Apple said in its technical paper describing the child safety technologies.

If there is a match, the user’s account will be disabled and a report sent to the US National Centre for Missing and Exploited Children (NCMEC), which collaborates with law enforcement agencies.

Apple did not say how the system will handle newly generated CSAM that does not have existing hashes, or whether NeuralHash will work on older devices as well as newer ones.

As part of Screen Time, Apple’s parental controls system, on-device machine learning will be used to identify sensitive content in the end-to-end encrypted Messages app.

While parents who have enabled the Screen Time feature for their children may be notified about sensitive content, Apple won’t be able to read such communications.
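Apple has not published the classifier or the surrounding code, so the sketch below only illustrates, with hypothetical names, the flow described publicly: the model runs entirely on-device, the opt-in parental notification carries nothing more than a flag, and the image and message content never leave the device.

```python
# Hypothetical sketch of the publicly described Messages flow; none of these names
# are Apple's API. A local classifier flags sensitive images, and only that flag
# (never the image bytes or decrypted text) feeds the opt-in parental notification.

from dataclasses import dataclass

@dataclass
class IncomingImage:
    pixels: bytes          # stays on the device
    sender: str

def local_sensitivity_classifier(image: IncomingImage) -> bool:
    """Placeholder for the on-device ML model; returns True if content looks sensitive."""
    return len(image.pixels) > 0 and image.pixels[0] == 0xFF  # arbitrary toy rule

def handle_incoming(image: IncomingImage, screen_time_enabled: bool) -> dict:
    flagged = local_sensitivity_classifier(image)   # runs entirely on-device
    # Note what is *not* included: the image bytes or decrypted message content
    # never leave the device, so neither Apple nor the parent receives them.
    return {"notify_parent": flagged and screen_time_enabled}

print(handle_incoming(IncomingImage(pixels=b"\xff\x00", sender="contact"),
                      screen_time_enabled=True))   # {'notify_parent': True}
```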

The Electronic Frontier Foundation (EFF), a civil liberties organisation, said this change breaks end-to-end encryption for Messages and amounts to a privacy-busting backdoor on users’ devices.

“… This system will give parents who do not have the best interests of their children in mind one more way to monitor and control them, limiting the internet’s potential for expanding the world of those whose lives would otherwise be restricted,” EFF said.


The Siri personal assistant and the on-device Search function will provide additional information when parents or children encounter unsafe situations, and will intervene when users search for CSAM-related topics.
