Apple: It takes 30 child abuse images to trigger a warning

It’s been a confusing few days since Apple first announced its intention to scan photos uploaded to iCloud for known images of Child Sexual Abuse Material (CSAM).

Privacy advocates have objected in strong terms to the move, which would see scanning performed on the device itself before photos are uploaded to iCloud. To confuse things further, Apple said in its FAQ [PDF] that this functionality would essentially be disabled if users chose not to use iCloud. The move, privacy campaigners fear, could lead to pressure from authoritarian governments for Apple to expand the functionality to help crack down on dissident activity.

In a bid to take the sting out of the controversy, Apple has issued some clarifications. As Reuters reports, Apple now says that its scanner will only hunt for CSAM images flagged by clearinghouses in multiple countries, and that it would be simple for researchers to check that the image identifiers are identical across devices, to prove that the system couldn’t be adapted to target individuals.
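As a rough illustration of the property Apple is describing (not its actual implementation, and with entirely made-up identifiers and list names), only hashes vouched for by clearinghouses in more than one jurisdiction would be shipped, and a published digest of that single list would let researchers confirm that every device carries the identical database:

```python
import hashlib

# Hypothetical hash lists supplied by child-safety clearinghouses in two
# different jurisdictions; the identifier strings here are placeholders.
list_country_a = {"hash_001", "hash_002", "hash_003"}
list_country_b = {"hash_002", "hash_003", "hash_004"}

# Only identifiers flagged in more than one jurisdiction make the shipped list.
shipped_list = sorted(list_country_a & list_country_b)

# A digest of the shipped list can be published so anyone can verify that
# the database on their device matches the one everyone else received.
digest = hashlib.sha256("\n".join(shipped_list).encode()).hexdigest()
print(shipped_list, digest)
```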

The company also added that it would take 30 matched CSAM images before the system prompts Apple for a human review and before any official report can be filed. This, in part, explains why Apple felt able to promise that the chance of a false positive is less than one in a trillion per year.
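To get a feel for why a 30-match threshold suppresses false alarms so aggressively, here is a back-of-the-envelope sketch. Both inputs, the number of photos an account uploads in a year and the per-image false-match rate, are invented illustrative values, not figures Apple has published; the point is only that requiring many independent matches drives the account-level probability down very fast.

```python
import math

def prob_at_least_k_matches(n_photos: int, per_image_fp: float, k: int) -> float:
    """P(at least k false matches) under a Binomial(n_photos, per_image_fp) model,
    summed in log space so the tiny terms don't underflow mid-calculation."""
    total = 0.0
    for i in range(k, n_photos + 1):
        log_term = (
            math.lgamma(n_photos + 1) - math.lgamma(i + 1) - math.lgamma(n_photos - i + 1)
            + i * math.log(per_image_fp) + (n_photos - i) * math.log1p(-per_image_fp)
        )
        term = math.exp(log_term)
        total += term
        if term < total * 1e-18:  # remaining terms are negligible
            break
    return total

# Illustrative inputs only: 20,000 photos uploaded in a year, and a one-in-a-million
# chance that any single photo falsely matches a database hash.
print(prob_at_least_k_matches(20_000, 1e-6, 30))  # roughly 4e-84 with these inputs
```

Under assumptions like these, the chance of an account crossing the threshold by accident comes out astronomically smaller than one in a trillion, which is the kind of headroom Apple’s figure implies.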

Apple refused to say whether these were adjustments made in the face of criticism or specifics that had always been in place, though it did add that, as the policy is still in development, changes should be expected.

Nonetheless, privacy advocates believe they’re making a difference. “Even if they don't ultimately nix the plan, we're forcing them to do the work they should've done by consulting us all along,” tweeted Stanford University surveillance researcher Riana Pfefferkorn. “Keep pushing.”

Most recently, Apple VP of software engineering Craig Federighi told the Wall Street Journal that Apple’s new policies are “much more private than anything that's been done in this area before.”

“We, who consider ourselves absolutely leading on privacy, see what we are doing here as an advancement of the state of the art in privacy, as enabling a more private world,” he said. Adding that the system had been developed “in the most privacy-protecting way we can imagine and in the most auditable and verifiable way possible,” he painted the company’s solution as preferable to its cloud storage rivals, which look at and analyze “every single photo.”

Federighi argued that critics don’t fully understand Apple’s implementation, but acknowledged that the company is partly to blame for not explaining things clearly. Announcing CSAM scanning at the same time as its protections for minors using iMessage meant the two were erroneously conflated, he conceded.

"We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing,” he said.

The “we” in that sentence may imply more uniform support within the company than is actually present. On Friday, Reuters revealed that the plan had proved just as divisive inside Apple as out, with more than 800 messages about it appearing on the company’s internal Slack.

Alan Martin

Freelance contributor Alan has been writing about tech for over a decade, covering phones, drones and everything in between. Previously Deputy Editor of tech site Alphr, his words are found all over the web and in the occasional magazine too. When not weighing up the pros and cons of the latest smartwatch, you'll probably find him tackling his ever-growing games backlog. Or, more likely, playing Spelunky for the millionth time.