Apple accused of vastly undercounting child sex abuse images — here’s Apple’s response
UK watchdog highlights 'concerning discrepancy' between abuse on Apple's platforms and what the company reports
Apple is not doing enough to protect its most vulnerable users, child safety experts allege, and is underreporting the prevalence of child sexual abuse material (CSAM) exchanged and stored on its services like iCloud, iMessage and FaceTime.
The National Society for the Prevention of Cruelty to Children (NSPCC), a child protection charity in the UK, says data it gathered through freedom of information requests implicates Apple in hundreds of CSAM incidents in England and Wales alone, more than the company officially reported worldwide in a year.
The NSPCC found that "Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales." But across its platforms worldwide in 2023, Apple only reported 267 instances of CSAM to the National Center for Missing & Exploited Children (NCMEC), The Guardian reports.
That's a steep drop-off compared to other tech giants like Google and Meta, which reported more than 1.47 million and 30.6 million cases last year, respectively, according to the NCMEC's annual report. Other platforms that reported more potential CSAM cases than Apple in 2023 include Discord (339,412), Pinterest (52,356) and 4chan (1,657). For reference, every tech company based in the U.S. is required to pass along any possible CSAM cases detected on its platforms to the NCMEC, which then forwards those cases to relevant law enforcement agencies worldwide.
While Apple services such as iMessage, FaceTime and iCloud all feature end-to-end encryption, which means only the sender and recipient of a message can see its contents, that doesn't fully explain why Apple is such an outlier. As the NSPCC points out, WhatsApp also uses end-to-end encryption, and it reported nearly 1.4 million cases of suspected CSAM to NCMEC in 2023.
Apple 'clearly behind' competitors in cracking down on CSAM
The NSPCC’s head of child safety online policy Richard Collard told The Guardian there is "a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities."
“Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the roll out of the Online Safety Act in the UK," he continued. Apple declined The Guardian's request for comment on the NSPCC's report.
The accusation comes after years of controversy over Apple's plans to improve monitoring on its platforms to uncover child sexual abuse material. After announcing its child safety toolset — which would have scanned iOS devices for images of child abuse — in August 2021, the company paused the effort just one month later, as digital rights groups raised concerns that its surveillance capabilities could threaten the privacy and security of iCloud users around the world. In 2022, Apple announced it was killing the project.
Apple responds to controversy
When asked about the NSPCC's report, Apple pointed us to a letter to child safety group Heat Initiative posted by Wired. After Apple announced plans to kill its iCloud photo-scanning tool, Heat Initiative organized a campaign to push Apple to crack down on CSAM and offer more tools for users to report it, prompting Apple to issue a rare response. In that response, Apple said it had shifted away from the scanning feature to focus on a set of on-device tools that connect users more directly with local resources and law enforcement.
“Child sexual abuse material is abhorrent and we are committed to breaking the chain of coercion and influence that makes children susceptible to it,” Apple's director of user privacy and child safety Erik Neuenschwander wrote in the company's response to Heat Initiative (which you can read in full here). "We’re proud of the contributions we have made so far and intend to continue working collaboratively with child safety organizations, technologists, and governments on enduring solutions that help protect the most vulnerable members of our society.
"Our goal has been and always will be to create technology that empowers and enriches people’s lives, while helping them stay safe. With respect to helping kids stay safe, we have made meaningful contributions toward this goal by developing a number of innovative technologies. We have deepened our commitment to the Communication Safety feature that we first made available in December 2021. Communication Safety is designed to intervene and offer helpful resources to children when they receive or attempt to send messages that contain nudity. The goal is to disrupt grooming of children by making it harder for predators to normalize this behavior.
"In our latest releases, we’ve expanded the feature to more easily and more broadly protect children. First, the feature is on by default for all child accounts. Second, it is expanded to also cover video content in addition to still images. And we have expanded these protections in more areas across the system including AirDrop, the Photo picker, FaceTime messages, and Contact Posters in the Phone app. In addition, a new Sensitive Content Warning feature helps all users avoid seeing unwanted nude images and videos when receiving them in Messages, an AirDrop, a FaceTime video message, and the Phone app when receiving a Contact Poster. To expand these protections beyond our built-in capabilities, we have also made them available to third parties. Developers of communication apps are actively incorporating this advanced technology into their products. These features all use privacy-preserving technology — all image and video processing occurs on device, meaning Apple does not get access to the content. We intend to continue investing in these kinds of innovative technologies because we believe it’s the right thing to do.
"...We decided to not proceed with the proposal for a hybrid client-server approach to CSAM detection for iCloud Photos from a few years ago, for a number of good reasons. After having consulted extensively with child safety advocates, human rights organizations, privacy and security technologists, and academics, and having considered scanning technology from virtually every angle, we concluded it was not practically possible to implement without ultimately imperiling the security and privacy of our users.
"Scanning of personal data in the cloud is regularly used by companies to monetize the information of their users. While some companies have justified those practices, we’ve chosen a very different path — one that prioritizes the security and privacy of our users. Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users. Threats to user data are undeniably growing — globally the total number of data breaches more than tripled between 2013 and 2021, exposing 1.1 billion personal records in 2021 alone. As threats become increasingly sophisticated, we are committed to providing our users with the best data security in the world, and we constantly identify and mitigate emerging threats to users’ personal data, on device and in the cloud. Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit.
"It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types (such as images, videos, text, or audio) and content categories. How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution? Tools of mass surveillance have widespread negative implications for freedom of speech and, by extension, democracy as a whole. Also, designing this technology for one government could require applications for other countries across new data types.
"Scanning systems are also not foolproof and there is documented evidence from other platforms that innocent parties have been swept into dystopian dragnets that have made them victims when they have done nothing more than share perfectly normal and appropriate pictures of their babies.
"We firmly believe that there is much good that we can do when we work together and collaboratively. As we have done in the past, we would be happy to meet with you to continue our conversation about these important issues and how to balance the different equities we have outlined above. We remain interested, for instance, in working with the child safety community on efforts like finding ways we can help streamline user reports to law enforcement, growing the adoption of child safety tools, and developing new shared resources between companies to fight grooming and exploitation."
Alyse Stanley is a news editor at Tom’s Guide overseeing weekend coverage and writing about the latest in tech, gaming and entertainment. Prior to joining Tom’s Guide, Alyse worked as an editor for the Washington Post’s sunsetted video game section, Launcher. She previously led Gizmodo’s weekend news desk, where she covered breaking tech news — everything from the latest spec rumors and gadget launches to social media policy and cybersecurity threats. She has also written game reviews and features as a freelance reporter for outlets like Polygon, Unwinnable, and Rock, Paper, Shotgun. She’s a big fan of horror movies, cartoons, and miniature painting.