Users claim that the Child Sexual Abuse Material (CSAM) detection system compromises their privacy and enables censorship, surveillance and oppression.
On August 6, 2021, Apple announced the launch of a new feature called the Child Sexual Abuse Material (CSAM) detection system. While the feature is a step in the right direction, users are not happy with what it entails. It will allow Apple to scan the photos a user uploads to iCloud and flag any that match known child sexual abuse images in a hash database supplied by the National Center for Missing and Exploited Children (NCMEC). A separate feature will scan images sent and received in Messages on children’s accounts for sexually explicit content, working on the device itself since iMessage is end-to-end encrypted. Once at least 30 of a single user’s photos have been flagged, Apple will review the matches and report the account to NCMEC for further action.
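At a high level, the detection step amounts to comparing fingerprints of a user’s photos against a database of known-CSAM fingerprints and only escalating an account once the number of matches crosses the roughly 30-match threshold. The sketch below is a deliberately simplified illustration of that idea in Python; the function names are hypothetical, and Apple’s actual system relies on an on-device perceptual hash (NeuralHash) and encrypted safety vouchers rather than the plain exact-match comparison shown here.

```python
import hashlib

# Simplified, hypothetical sketch of threshold-based matching against a
# database of known-image fingerprints. Apple's real system uses a
# perceptual hash (NeuralHash) and encrypted "safety vouchers" rather than
# the exact SHA-256 comparison used here for illustration.

MATCH_THRESHOLD = 30  # Apple's stated threshold before human review


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual image hash."""
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(user_photos: list[bytes], known_hashes: set[str]) -> int:
    """Count how many of a user's photos match the known-hash database."""
    return sum(1 for photo in user_photos if fingerprint(photo) in known_hashes)


def should_escalate(user_photos: list[bytes], known_hashes: set[str]) -> bool:
    """An account is escalated for human review only after the threshold is crossed."""
    return count_matches(user_photos, known_hashes) >= MATCH_THRESHOLD
```

In the announced design, this matching happens on the device before upload, and the threshold exists to keep the chance of a false alarm extremely low.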
Two Princeton academics, Jonathan Mayer and Anunay Kulshrestha, wrote the only peer-reviewed publication on how one can build a system like Apple’s CSAM detection, and they concluded that the technology was dangerous. According to them, with a system like this, people could create “false positives and malicious users could game the system to subject innocent users to scrutiny.” They warned against putting their own system design into use. Though the feature has positive intentions, its approach of accessing users’ messages and tracking their images is worrying iPhone users. WhatsApp CEO Will Cathcart also expressed his displeasure with the move. In a tweet, he called it a “wrong approach and a setback for people’s privacy all over the world.”
In light of this, over 90 policy groups from around the world signed an open letter asking Apple to drop this plan. The letter noted, “Though these capabilities are intended to protect children and to reduce the spread of child sexual abuse material (CSAM), we are concerned that they will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children.”
Additionally, people are concerned that by creating such a system, Apple has opened the door for governments wanting to regulate citizens under the pretext of keeping them safe. Eva Galperin, the Cybersecurity Director at the Electronic Frontier Foundation, told The New York Times, “Once you build a system that can be aimed at any database, you will be asked to aim the system at a database.” This concern is well-founded given that Apple agreed to shift the personal data of its Chinese users to the servers of a state-owned firm at the government’s request. Still, Apple released a document saying that they will not succumb to any government pressure to abuse this system. They said, “Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.”
Apple’s Senior Vice President of Software Engineering, Craig Federighi, has chalked these concerns up to “misunderstandings”. In an interview, he explained, “It’s really clear a lot of messages got jumbled pretty badly in terms of how things were understood. We wish that this would’ve come out a little more clearly for everyone because we feel very positive and strongly about what we’re doing.”
Despite widespread requests not to go ahead with the plan, Apple maintains that their system is “much more private than anything that’s been done in this area before.” Apple will launch the CSAM detection system later this year.
Header Image by Unsplash