Apple’s Child Sexual Abuse Detection System Stirs Privacy Debate


Critics argue that the Child Sexual Abuse Material (CSAM) detection system compromises user privacy and opens the door to censorship, surveillance and oppression.

On August 6, 2021, Apple announced the launch of a new feature called the Child Sexual Abuse Material (CSAM) detection system. While the feature is a step in the right direction, many users are unhappy with what it entails. It will allow Apple to scan photos uploaded to a user’s iCloud account, matching each image’s digital fingerprint against a database of known child sexual abuse images. A separate, related feature will scan iMessages on children’s accounts and warn them about sexually explicit images. Once at least 30 suspicious photos from a single account have been flagged, Apple will cross-check them against its database and report the account to the National Center for Missing and Exploited Children (NCMEC) for further action.
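The matching-and-threshold logic described above can be sketched in a few lines of Python. This is a highly simplified illustration, not Apple’s actual implementation: the function names are invented, and a plain SHA-256 digest stands in for Apple’s NeuralHash perceptual hash and the cryptographic private set intersection the real system uses.

```python
import hashlib

# Illustrative stand-in for a perceptual image hash. Apple's real system
# uses NeuralHash plus private set intersection, so the server learns
# nothing about photos that do not match the database.
def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

MATCH_THRESHOLD = 30  # the per-account threshold mentioned in the article

def count_matches(uploaded_images, known_csam_hashes):
    """Count how many uploaded images match the known-hash database."""
    return sum(1 for img in uploaded_images
               if image_hash(img) in known_csam_hashes)

def should_flag_account(uploaded_images, known_csam_hashes) -> bool:
    """An account is flagged for human review only once the number of
    matches reaches the threshold, not on the first match."""
    return count_matches(uploaded_images, known_csam_hashes) >= MATCH_THRESHOLD
```

The threshold is the detail Apple leans on in its privacy argument: a single accidental match (a false positive) is not enough to trigger review, because no account is examined until many independent matches accumulate.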

Two Princeton academics, Jonathan Mayer and Anunay Kulshrestha, wrote the only peer-reviewed publication on how one could build a system like Apple’s CSAM detector, and they concluded that the technology was dangerous. According to them, a system like this could produce “false positives and malicious users could game the system to subject innocent users to scrutiny.” They warned against deploying their own system design. Though the feature has positive intentions, its approach – accessing users’ messages and tracking their images – is worrying iPhone users. WhatsApp CEO Will Cathcart also expressed his displeasure with the move. In a tweet, he called it a “wrong approach and a setback for people’s privacy all over the world.”

In light of this, over 90 policy groups from around the world signed an open letter asking Apple to drop this plan. The letter noted, “Though these capabilities are intended to protect children and to reduce the spread of child sexual abuse material (CSAM), we are concerned that they will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children.”

Additionally, people are concerned that by creating such a system, Apple has opened the door for governments wanting to regulate citizens under the pretext of keeping them safe. The Cybersecurity Director at the Electronic Frontier Foundation, Eva Galperin, told the New York Times, “Once you build a system that can be aimed at any database, you will be asked to aim the system at a database.” This concern is well-founded given that Apple agreed to shift the personal data of its Chinese users to the servers of a state-owned firm at the government’s request. Still, Apple released a document saying that it will not succumb to any government pressure to abuse this system: “Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.”

Apple’s Senior Vice President of Software Engineering Craig Federighi has chalked these concerns up to “misunderstandings”. In an interview, he explained, “It’s really clear a lot of messages got jumbled pretty badly in terms of how things were understood. We wish that this would’ve come out a little more clearly for everyone because we feel very positive and strongly about what we’re doing.”

Despite widespread requests not to go ahead with the rollout, Apple maintains that its system is “much more private than anything that’s been done in this area before.” Apple will launch the CSAM detection system later this year.


