9 Aug, 2021 18:22

Apple releases FAQ downplaying privacy concerns over new ‘child protection system’ as watchdogs warn of overreach

Apple pushed back against criticism that its new anti-child sexual abuse detection system could be used for “backdoor” surveillance. The company insisted it won’t “accede to any government’s request to expand” the system’s scope.

The new plan, announced last week, includes a feature that identifies and blurs sexually explicit images received by children using Apple’s ‘Messages’ app – and another feature that notifies the company if it detects any Child Sexual Abuse Material (CSAM) in photos uploaded to iCloud.

The announcement sparked instant backlash from digital privacy groups, who said it “introduces a backdoor” into the company’s software that “threatens to undermine fundamental privacy protections” for users, under the guise of child protection. 


In an open letter posted on GitHub and signed by security experts, including former NSA whistleblower Edward Snowden, the groups condemned the “privacy-invasive content scanning technology” and warned that the features have the “potential to bypass any end-to-end encryption.”

After an internal memo reportedly referred to the criticism as the “screeching voices of the minority,” Apple on Monday released an FAQ about its ‘Expanded Protections for Children’ system, saying it was designed to apply only to images uploaded to iCloud and not the “private iPhone photo library.” It also will not affect users who have iCloud Photos disabled.

The system, it adds, only works with CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC), and “there is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC.”

‘Image hashes’ refers to the use of algorithms that assign a unique ‘hash value’ to an image – likened to a ‘digital fingerprint’ – making it easier for platforms to identify and remove content deemed harmful.
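To illustrate the general idea only – this is a simplified sketch, not Apple’s implementation, which relies on perceptual hashing so that resized or re-encoded copies of an image still match, rather than the plain file hash used below – matching an image against a database of known hashes might look like this in Python (the file name and hash value are hypothetical):

import hashlib
from pathlib import Path

# Hypothetical database of hash values supplied by a child-safety organization.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_hash(path: Path) -> str:
    # A SHA-256 digest of the file's bytes acts as its 'digital fingerprint'.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known_material(path: Path) -> bool:
    # An image is flagged only if its hash appears in the known-hash database.
    return image_hash(path) in KNOWN_HASHES

print(matches_known_material(Path("photo.jpg")))  # hypothetical file name

A cryptographic digest such as SHA-256 changes completely when an image is resized or re-encoded, whereas a perceptual hash is designed to survive such changes – which is what makes large-scale matching practical, and what critics fear could be repointed at other categories of content.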

While Apple insists it screens for image hashes “validated to be CSAM” by child safety organizations, digital rights watchdog the Electronic Frontier Foundation (EFF) had previously warned that this would lead to “mission creep” and “overreach.”


“One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content,” the non-profit warned last week, referring to the Global Internet Forum to Counter Terrorism (GIFCT).

Apple countered that, because it “does not add to the set of known CSAM image hashes,” and because the “same set of hashes” is stored in the operating system of every iPhone and iPad, it is “not possible” to use the system to target users by “injecting” non-CSAM images into it.

“Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it,” the company vows in its FAQ.

“We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future,” it added.

However, the company has already been criticized for using “misleading phrasing” to avoid explaining the potential for “false positives” in the system – the “likelihood” of which Apple claims is “less than one in one trillion [incorrectly flagged accounts] per year”.
