8 Sep, 2021 14:32

Facebook pays contractors to read your ‘encrypted’ WhatsApp messages, shares info with prosecutors – reports

When Facebook acquired WhatsApp, it promised to respect users’ privacy. That hasn’t been the case, according to a new investigation, and the firm now pays more than a thousand contractors to read supposedly encrypted chats.

Social media behemoth Facebook acquired WhatsApp in 2014, with CEO Mark Zuckerberg promising to keep the stripped-down, ad-free messaging app “exactly the same.” End-to-end encryption was introduced in 2016, with the app itself offering on-screen assurances to users that “No one outside of this chat” can read their communications, and Zuckerberg himself telling the US Senate in 2018 that “We don’t see any of the content in WhatsApp.”


Allegedly, none of that is true. More than a thousand content moderators are employed at shared Facebook/WhatsApp offices in Austin, Texas, Dublin, Ireland, and Singapore to sift through messages reported by users or flagged by artificial intelligence.

Based on internal documents, interviews with moderators, and a whistleblower complaint, ProPublica explained how the system works in a lengthy investigation published on Wednesday.

When a user presses ‘report’ on a message, the message itself plus the preceding four messages in the chat are unscrambled and sent to one of these moderators for review. Moderators also examine messages picked out by artificial intelligence, based on unencrypted data collected by WhatsApp. The data collected by the app is extensive, and includes:

“The names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.”
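For illustration only, the data flow described above can be sketched roughly as follows, in Python, with entirely hypothetical class and field names drawn from the categories in the quote. WhatsApp’s actual internal formats are not public, so this is a sketch of the reported structure, not the company’s real implementation.

# Hypothetical sketch: a user report bundles the reported message and the
# four preceding messages (decrypted on the reporting user's own device)
# together with the unencrypted account metadata the article says the app
# already collects. All names below are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class ReportedMessage:
    sender: str        # phone number of the sender
    timestamp: int     # Unix time the message was sent
    plaintext: str     # decrypted on the reporter's device before upload

@dataclass
class AccountMetadata:
    # Unencrypted data the article lists as collected about an account.
    phone_number: str
    profile_photo_url: str
    status_message: str
    group_names: List[str]
    battery_level: int
    language: str
    timezone: str
    device_id: str
    ip_address: str
    os_version: str
    linked_facebook_account: str
    linked_instagram_account: str
    last_used: int
    prior_violations: int

@dataclass
class UserReport:
    # The reported message plus up to four messages preceding it in the chat.
    messages: List[ReportedMessage]
    reporter_metadata: AccountMetadata
    reported_metadata: AccountMetadata

    def is_valid(self) -> bool:
        # Per the article, at most five messages are included per report.
        return 1 <= len(self.messages) <= 5

A bundle like this would then be routed either to a human moderator (for user reports) or queued by the AI screening system that works only from the unencrypted metadata.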


These moderators are not employees of WhatsApp or Facebook. Instead, they are contractors hired by consulting firm Accenture and paid $16.50 per hour. They are bound to silence by nondisclosure agreements, and Facebook never announced their hiring.

Likewise, the actions of these moderators go unreported. Facebook releases quarterly ‘transparency reports’ for its own platform and subsidiary Instagram, detailing how many accounts were banned or otherwise disciplined and for what, but does not do this for WhatsApp.

Many of the messages reviewed by moderators are flagged in error. WhatsApp has two billion users who speak hundreds of languages, and staff sometimes have to rely on Facebook’s translation tool to analyze flagged messages, which one employee said is “horrible” at decoding local slang and political content. 

Aside from false reports submitted as pranks, moderators have to analyze perfectly innocent content highlighted by AI. Companies using the app to sell straight-edge razors have been flagged as selling weapons. Parents photographing their bathing children have been flagged for child porn, and lingerie companies have been flagged as forbidden “sexually oriented business[es].” 

“A lot of the time, the artificial intelligence is not that intelligent,” one moderator told ProPublica.


WhatsApp acknowledged that it analyzes messages to weed out “the worst” abusers, but doesn’t call this “content moderation.” 

“We actually don’t typically use the term for WhatsApp,” Director of Communications Carl Woog told ProPublica. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”

Facebook has lied about its commitment to user privacy before. Two years after Zuckerberg assured users that his company would keep WhatsApp ad-free and let it “operate completely autonomously,” he revealed plans to link WhatsApp accounts to Facebook for the purposes of ad targeting. The move earned Facebook a $122 million fine from EU antitrust regulators, who said the company had “intentionally or negligently” deceived them.

Despite Zuckerberg’s assurances of privacy, WhatsApp shares more user metadata (data that can identify a user without the content of their messages) with law enforcement than rival messaging services from Apple and Signal. This metadata, which can reveal phone numbers, location, timestamps, and more, is valuable to law enforcement and intelligence agencies, with NSA whistleblower Edward Snowden’s 2013 leaks revealing a large-scale operation by the agency to capture the metadata of millions of Americans’ communications.

“Metadata absolutely tells you everything about somebody’s life,” former NSA General Counsel Stewart Baker once said. “If you have enough metadata, you don’t really need content.”


Across all of its platforms, Facebook complies with 95% of requests for metadata. While it is unknown exactly what law enforcement has gleaned from WhatsApp metadata, the US Department of Justice has requested it more than a dozen times since 2017, and likely far more often, given that many such requests are never made public. WhatsApp metadata was used to help jail Natalie Edwards, a former Treasury Department official who leaked confidential banking reports about suspicious transactions to BuzzFeed News.

Inside WhatsApp, the company stresses the importance of promoting itself as a privacy-focused operation. A marketing document obtained by ProPublica states that WhatsApp should portray itself as “courageous,” taking a “strong, public stance that is not financially motivated on things we care about,” such as defending encryption and user privacy.

However, another line in the same document states that “future business objectives” mean that “while privacy will remain important, we must accommodate for future innovations.”

