Last week, Apple announced perhaps the most polarizing new technology it’s ever created: an on-device photo hashing system called NeuralHash, meant to detect known child sexual abuse material. NeuralHash immediately sent up bright red flags in the tech community.
Apple’s own staffers aren’t too pleased about the update either, as it turns out. Employees have sent more than 800 messages on Apple’s internal Slack channel since the plan was announced last week, anonymous workers told Reuters. Many of the messages focused on NeuralHash’s long-term implications, particularly its potential misuse by repressive governments hoping to find material for censorship or even arrests.
The iPhone-maker has, in the past, been steadfast in its refusal to give governments special backdoor access to its smartphones or other devices. Naturally, employees are wondering what’s changed in the interim; they’d be remiss not to.
Pushback from all sides — Apple has long faced pressure from the U.S. government to keep illegal content off its platforms. That pressure has only increased as the tech giant has continually refused special government access to its networks. NeuralHash — which automatically flags child sexual abuse content for law enforcement — is Apple letting up just a little bit in law enforcement’s favor.
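NeuralHash’s underlying model is proprietary, but the general idea of perceptual-hash matching it relies on can be sketched in a few lines. The toy “average hash” below is a stand-in, not Apple’s actual algorithm, and every name here is hypothetical: a device hashes each photo and compares the result against a database of hashes of known material, flagging anything within a small Hamming distance.

```python
# Illustrative sketch only: NeuralHash's real model is a proprietary
# neural network. This toy "average hash" just shows the general shape
# of perceptual-hash matching. All names here are hypothetical.

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when that pixel is
    brighter than the image's mean. Real systems use learned features."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def is_flagged(image_hash, known_hashes, max_distance=0):
    """Flag an image whose hash is close to any known-material hash."""
    return any(hamming(image_hash, h) <= max_distance for h in known_hashes)

# Hypothetical database of hashes of known material
known = {average_hash([[10, 200], [10, 200]])}

similar = average_hash([[12, 198], [11, 201]])   # perceptually similar image
different = average_hash([[200, 10], [200, 10]]) # unrelated image
```

The key property, and the source of the privacy worry, is that the matching step only needs a database of hashes: swap in a different database and the same machinery flags different content.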
But Apple also sees near-constant criticism from the other side of that argument, in the form of privacy watchdogs and a fearful public. Within 24 hours of its announcement, NeuralHash had drawn not one but two formal objections: one from the Electronic Frontier Foundation (EFF) and one from the Center for Democracy and Technology (CDT). Both worry NeuralHash will open the very government backdoors Apple has long fought against.
Now we can add Apple’s own employees to that list, too.
Time for transparency — Apple has yet to directly respond to these open complaints, and the company declined to comment when Reuters reached out. In announcing NeuralHash, Apple was already somewhat on the defensive, citing an estimate that NeuralHash sends up just one false flag per trillion images.
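Even taken at face value, a per-image rate that small doesn’t mean zero false flags at iPhone scale. A back-of-the-envelope reading, assuming the figure applies independently to each photo scanned (an assumption, and the photo count below is hypothetical):

```python
# Hedged arithmetic on Apple's quoted figure. Interpreting
# "one false flag in a trillion images" as a per-image rate is an
# assumption; the fleet-wide photo count is purely hypothetical.
FALSE_FLAG_RATE = 1 / 1_000_000_000_000  # one in a trillion, per image

photos_scanned = 5_000_000_000_000  # hypothetical: 5 trillion photos
expected_false_flags = photos_scanned * FALSE_FLAG_RATE
```

Under those assumptions, five trillion scanned photos would still be expected to produce a handful of false flags, which is part of why a single headline statistic settles so little.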
But that kind of statistic does little to quell the larger concerns about NeuralHash. The fear is that, once governments have been handed specialized access to technology that flags photos, they will use that power in ways Apple never intended.
It’s unlikely that Apple will fully retract its plan to roll out NeuralHash in iOS 15. That being said, it’s imperative that the company be far more transparent about how the technology works. Addressing these specific concerns is Apple’s best hope of quelling the criticism and giving both its users and its employees some peace of mind.