Brits Recycle Old Arguments to Bypass E2E Encryption

Comment Two notorious figures in the UK security services have published an article that once again suggests that breaking strong end-to-end encryption would be good for society.

Almost four years ago, Ian Levy, technical director of the UK's National Cyber Security Centre, and Crispin Robinson, technical director for cryptanalysis at British spy agency GCHQ, published an article arguing for “virtual crocodile clips” on encrypted communications that could be used to keep us all safe from harm. They gave it another shot on Thursday, with a new paper pushing a very similar argument while acknowledging its shortcomings.

“This paper is not a rigorous safety analysis, but seeks to show that there are ways to counter much of the harm of online child sexual abuse today, but also to show the extent and amount of work that remains to be done in this area,” they write.

“We have not identified any technique that could provide as accurate detection of child sexual abuse material as content scanning, and while the privacy considerations that this type of technology raises should not be ignored, we have presented arguments that suggest it should be possible to deploy in configurations that mitigate many of the most serious privacy concerns.”

The somewhat dynamic duo argue that, to protect children from sexual abuse and stem the material it produces, it is in everyone’s interest that law enforcement have some sort of access to private communications. The same argument has been made many times before, usually invoking one of the four horsemen of the infocalypse: terrorists, drug traffickers, purveyors of child sexual abuse material (CSAM), and organized crime.

Their proposal is to revive attempts at automated filtering, in particular by asking service providers – who ostensibly offer encrypted communications – to insert themselves into the process to verify that CSAM is not being disseminated online. This could be done by an AI trained to detect this type of material. Law enforcement could then be notified and work with these companies to crack down on the scourge of CSAM.

Apple infamously tried to make the same argument to its users last year before abandoning its client-side scanning plans. It turns out that promising privacy and then admitting you’re going to analyze users’ chatter and data isn’t a popular selling point.

Apple can’t solve it, neither can we

In their latest article, Levy and Robinson argue that this is not a major problem, as non-governmental organizations could be used to moderate the automatic scanning of personal information for prohibited material. This would prevent potential abuse of such a scheme, they argue, and only the culprits would have anything to fear.

This isn’t a new argument, and it’s been made time and time again in the dispute between encryption proponents, who value private conversations, and governments, which don’t. Technical experts mostly agree that such a system cannot be protected from abuse: the scanning could be hijacked, it could flag innocent but private content as false positives, or it could be gradually extended to cover whatever politicians want removed. Governments would prefer to think otherwise, though the paper does at least acknowledge that people seeking confidentiality are not automatically suspects.

“We recognize that for some users in some circumstances, anonymity is, in itself, a security feature,” Levy and Robinson say. “We are not trying to suggest that anonymity on commodity services is inherently bad, but it does have an effect on the problem of child sexual abuse.”

Which is a polite way of saying that conversations can be used to plan crimes, so they need to be monitored. No one denies the immense harm done by the scum who produce and trade CSAM, but allowing all private communications to be scanned – even by a third party – seems like a very high price to pay.

Apple backed out of its plan to automatically scan user data for this type of content, in part because it has built its marketing around selling privacy to its customers – although that offer does not apply in China. Therein lies the point: while Apple is willing to let the mandarins of the Middle Kingdom interfere, there’s no guarantee it won’t do the same for other nations if it is in the company’s interest.

Apple’s tech would have scanned images using the NeuralHash machine learning model to identify CSAM – a model the British duo said “should be reasonably simple to design”. The problem is that the same technology could also be used to identify, filter, and flag other kinds of images, such as those mocking political leaders or expressing a point of view that someone wants to monitor.
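For illustration only, here is a minimal sketch of the general technique at issue – perceptual hash matching – using the open-source Python imagehash library as a stand-in. It is not Apple’s NeuralHash or anyone’s production scanner, and the hash value, file name, and match threshold below are invented for the example.

    # Minimal sketch of perceptual-hash matching, the general technique behind
    # client-side scanning proposals. The imagehash library stands in for a
    # proprietary model such as NeuralHash; all values here are illustrative.
    from PIL import Image
    import imagehash

    # Hypothetical list of hashes of known prohibited images (placeholder value).
    known_hashes = [imagehash.hex_to_hash("d1c1b2a2e4f0c8c8")]

    MATCH_THRESHOLD = 5  # maximum Hamming distance treated as a match (illustrative)

    def flag_image(path):
        # Compute a perceptual hash (fingerprint) of the candidate image.
        candidate = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash objects gives their Hamming distance;
        # a small distance means the images are perceptually similar.
        return any(candidate - known <= MATCH_THRESHOLD for known in known_hashes)

    print(flag_image("holiday_photo.jpg"))  # True would trigger human review

The critics’ point is visible in those few lines: the code neither knows nor cares what the listed hashes represent, so whoever controls the hash list controls what gets flagged.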

Levy and Robinson believe this is a fixable problem, although more research is needed, and that human moderators should be able to step in to catch false positives before suspect images are forwarded to law enforcement for investigation.

Not my problem

Interestingly, the pair repeatedly point out that it will be the service providers’ responsibility to manage all of this. While they emphasize that the paper is not official government doctrine, it is clear that Her Majesty’s Government has no intention of supporting such a project, or of overseeing its operation.

“These security systems will be implemented by the service owner in their app, SDK, or browser access,” they state. “In this case, the software is held to the same standard as the vendor’s application code, managed by the same teams with the same security input.”

And allowing private companies to filter user data with government approval has always worked out so well in the past. This is a very old argument – as old as encryption itself.

We first saw it appear in the 1970s, when Whitfield Diffie and Martin Hellman published their work on public key encryption (something GCHQ had apparently developed independently years earlier). Such systems were classified as munitions, and their use and export were severely restricted; Phil Zimmermann, creator of PGP, endured three years of criminal investigation in the 1990s for trying to enable private digital conversations.

As recently as 2019, someone at the US Department of Justice slipped the leash and suggested they didn’t want a backdoor, but a front door – again using the CSAM argument. Some things never change. ®
