The Crypto Wars

Since the advent of the ‘Crypto Wars’ (as in cryptography) in the early 1990s, there has been a general global trend of intelligence services, law enforcement agencies and governments pushing back against the proliferation of so-called ‘warrant-proof’ encryption: encrypted data that the state is technically unable to access. The rationale offered is that, unlike traditional forms of data accessible following appropriate judicial safeguards such as a warrant, encrypted information circumvents the rule of law and the legitimacy and capabilities of criminal and intelligence investigations. Typically invoking the age-old ‘Four Horsemen of the Infocalypse’ (some variation of child predators, terrorists, organized crime and money launderers), states across the political spectrum have consistently sought to scare the general public into supporting their encryption-undermining efforts through oversimplified and damaging narratives.

This pushback against encryption has manifested as attempts by states worldwide to implement ‘lawful access’ mechanisms in encrypted software, in essence mandating backdoors into the software and services we all use every day in order to facilitate state access. The notion of granting state institutions decryption powers built into mainstream software and services has been lambasted and fiercely opposed by leading cybersecurity researchers, civil society groups and industry for introducing severe cybersecurity and privacy risks, and efforts by lawmakers across liberal democratic states to pass such legislation have typically been frustrated. The reasoning of these groups can be summed up as: facilitating backdoor access for the ‘good guys’ invariably necessitates backdoor access for the ‘bad guys’ as well. In response, states have ramped up efforts to shape public discourse in favor of mandating ‘lawful access’, such as in the UK, the US and the EU, raising alarm over the manipulation of public opinion around encryption.

Concerning as this may be, a paradigm shift in the encryption debate is rapidly emerging: from the traditional attempts to introduce a ‘key under the doormat’ type of backdoor to mandating client side scanning (CSS). This shift represents an unprecedented attempt to extend the modern surveillance state into the devices and software used by every one of us. As of this blog post’s date, legislative proposals that amount to mandatory client side scanning have been implicitly or explicitly put forward in the European Union, the United Kingdom and the United States, with similar rules having already been implemented in India.

What is Client Side Scanning?

Client side scanning, as described by the Internet Society, “broadly refers to systems that scan message contents—i.e. text, images, videos, files—for matches or similarities to a database of objectionable content before the message is sent to the intended recipient.” In essence, this circumvents the encryption process without breaking sound cryptography: content is scanned locally, prior to encryption and transmission. Under current governmental narratives, client side scanning would be deployed to combat child sexual abuse material and terrorist content, either through perceptual hashing (comparing) of user-uploaded content against databases of known content, or through artificial-intelligence-driven classifiers: predictive models generated from large datasets by machine learning algorithms.
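To make the perceptual hashing approach concrete, here is a minimal sketch. Real systems (PhotoDNA, PDQ and the like) use far more robust algorithms; this toy “average hash” over a small grayscale grid, with the function names and the 5-bit threshold being my own illustrative choices, only shows the principle that similar images produce similar hashes.

```python
def average_hash(pixels):
    """Hash a grayscale image grid: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_database(image_pixels, hash_db, threshold=5):
    """Flag content whose hash is within `threshold` bits of any known hash."""
    h = average_hash(image_pixels)
    return any(hamming(h, known) <= threshold for known in hash_db)
```

A lightly edited copy of a known image would still land within the threshold and be flagged, while an unrelated image would not; it is this fuzziness that lets the system match near-duplicates, and also what opens the door to false positives.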

For a real-world example of how CSS could be implemented, take the hypothetical case of Signal (note: Signal has stated it would not comply with CSS mandates). Imagine you would like to send a message to a friend. Normally the message is end-to-end encrypted, ensuring that only you and your friend can view it and thereby protecting the confidentiality of your communications. Under a CSS regime, before being encrypted and sent, the message would be scanned locally on your device for objectionable content, and encrypted only once it has cleared the screening. Should the message (whether image, video, text or audio recording) run afoul of the screening process, it would be blocked and/or transmitted to the service provider (Signal in this example) and the relevant state authorities for further review.
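The scan-then-encrypt flow described above can be sketched as a short pipeline. Everything here is a stand-in: the exact-hash blocklist stands in for whatever classifier or hash database a real mandate would require, and `e2e_encrypt` is a placeholder for a messenger’s actual encryption protocol, not real cryptography.

```python
import hashlib

# Hypothetical blocklist of hashes of known objectionable content.
BLOCKED_HASHES = {hashlib.sha256(b"known objectionable content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the message clears the local screening."""
    return hashlib.sha256(plaintext).hexdigest() not in BLOCKED_HASHES

def e2e_encrypt(plaintext: bytes) -> bytes:
    # Placeholder only: a real app would run an authenticated E2EE protocol here.
    return plaintext[::-1]

def send_message(plaintext: bytes):
    if not client_side_scan(plaintext):
        # Under a CSS regime: block and/or report to the provider/authorities.
        return ("reported", None)
    # Encryption happens only AFTER the content has been screened.
    return ("sent", e2e_encrypt(plaintext))
```

The key point the sketch makes visible is ordering: the scanner runs on the plaintext before encryption ever happens, which is why CSS needs no weakening of the cryptography itself.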

Client side scanning represents an evolution of the lawful access debate in that the obligations imposed do not, at first glance, require compromising encryption standards and security through traditionally proposed lawful access mechanisms. It has thus been seen by many, including two GCHQ cybersecurity chiefs, as a compromise between user privacy and surveillance goals, purporting to preserve user privacy while detecting only targeted content. This claim will be analyzed below.

Privacy and Security Vulnerabilities

As stated previously, systems of client side scanning would necessitate a monitoring mechanism built into the applications, services and platforms that comprise the modern internet architecture. Analyzed from a security and privacy perspective, such systems have numerous drawbacks. These can be broadly categorized into three overarching themes: CSS technology’s non-zero false positive rate; the security and privacy vulnerabilities it creates; and its potential for evasion by knowledgeable adversaries.

Non-zero False Positive Rate

Client side scanning technologies have non-zero false positive and false negative rates, raising concerns about the impact on the privacy of innocent, law-abiding individuals caught by false positives. The proprietary, ‘black box’ nature of perceptual hashing tools, and the difficulty of actually gauging their accuracy, make their false positive rate hard to determine. With regard to artificial-intelligence-driven machine learning methods, predictive accuracy is arguably much lower still. Language models applied to text content for content moderation purposes are difficult to build with a false positive rate “significantly below 5% to 10%”. Image detection in its current state fares no better, with sophisticated image-recognition algorithms still mistaking a dog for a cat and flagging innocent content as criminal. This is a major issue with scanning technology, described by European Digital Rights in reference to the EU’s CSS proposal as being “hard put to tell the difference between a topless sunbather or a child’s bath-time photo from an abuse scenario, or to infer whether a person is a teenager or just a young-looking adult.”
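Back-of-the-envelope arithmetic shows why even a “low” false positive rate is a problem at messaging scale. The daily message volume below is an assumed round figure for illustration (on the order of a large messenger’s traffic), not a number from any of the cited sources; the 5% rate is the lower bound quoted above for text classifiers.

```python
# Assumed illustrative volume: roughly the scale of a large messaging platform.
daily_messages = 100_000_000_000

# The 5% lower bound cited above for text-moderation language models.
false_positive_rate = 0.05

false_flags_per_day = daily_messages * false_positive_rate
print(f"{false_flags_per_day:,.0f} innocent messages flagged per day")
```

Under these assumptions, billions of innocent messages would be flagged every single day, each one a potential disclosure of private communications to a provider or authority.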

Security and Data Privacy Vulnerabilities

In addition to the high risk of false positives, implementing CSS across such a plethora of devices, digital services, platforms and applications would undoubtedly create security and privacy vulnerabilities exploitable across a spectrum of threat models, from nation-state attackers to malicious private actors. These threats range from the framing of victims through hash manipulation to the underlying CSS being repurposed for wiretap-like physical surveillance capabilities. The expanded attack surface on devices, apps and services would pose a significant cybersecurity risk in encrypted environments. These security and privacy risks led 14 globally leading cybersecurity experts to conclude that client side scanning “cannot be deployed safely” as a surveillance system.

Easy Evasion by Knowledgeable Adversaries

Leading on from this, there is mounting evidence that client side scanning can be defeated by knowledgeable adversaries, either by evading detection or by avoiding the scanning system outright. For example, an encrypted keyboard could be used to “encrypt and decrypt messages locally on phone devices when sending and receiving them via IM [instant messaging] applications.” CSS systems could also be avoided entirely through free (libre) and open source software (FLOSS) that does not, and would not, implement scanning capabilities to comply with any scanning obligation. Examples include the Tor network and browser, peer-to-peer networks and a whole host of private messaging applications. To enforce client side scanning obligations against such software, states would have to implement a degree of internet censorship, a concern that has been raised by the Free Software Foundation.
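The “encrypted keyboard” evasion quoted above can be sketched in a few lines: the text is encrypted before it ever reaches the messaging app, so any client-side scanner sees only ciphertext. A one-time pad is used here purely because it is correct, stdlib-only crypto for an illustration; a real tool would use a proper shared-key scheme, and the pad would have to be exchanged out-of-band.

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """Encrypt with a fresh one-time pad; the pad must be shared out-of-band."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    """XOR with the same pad recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, pad))
```

Only the ciphertext is pasted into the messaging app, so a scanner embedded in that app has nothing meaningful to hash or classify, which is precisely why determined adversaries are the group a CSS mandate is least likely to catch.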

Aiding Digital Authoritarians

The application and enforcement of client side scanning regimes is likely to have a global impact due to the technical and financial realities of modern software development. Just as the EU’s General Data Protection Regulation has an externalizing regulatory effect, many firms would likely elect to institute CSS worldwide rather than create separate application builds for individual regions and countries. With alarms being raised about democratic ‘backsliding’ and the rise of ‘digital authoritarianism’ worldwide, client side scanning mandates from liberal democratic states could open the (encrypted) door for states with weaker judicial protections and less respect for the rule of law and fundamental rights to demand that the underlying scanning system be exploited for repressive, authoritarian purposes.

Furthermore, by attempting to weaken strong, effective encryption, democratic states permit digital authoritarians to deflect legitimate human rights concerns by pointing to the West. As Edward Snowden put it, in reference to Apple’s similarly proposed CSAM detection system and the risks of authoritarian abuse: “this is not a slippery slope. It’s a cliff.”

Unmitigated Mass Surveillance

As just covered, currently proposed systems of client side scanning suffer from unacceptable false positive rates and from security vulnerabilities with major implications for individual privacy and civil liberties. But even if these problems were mitigated, client-side scanning technologies dissolve the line between “what is shared (the cloud) and what is private (the user’s device),” digitally “removing any boundary between a user’s private sphere and their shared (semi-)public sphere.” This ensures there will always be the possibility for law enforcement and intelligence services to access previously private information, even in the absence of a warrant or other appropriate judicial safeguards.

The potential for CSS systems to be abused by state actors, and exploited through vulnerabilities by both state and non-state entities, is also of significant concern given the highly intrusive nature of client side scanning, which is capable of revealing highly intimate and personal information about users and their devices. These concerns are shared by the UN High Commissioner for Human Rights, whose report concluded that client side scanning technologies could not be “considered proportionate under international human rights law, even when imposed in pursuit of legitimate aims”. The question then becomes how such a system of general client side scanning could be limited in scope to its purported aims and prevented from expanding through what Bruce Schneier, in his book Data and Goliath, describes as “the inevitable expansion of power that accompanies the expansion of any large and powerful bureaucratic system”: mission creep.

What is often framed as a debate about moderation of unwanted content in E2EE services is really a discussion about (any) content detection in E2EE. -Center for Democracy and Technology

Today’s systems of surveillance in liberal democratic states are primarily used to pursue legitimate if broad aims such as national security and the combating of crime, rather than for the oppressive control and authoritarian purposes seen in the People’s Republic of China and other digital authoritarian states. However, this is not the result of underlying differences in surveillance architecture but of differences in governance among those who hold power over such systems. You may be inclined to trust your government of today, but what about the government which wields the reins of power tomorrow? “Even established democracies might decay. There is a risk that surveillance capacities that are used for democratically legitimated purposes today will be used for poorly legitimated purposes in the future.”

Politicians need to understand the technological realities of encryption and avoid simplifying narratives and public fearmongering. The three rules laid out in Talking in the Dark, a paper seeking to facilitate open debate about lawful access to strongly encrypted information, can serve as a guide and starting point for legislative proposals tackling the difficulties of digital investigations. These rules prescribe: 1) legislation should clearly indicate, without sidestepping the encryption debate through ambiguity or subterfuge, whether it can be used to “mandate lawful access”; 2) officials “should not deliberately oversimplify the encryption debate or rely on emotive examples in order to influence public opinion”; 3) judicial safeguards “must not be conflated with the safeguards applicable to lawful access solutions in order to lend false legitimacy to unsafe solutions”. Adherence to these three rules would ensure a responsible legislative debate that does not result in measures threatening our privacy, security and, ultimately, our fundamental rights and liberties.

It is my view that the proposed implementations and operation of what can be coined “the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR” raises one very fundamental question. Do we, as a hyper-digitized society, want a system of highly invasive and total digital surveillance, irrespective of its purpose and intent, to be embedded within all of our devices? To scan and screen all of our private and confidential communications? To exist at all?

I would, of course, argue not.


Were CSS to be widely deployed, the only protection would lie in the law. That is a very dangerous place to be. -Bugs in Our Pockets: The Risks of Client-Side Scanning


Disclaimer: I do not, by any means, claim to be an expert in matters relating to privacy, security and law, or to offer what can be construed as guaranteed, fool-proof advice. What I do offer is insight into these matters from someone who is highly invested in personal privacy/security and who is studying technology law in higher education.