By Bruce Schneier
The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI—and some of their peer agencies in the U.K., Australia, and elsewhere—argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it's about end-user devices. On the other side of this debate are pretty much all technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.
A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the U.K.'s GCHQ (the British signals-intelligence agency—basically, its NSA). It's actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:
In a world of encrypted services, a potential solution could be to go back a few decades. It's relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who's who and which devices are involved—they're usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there's an extra 'end' on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn't give any government power they shouldn't have.
On the surface, this isn't a big ask. It doesn't affect the encryption that protects the communications. It only affects the authentication that assures people they are talking to whom they think they are talking to. But it's no less dangerous a backdoor than any other that has been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.
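To see how that extra "end" would work in practice, consider a minimal sketch of a fan-out encryption scheme in which the service provider's key directory decides whose keys a message gets encrypted to. This is not any real messenger's protocol: the directory layout and the wrap_key() stand-in are invented for illustration, where a real system would use public-key operations such as X25519 and a ratcheting protocol.

    import secrets

    # The provider's key directory: conversation -> list of device public keys.
    # In a real system the client fetches this from the provider's servers.
    directory = {
        "chat-42": ["alice-device-key", "bob-device-key"],
    }

    def wrap_key(message_key: str, device_public_key: str) -> str:
        """Stand-in for encrypting the per-message key to one device's key."""
        return f"wrapped({message_key}, for={device_public_key})"

    def send_message(chat_id: str, plaintext: str) -> dict:
        """Encrypt to every device the provider's directory lists for this chat."""
        message_key = secrets.token_hex(16)   # fresh symmetric key per message
        recipients = directory[chat_id]       # the client trusts this list blindly
        return {
            "ciphertext": f"enc({plaintext}, {message_key})",  # stand-in
            "wrapped_keys": {d: wrap_key(message_key, d) for d in recipients},
        }

    # The proposal, in effect: the provider silently appends an extra "end".
    directory["chat-42"].append("ghost-intercept-key")
    print(list(send_message("chat-42", "see you at 6")["wrapped_keys"]))

Nothing about the encryption code changed; the provider simply handed the client one more recipient, and the client dutifully encrypted to it.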
In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications but also changing the client program on everyone's phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it's still a general backdoor that could be used against anybody.
The basic problem is that a backdoor is a technical capability—a vulnerability—that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.
That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit—not just the police—but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).
The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA—and similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop on, mirroring that old physical system with computers. It is not the same thing, though. It doesn't have the physical limitations that made the old system more secure. It can be administered remotely. And it's implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.
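To see what "administered remotely" means in software terms, here is a deliberately crude sketch; it is not modeled on any real switch vendor's interface, and the credential check and tap list are invented for illustration.

    ADMIN_PASSWORD = "s3cret"    # a procedural control: whoever knows this, wins
    active_taps = set()

    def handle_admin_request(password: str, action: str, line_id: str) -> str:
        # The only barrier is a credential check running on a network-reachable
        # computer -- no building access, no physical evidence left behind.
        if password != ADMIN_PASSWORD:
            return "denied"
        if action == "start_tap":
            active_taps.add(line_id)     # traffic for line_id is now copied out
        elif action == "stop_tap":
            active_taps.discard(line_id)
        return f"ok, tapping {sorted(active_taps)}"

    # Anyone who steals or guesses the credential, or exploits a bug anywhere
    # on this code path, gets exactly the capability the police were given.
    print(handle_admin_request("s3cret", "start_tap", "+30-210-555-0100"))
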
This isn't a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the eavesdropping feature mandated by CALEA. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.
There's nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine that a chat program added this GCHQ backdoor. It would have to add a feature that lets additional parties be added to a chat from somewhere in the system, rather than by the people at the endpoints. It would have to suppress any messages alerting users that another party had been added to the chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the "tell me who is in this chat conversation" feature—which amounts to the same thing.
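Here is a minimal sketch of the client-side change this implies, assuming a generic group-chat client rather than iMessage or Signal specifically; the MemberAdded event and its silent flag are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class MemberAdded:
        chat_id: str
        member: str
        silent: bool = False   # the new, server-controlled field

    def handle_member_added(event: MemberAdded, notify_user, add_to_recipients):
        # Every future message will now also be encrypted to the new member.
        add_to_recipients(event.chat_id, event.member)
        # Honest clients always run the next two lines. The backdoored client
        # must skip them whenever the server says so -- i.e., lie by omission.
        if not event.silent:
            notify_user(f"{event.member} joined {event.chat_id}")

    # Honest case: the user is told.
    handle_member_added(MemberAdded("chat-42", "carol-device-key"),
                        notify_user=print, add_to_recipients=lambda c, m: None)
    # Ghost case: same code path, but the server-set flag silences the client.
    handle_member_added(MemberAdded("chat-42", "ghost-intercept-key", silent=True),
                        notify_user=print, add_to_recipients=lambda c, m: None)

The honest client and the backdoored client differ by a single, server-controlled branch; what changes is not the encryption but whether the client tells its owner the truth about who can read the conversation.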
And once that's in place, every government will try to hack it for its own purposes—just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the backdoor mechanism Google put in place to meet law-enforcement requests. In 2015, someone—we don't know who—hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven't been made public.
Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country's government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?
Of course not.
Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate—but the resultant confusion caused some people to abandon the encrypted messaging service.
Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that "any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users," this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.
In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They shift our reliance from security technologies we know how to do well—cryptography—to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it's a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.
The foregoing discussion is a specific example of a broader discussion that we need to have, and it's about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement—and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and—unavoidably—law enforcement as well?
This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?
I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We'd be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be weakening security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it's one that we would be better served by closing than exploiting.