Apple has announced that with the new iOS 8 release they are no longer able to comply with law enforcement warrants to decrypt the contents of iPhones and iPads.
On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.
As soon as I heard about this, I figured it would provoke outrage from the usual quarters, invoking the standard list of villains. Terrorists! Drug dealers! Child pornographers! Oh My! Here’s the first example I found:
Ronald T. Hosko, the former head of the FBI’s criminal investigative division, called the move by Apple “problematic,” saying it will contribute to the steady decrease of law enforcement’s ability to collect key evidence — to solve crimes and prevent them. The agency long has publicly worried about the “going dark” problem, in which the rising use of encryption across a range of services has undermined government’s ability to conduct surveillance, even when it is legally authorized.
“Our ability to act on data that does exist . . . is critical to our success,” Hosko said. He suggested that it would take a major event, such as a terrorist attack, to cause the pendulum to swing back toward giving authorities access to a broad range of digital information.
So Hosko went with “terrorists.” I will leave finding examples mentioning drug dealers and child pornographers as an exercise for the reader.
I’m not too concerned about the general outrage (yet), but I do want to address the concerns raised by Orin Kerr, because they are more thoughtful than the usual law-and-order hysterics, and because they are wrong and dangerous to civil liberties.
If I understand how it works, the only time the new design matters is when the government has a search warrant, signed by a judge, based on a finding of probable cause. Under the old operating system, Apple could execute a lawful warrant and give law enforcement the data on the phone. Under the new operating system, that warrant is a nullity. It’s just a nice piece of paper with a judge’s signature. Because Apple demands a warrant to decrypt a phone when it is capable of doing so, the only time Apple’s inability to do that makes a difference is when the government has a valid warrant. The policy switch doesn’t stop hackers, trespassers, or rogue agents. It only stops lawful investigations with lawful warrants.
That’s just not true. I think Orin is probably an honorable guy, but he’s repeating a lie that a lot of people would like you to believe. The truth is that anything that Apple does to protect our data from the government also protects our data from malicious people inside Apple itself. After all, in order for Apple to be able to decrypt our iPhone data for the government, Apple has to be able to decrypt our iPhone data.
In order to do that, Apple has to have people somewhere within its organization who have access to software and cryptography keys that can crack iPhone encryption, which makes it possible that someday an employee could walk out of Apple headquarters carrying a MacBook full of software that can break the security on half a billion iPhones.
In addition, Apple having the ability to crack its phones’ security creates a single brittle point of failure in iPhone security. It’s like putting an elaborate $1000 electronic lock on every door in an office building and keeping the keycard programmer in the building superintendent’s office. Anyone with the burglary skills to break into the super’s office can ransack the rest of the building with ease. And anyone who gets hold of Apple’s iPhone cracker can read every iPhone in the world.
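To make the contrast concrete, here’s a minimal sketch in Python of the difference between a key that exists only on the device and a key the vendor keeps a copy of “for lawful access.” The design and names are hypothetical, not Apple’s actual implementation; the point is simply that the escrow database becomes one thing worth stealing.

```python
# Hypothetical contrast between device-only keys and vendor key escrow.
# This is NOT Apple's design; it just illustrates the structural risk.
from cryptography.fernet import Fernet

# Design A: the key is generated on the device and never leaves it.
device_key = Fernet.generate_key()
locked_data = Fernet(device_key).encrypt(b"photos, messages, call history")

# Design B: the vendor keeps a copy of every device's key so it can
# comply with warrants. That copy is now a single high-value target.
vendor_escrow = {"device-123": device_key}

# Anyone who obtains the escrow database -- a rogue insider, a hacker,
# a foreign intelligence service -- can read every covered device.
stolen_key = vendor_escrow["device-123"]
assert Fernet(stolen_key).decrypt(locked_data) == b"photos, messages, call history"
```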
That sort of high-value target is very tempting for hackers. And when I say hackers, remember that it’s not just rebellious college kids working out of their dorm rooms. Commercial hacking is a serious criminal enterprise, run by the same kinds of people who run drug smuggling rings and extortion rackets. Making matters worse are the national intelligence agencies in places like Russia, China, and Iran, which might find it worthwhile to spend tens of millions of dollars on a technical and human intelligence program to compromise iPhone security, and with it the security of everything we can reach from our iPhones. And since plenty of foreigners use iPhones, I wouldn’t be surprised if the NSA has already stolen the keys from Apple.
Apple’s design change [is] one it is legally authorized to make, to be clear. Apple can’t intentionally obstruct justice in a specific case, but it is generally up to Apple to design its operating system as it pleases. So it’s lawful on Apple’s part. But here’s the question to consider: How is the public interest served by a policy that only thwarts lawful search warrants?
I think I’ve just explained how that public interest is served: Apple’s changes don’t just thwart lawful search warrants; they also thwart malicious hackers and bad actors inside Apple. Once you remove the false assumption that only lawful warrants are affected, Orin Kerr’s post falls apart.
Orin’s argument worries me for another reason, however, because he frames the issue in a way that is dangerous for the future of privacy. For example, at one point, this is how he responds to the argument that there are technical alternatives available to law enforcement even with Apple’s changes:
These possibilities may somewhat limit the impact of Apple’s new policy. But I don’t see how they answer the key question of what’s the public interest in thwarting valid warrants. After all, these options also exist under the old operating system when Apple can comply with a warrant to unlock the phone. And while the alternatives may work in some cases, they won’t work in other cases. And that brings us back to how it’s in the public interest to thwart search warrants in those cases when the alternatives won’t work. I’d be very interested in the answer to that question from defenders of Apple’s policy. And I’d especially like to hear an answer from Apple’s General Counsel, Bruce Sewell.
You know what? I don’t give a damn what Apple thinks. Or their general counsel. The data stored on my phone isn’t encrypted because Apple wants it encrypted. It’s encrypted because I want it encrypted. I chose this phone, and I chose to use an operating system that encrypts my data. The reason Apple can’t decrypt my data is that I installed an operating system that doesn’t allow them to.
I’m writing this post on a couple of my computers that run versions of Microsoft Windows. Unsurprisingly, Apple can’t decrypt the data on these computers either. That this operating system software is from Microsoft rather than Apple is beside the point. Apple can’t decrypt the data on these computers because I’ve chosen to use software that doesn’t allow them to. The same would be true if I were posting from my iPhone. That Apple wrote the software doesn’t change the fact that the decision to encrypt is mine.
This touches on another thing that Orin seems to miss, which is that Apple’s new policy is not particularly unusual. In situations that demand high security, it’s pretty much the industry standard.
I’ve been using the encryption features in Microsoft Windows for years, and Microsoft makes it very clear that if I lose the passcode for my data, not even Microsoft can recover it. I created the encryption key, which is stored only on my computer, and I created the password that protects the key, which is stored only in my brain. Anyone who needs data on my computer has to go through me. (Actually, the practical implementation of this system has a few cracks, so it’s not quite that secure, but I don’t think that affects my argument. Neither does the possibility that the NSA has secretly compromised the algorithm.)
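For readers who want a concrete picture, here’s a minimal sketch in Python (using the cryptography library) of the general idea behind that kind of design. It is not Microsoft’s actual implementation; it just shows how an encryption key can be derived from a passphrase that exists only in the owner’s head, so nobody else, vendor included, can reproduce it.

```python
# Illustrative only -- not Microsoft's implementation. The encryption key is
# derived from a passphrase known only to the owner; without the passphrase,
# neither the vendor nor the government can reconstruct the key.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

passphrase = b"correct horse battery staple"   # stored only in my brain
salt = os.urandom(16)                          # stored alongside the ciphertext

kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(passphrase))  # never leaves my machine

ciphertext = Fernet(key).encrypt(b"the contents of my disk")
```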
Microsoft is not the only player in Windows encryption. Symantec offers various encryption products, and there are off-brand tools like DiskCryptor and TrueCrypt (if it ever really comes back to life). You could also switch to Linux, which has several distributions that include whole-disk encryption. You can also find software to encrypt individual documents and databases.
If you use another company to store your data in the cloud, you can use encryption to ensure that they can’t read what they’re storing. Your computer just encrypts files before uploading them and decrypts them when retrieving them. For example, EMC’s Mozy backup gives you the option of letting the service handle the encryption or doing it yourself with a private key, as do Jungle Disk and Code42 Software’s Crashplan encrypted backup. Dropbox doesn’t offer client-side encryption, so they can read the data you send them, but there are third-party tools such as SafeMonk that run on your computer and encrypt the data before Dropbox ever sees it.
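The mechanics are simple enough to sketch. The snippet below is a hypothetical illustration (the function names are mine, and this is not how Mozy, Crashplan, or SafeMonk actually work internally): files are encrypted on your own machine with a key the provider never sees, so the service stores and returns only ciphertext.

```python
# Hypothetical client-side encryption wrapper -- not any vendor's actual code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # generated and kept locally, never uploaded
box = Fernet(key)

def encrypt_for_upload(path: str) -> bytes:
    """Encrypt a local file; the cloud provider only ever sees this blob."""
    with open(path, "rb") as f:
        return box.encrypt(f.read())

def decrypt_after_download(blob: bytes) -> bytes:
    """Recover the plaintext -- only possible with the locally held key."""
    return box.decrypt(blob)
```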
I guess the point I’m trying to make is that it’s not Apple’s data, and it’s not Apple that makes the decision to encrypt the data. It’s our data, and we decide whether to encrypt it or not. Apple is just one of several companies that supply the tools we use to do that.
Orin Kerr’s viewpoint seems to elevate Apple’s participation in the process, to treat Apple as somehow responsible for preserving law enforcement access to data that is not even in its possession. That’s not a model I’m comfortable with as the basis for legislation. I don’t want to normalize the idea that the providers of our information tools are obligated to subvert those tools because it makes the government’s job easier.
Orin suggests that such a mandate might be a possibility:
The most obvious option would be [to] follow the example of CALEA and E911 regulations by requiring cellular phone manufacturers to have a technical means to bypass passcodes on cellular phones. In effect, Congress could reverse Apple’s policy change by mandating that phones be designed to have this functionality. That would restore the traditional warrant requirement.
CALEA is bad enough in requiring carriers to have the technological ability in place to allow law enforcement agencies to tap telephone and internet traffic traversing the carriers’ networks. What Orin is suggesting (although not advocating) goes far beyond that, by requiring computer systems manufacturers to intentionally subvert their customers’ information security, even though, unlike the CALEA scenario, the customer’s information never leaves the customer’s hands. It seems like a slippery slope that could eventually lead to a requirement that every electronic device in our lives be able to spy on us at the government’s request.
As for restoring the “traditional warrant requirement,” my understanding is that a warrant allows the government to intrude on someone’s privacy to gather evidence. But can a traditional warrant be used to compel a third party to intrude on someone’s privacy? If the government gets a warrant to plant a bug to hear what my wife and I talk about at home, they might ask a locksmith to help them break into my house, but could they use that warrant to force the locksmith to help them? If they want to test my blood for drugs, can they use a warrant to force the nearest doctor to draw my blood and the nearest lab to test it? If they want to surveil a suspect, can a judge order me to grab my camera and take pictures of him?
(For that matter, I don’t quite understand how the government can force Apple to decrypt a phone. I’m guessing that it’s because Apple has some special cryptographic key that makes it easier, and it’s less destructive to privacy for Apple to decrypt a phone than for Apple to turn that key over to the government, but I could be totally wrong.)
Frankly, I’m not convinced that the “traditional warrant requirement” is applicable to encrypted data. Search warrants have always been about the government’s authority to search, but given enough manpower, equipment, and time, the government’s physical ability to conduct the search has never been an issue. The agents of law enforcement have always been able to knock down every door, rip open every wall, and break every box.
Until now.
Modern strong encryption is effectively unbreakable with current technology. Securely encrypted data can only be read by someone who has the decryption key. And if every copy of the decryption key is destroyed, nobody will ever be able to read that data again. (Not using current technology. Not before the stars burn out.) It’s like some sort of science fiction scenario where the data is sealed off in another dimension.
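A toy example makes the point. In the sketch below (illustrative only, not any particular product), once the only copy of the key is gone, the ciphertext is still sitting right there, but no warrant, budget, or amount of computing time will turn it back into plaintext.

```python
# The "unbreakable box": destroy every copy of the key and the data is gone
# for good, even though the ciphertext itself is still perfectly intact.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"messages, photos, call history")

del key   # the only copy of the key is destroyed

# The ciphertext still exists, but with no key there is no feasible way to
# recover the plaintext -- not with current technology, not in our lifetimes.
```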
So what should happen to the government’s authority to break every box when someone invents an unbreakable box? It’s not clear to me that the solution is, or should be, requiring the makers of unbreakable boxes to build in secret levers to open them.
(Hat tip: Scott Greenfield)