Security

Jeff Gamso has a terrific post on why we shouldn’t believe the TSA when they claim that their full-nude body scans of passengers are completely secure and won’t be abused or disclosed.

I have my own argument that I’ve used to respond to similar claims about government handling of our sensitive personal information. It goes something like this:

At the height of the Cold War, the Soviets paid U.S. Navy Chief Warrant Officer John Walker Jr. a few thousand dollars a month for information about Navy encryption, eventually deciphering as many as a million messages with his help, some of them related to the submarine nuclear deterrence fleet.

During this same period, TRW employee Chris Boyce and his friend Andrew Daulton Lee sold classified information about encryption and spy satellites to the Soviets, supposedly because Boyce was angry about CIA interference in the affairs of other nations.

Toward the end of the Cold War, CIA counter-intelligence officer Aldrich Ames sold secrets to the Soviet Union in exchange for about $2 million, and FBI agent Robert Hanssen sold secrets to the Soviets and then to Russia over 22 years in exchange for $1.4 million in cash and diamonds. Several of the people betrayed by Ames and Hanssen were executed.

These are just a few of the most famous examples of people who sold out their country for money, revenge, or other reasons. They were entrusted with extremely important information, the substantial capabilities of our national intelligence agencies were arrayed against them, and they still managed to betray us all.

But there’s no way a TSA agent would share a nude image of a passenger.

Google had a bit of an embarrassing security problem recently. An engineer did a very creepy thing and spied on teenagers’ Google accounts while interacting with the teens online. Apparently no laws were broken, but Google, obviously, fired the engineer. Google’s statement about the incident underwhelmed Greg Laden:

Sorry Google, we are not impressed. We’d like to see an independent investigation, possible prosecution, and who knows, maybe some new laws and regulations.

Passing new laws to make systems such as Gmail more secure is a bad idea.

Because users see technology and security as a black box, they are often blindsided when there is a failure or breach of trust. Greg is right that the response from Google is inadequate for most users. The response was fine for me; after all, I understand what happened, and it didn’t surprise me. The problem is that the response didn’t address the trust that was broken with the majority of its users, who don’t understand the systems inside that black box.

I dislike, however, the suggestion that new laws and regulations should be put in place to prevent such problems in the future. Making it illegal for system engineers to open data files without permission might decrease the number of incidents, but legal punishment alone is unlikely to stop the practice. Making it impossible for engineers to see data would mean a fundamental change in the way such systems operate. Security is always a trade-off against usability and expense. Having the government choose that balance point and force it upon Google and other service providers is the wrong response.

I’ve always tried to address such issues with user education. Users often have a black-box mentality and think that such issues are somehow automatically taken care of by the system. Users (especially managers) need to be aware of just how much power system administrators have.

I worked as a sysadmin at a college when email was first introduced to staff. I taught users the old IT adage that email was the electronic equivalent of postcards. Every employee of the post office who touches that postcard can, if they so desire, read the message. I also made it clear that I had access to anything they stored on the server (including email) and even conducted security workshops showing them how easy it was for people like me to defeat the simple encryption used in the software of the time. I tried very hard to build the trust with my users that I wouldn’t abuse that power, but wanted them to know what was possible.

Google lost some trust from its user base. The response from Google was “Why would anyone trust such a system?” In one respect they are right. Users should never have trusted such a system. I don’t, but that’s because I understand some of what is going on inside the black box after clicking the “send” button.

Perhaps Google should be leading an effort to upgrade the security of email and other messaging services, but by working with users rather than working under new government regulations. Email protocols were not designed for security. Of course the basic protocol of the Internet (TCP/IP) was not designed for secure transactions either, yet I’m confident that my online banking transactions are secure because of an end-to-end protocol called SSL/TLS*.

Users can already make their email secure using a similar system (called PGP) if they wish, but few people know how. Perhaps Google should lead the effort by streamlining the user interface and popularizing such a system. Google would need to educate the users and work with them to figure out what level of security is needed and how much effort users would be willing to put into such a system to make it work. Users may have to maintain special keys, for example, to communicate with recipients on different email systems. While Google can make that process easier, it will still require some effort on the user end to gain that extra security. There is always a tradeoff.
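For a rough sense of what “maintaining special keys” involves, here is a minimal sketch using the third-party python-gnupg library. It assumes the GnuPG binary and the python-gnupg package are installed; the email address and passphrase are hypothetical placeholders.

    # Sketch: generating the public/private key pair a PGP user maintains.
    # Assumes GnuPG and the third-party python-gnupg package are installed.
    import gnupg

    gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

    key_params = gpg.gen_key_input(
        name_email="me@example.com",                # hypothetical address
        passphrase="correct horse battery staple",  # hypothetical passphrase
    )
    key = gpg.gen_key(key_params)  # can take a while; gathers entropy

    # The public half is what you share; correspondents need it to send
    # you mail that only your private key can decrypt.
    print(gpg.export_keys(key.fingerprint)[:60], "...")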

Pushback against such encryption, however, would come from governments. Governments around the world, for example, freaked out once they realized they couldn’t snoop on people’s Blackberry accounts. The United States government fought PGP when it was first released, claiming it was too dangerous to allow the technology out of the country. (Because of our government’s insistence that PGP not be provided on the Internet, I had to download my first copy from an overseas server.) The US government would certainly resist any pervasive end-to-end technology that would prevent it from reading email.

Government involvement in this issue seems like a bad idea. It would force providers to choose a level of security that people may not need once they understand that email is just a digital postcard. Any government solution would also build in a government backdoor allowing them access to any secure system. In this case I really would like the government to not get involved.


* I’ll provide a brief introduction to the concept of end-to-end encryption below. Anyone not interested in how this stuff works should stop reading now.

Transactions can be made secure on an inherently unsecure system by introducing an additional protocol (a set of rules) above the unsecure layer. The added layer provides a “session” that encrypts information before it is handed to the unsecure protocol and decrypts it only after the data emerges from the unsecure protocol at the other end. Hence it’s an “end-to-end” system: it doesn’t rely upon the unsecure devices in the middle of the route taken by the data.

For example, the Internet uses an unsecure protocol called TCP/IP to get information from one computer to another, let’s say from your home computer to your bank. Rather than redesigning the unsecure protocol, it is better to add an end-to-end encryption/decryption system “above” it. When your computer talks to the computer at the bank, it uses a system called Secure Sockets Layer / Transport Layer Security (SSL/TLS) to accomplish this.

[Figure: end_to_end_01.gif, diagram of an end-to-end encrypted session layered over the unsecure TCP/IP route]

The green lines represent information that can be read because it is not encrypted (plaintext). The red lines represent the encrypted information (ciphertext) that no eavesdropper can read. We don’t really know what is happening to the information on the blue lines, but we don’t care, since it has already been encrypted.
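To make the layering concrete, here is a minimal sketch using Python’s standard ssl module to wrap an ordinary TCP socket in TLS; www.example.com stands in for the bank’s server.

    # Sketch: TLS layered over plain TCP with Python's standard library.
    import socket
    import ssl

    context = ssl.create_default_context()  # verifies certificates by default

    # TCP carries the bytes; TLS, layered on top, encrypts them so that
    # nothing along the route can read the traffic.
    with socket.create_connection(("www.example.com", 443)) as tcp_sock:
        with context.wrap_socket(tcp_sock,
                                 server_hostname="www.example.com") as tls:
            print("Negotiated:", tls.version())  # e.g. 'TLSv1.3'
            tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
            print(tls.recv(200))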

If you are not using an encrypted email client (most of the world does not), your message may still be encrypted in the same way as your bank information, but for email that is not end-to-end, because a third party is involved (the email recipient). Your message may be encrypted below the email client, just as your bank password was, but it will be decrypted before it reaches the email server, where it is stored unencrypted until the recipient asks for it from their email client. This means your email cannot be read by anyone eavesdropping somewhere on the Internet (what is called a man-in-the-middle attack), but it can be read by anyone with access to the file on the email server.
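Here is a sketch of that gap using Python’s standard smtplib; the server name and credentials are hypothetical placeholders. The starttls() call encrypts only the hop from this client to the mail server, which then holds the message in the clear.

    # Sketch: SMTP with STARTTLS. The message is encrypted in transit to
    # the server but stored there unencrypted. Hostname and credentials
    # below are hypothetical placeholders.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "alice@example.com"
    msg["Subject"] = "Postcard"
    msg.set_content("Readable by anyone with access to the mail server.")

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()                    # encrypt client-to-server hop
        server.login("me@example.com", "app-password")
        server.send_message(msg)             # server stores it as plaintext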

A program such as Pretty Good Privacy (PGP) can work with an email client to encrypt a message before your computer sends it to an email server. The message stays encrypted, even on the email server, until a similar program decrypts it at the email client on the other side. This gives you end-to-end encryption even when messages are stored on servers awaiting delivery: the message stays encrypted everywhere except at the sender’s and recipient’s email programs.

[Figure: end_to_end_02.gif, diagram of a PGP-encrypted message staying encrypted from the sender’s email client to the recipient’s]
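A hedged sketch of that step, using the third-party python-gnupg wrapper (again assuming GnuPG is installed and the recipient’s public key is already in the local keyring and trusted; the address is hypothetical):

    # Sketch: PGP-style end-to-end encryption of a message body before it
    # ever leaves your machine. Assumes the recipient's public key is in
    # the keyring; 'alice@example.com' is a hypothetical key ID.
    import gnupg

    gpg = gnupg.GPG()
    encrypted = gpg.encrypt("Meet me at noon.", "alice@example.com")

    if encrypted.ok:
        # This ASCII-armored ciphertext is what crosses the network and
        # sits on every mail server; only Alice's private key opens it.
        print(str(encrypted))
    else:
        print("Encryption failed:", encrypted.status)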

Other messaging systems, such as SMS or chat rooms, can be designed to work the same way.

Scott Greenfield at Simple Justice has a long-running debate with the Volokh Conspiracy’s Orin Kerr over how search and seizure laws should apply in the digital world. Briefly, Kerr advocates what he calls a “technology neutral” approach, in which we try to create a mapping between real world concepts and their digital analogs in order to apply centuries of real-world search and seizure jurisprudence to the digital world. Greenfield sees several problems with that:

Given that emails and other electronic communications are our future, and given that the means by which they are transmitted will never eliminate the involvement of third parties and the maintenance of copies on somebody’s equipment somewhere, are we satisfied to be arguing over the arrangement of deck chairs on the Titanic…?  Unless we develop a brand new approach to the future of communications, that does not rely on hard copy precedent and recognizes that people want to have a secure means of communication available to them in the future (and the future is a very long time), we’re watching the death of privacy in our own communications happen before our eyes.

This does not meet my reasonable expectation of privacy.  We need to rethink the approach, start to finish, to deal with the digital world and whether we will have any privacy whatsoever in our future communications.  How about a simple new rule: Emails are private communications and require a warrant upon probable cause as determined by a neutral magistrate?

I’m with Greenfield, mostly because I like the outcome of greater privacy. I think Kerr’s argument is going to win the day, however, because applying past law to present situations is what courts do. Radical change is the legislature’s job, and I just can’t see our current Congress giving a damn about our privacy.

To get an idea of the issues, check out Kerr’s latest post on a sticky issue regarding just how and when people have Fourth Amendment rights in an email message: Eleventh Circuit Decision Largely Eliminates Fourth Amendment Protection in E-Mail.

I can’t help feeling, however, that there’s a technological issue both sides are missing. Kerr just barely mentions it in passing (emphasis mine):

The Fourth Amendment ordinarily protects postal mail and packages during delivery.  The same rule applies to both government postal mail and private delivery companies like UPS:  As soon as the sender drops off the mail in the mailbox, both the sender and recipient enjoy Fourth Amendment protection in the contents of the mail during delivery.  When the mail is delivered to the recipient, the sender loses his Fourth Amendment protection: The Fourth Amendment rights are transferred solely to the recipient.  In practice, this works pretty simply:  Each party has Fourth Amendment protection in the mail when they’re in possession of it, and both the sender and receiver have Fourth Amendment rights in the contents of the mail when the postal service or private mail carrier is holding the mail on their mutual behalf.

I should be clear that there are exceptions to these rules.  For example, if a person sends a letter in what the Postal Service used to call “Fourth Class” mail — that is, mail that the Postal Service reserves the right to open — then it is not protected by the Fourth Amendment.  See, e.g.,   Also, the Fourth Amendment protection only applies to the contents of the communication, not the outside.   But the basic approach has governed postal mail privacy for a long time.

The highlighted clause in the above paragraph is what I’m talking about. When we send postal mail, we consider the contents private, but we expect lots of people to see the outside of the envelope. When it comes to email, it’s usually considered to have an envelope too, in that information controlling the delivery — most prominently the recipient email address — is not generally considered part of the message.

The problem is that this division of an email message into envelope and letter is a fiction perpetrated by our email software. An email message in transit — whether held on a server or traveling over a wire — is just an undifferentiated chunk of data. Once someone gets that data, both the envelope and the letter lie open to them. Unlike a real-world envelope, an email envelope doesn’t really protect anything.
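You can see the fiction in a few lines of Python: the addressing headers and the body arrive as one blob of bytes, and anyone holding the blob can read both. The addresses are hypothetical.

    # Sketch: an email in transit is one undifferentiated chunk of data;
    # the 'envelope' headers and the 'letter' body parse out of the same
    # bytes. Addresses are hypothetical.
    import email

    raw = (
        b"From: me@example.com\r\n"
        b"To: alice@example.com\r\n"
        b"Subject: Postcard\r\n"
        b"\r\n"
        b"The body travels in the same bytes as the address.\r\n"
    )

    msg = email.message_from_bytes(raw)
    print(msg["To"], "--", msg.get_payload())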

In other words, as Phil Zimmermann has pointed out, email isn’t like sending a letter; it’s like sending a postcard. And who in their right mind would have any expectation of privacy in a postcard?

I believe that, regardless of the law, if we want privacy in our email, we’re going to have to start sending our email in envelopes that actually protect the contents. Such envelopes already exist, and they’ve been around for years. In fact, the aforementioned Phil Zimmermann invented one of the most famous ones. But for some reason, we hardly ever use them.

I’m talking, of course, about encrypting our email.

The whole issue of just how and when government agents can access copies of email in the hands of a third party would be far less important if all they could get out of it was a meaningless block of encrypted data.

There was a time when cryptographic software was complex and available only to the military and large corporations. But Phil Zimmermann changed all that when he released his PGP software, which featured near-military-grade encryption. Such high-quality encryption took a lot of computing power, so people were reluctant to use it. Since then, however, computing power has become dirt-cheap, and encryption has become commonplace on the internet. Every time you access a web URL that begins with “https:” your communication with that site is protected by some of the most secure encryption ever created.

Yet, as a society, we don’t use encryption for email. I don’t understand it. And the fact that I don’t use it either doesn’t help me understand why it hasn’t caught on. I’ve had PGP on my computers for years, yet I doubt I’ve sent or received more than two dozen PGP-encrypted messages. And half of those were the equivalent of “I’ve just installed PGP, can you read this?”

Encryption can be a difficult technology, but we’ve solved so many other hard problems on the internet. Why is email encryption still so hard? Why don’t more email clients support it? Google recently announced that they’ve turned on Gmail encryption, but they’re only talking about the connection between your browser and their data center; end to end, the message is still in the clear. Why doesn’t Microsoft Outlook have built-in PGP encryption instead of a random collection of third-party certificates? Or if PGP isn’t the answer, why hasn’t a better answer emerged?

And why hasn’t more infrastructure emerged for distributing encryption keys? I have a PGP key, and you can send me encrypted email if you want. There’s a link to my public PGP key in the right-hand sidebar. But you’d have to have PGP installed, and you’d have to right-click and download the key and install it in your keyring. It’s weird: The HTML standard has a built-in tag to indicate an email address, but not a built-in way to pass along a public encryption key for that address.
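For what it’s worth, the manual dance is only a few lines if you script it; here’s a hedged sketch with python-gnupg, where the key URL is a hypothetical stand-in for a link like the one in my sidebar.

    # Sketch: fetch an ASCII-armored public key and install it in the
    # local keyring. The URL is a hypothetical placeholder; assumes GnuPG
    # and the third-party python-gnupg package are installed.
    import urllib.request
    import gnupg

    gpg = gnupg.GPG()
    key_data = urllib.request.urlopen(
        "https://example.com/my-public-key.asc").read().decode()

    result = gpg.import_keys(key_data)
    print("Imported", result.count, "key(s):", result.fingerprints)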

I’d think a social networking site like Facebook would be great for distributing public keys, but the built-in profiles don’t include encryption keys. There’s a third-party application called Keystore that can hold a PGP key, but it has only 35 active users.

It’s a mystery to me. Why don’t more people encrypt their email? If it were up to me, everything I sent would be encrypted just on principle. But it’s not up to me, because most of the people I email haven’t given me their PGP keys. Maybe everybody else is in the same boat. But then nobody was on Facebook before everybody was on Facebook. Nobody was on Twitter until everybody was on Twitter.

Nobody encrypts their email until…it’s been almost 20 years now, so I don’t know.