Some Geeks are Creepy

Google had a bit of an embarrassing security problem recently. An engineer did a very creepy thing and spied on teenagers' Google accounts while interacting with the teens online. Apparently no laws were broken, but Google, obviously, fired the engineer. Google's statement about the incident underwhelmed Greg Laden:

Sorry Google, we are not impressed. We’d like to see an independent investigation, possible prosecution, and who knows, maybe some new laws and regulations.

Passing new laws to make systems such as Gmail more secure is a bad idea.

Because users see technology and security as a black box, they are often blindsided when there is a failure or breach of trust. Greg is right that the response from Google is inadequate for most users. The response was fine for me. After all, I understand what happened and it didn't surprise me. The problem is that the response didn't address the trust that was broken with most of its users, who don't understand the systems inside that black box.

I dislike, however, the suggestion that new laws and regulations should be put in place to prevent such problems in the future. Making it illegal for system engineers to open data files without permission may decrease the number of incidents, but legal punishment alone is a weak deterrent and probably wouldn't stop the practice. Making it impossible for engineers to see data would mean a fundamental change in the way such systems operate. Security is always a trade-off against usability and expense. Having the government choose that balance point and force it upon Google and other service providers is the wrong response.

I’ve always tried to address such issues with user education. Users often have a black-box mentality and think that such issues are somehow automatically taken care of by the system. Users (especially managers) need to be aware of just how much power system administrators have.

I worked as a sysadmin at a college when email was first introduced to staff. I taught users the old IT adage that email was the electronic equivalent of postcards. Every employee of the post office who touches that postcard can, if they so desire, read the message. I also made it clear that I had access to anything they stored on the server (including email) and even conducted security workshops showing them how easy it was for people like me to defeat the simple encryption used in the software of the time. I tried very hard to build the trust with my users that I wouldn’t abuse that power, but wanted them to know what was possible.

Google lost some trust from its user base. The response from Google was “Why would anyone trust such a system?” In one respect they are right. Users should never have trusted such a system. I don’t, but that’s because I understand some of what is going on inside the black box after clicking the “send” button.

Perhaps Google should be leading an effort to upgrade the security of email and other messaging services, but by working with users rather than working under new government regulations. Email protocols were not designed for security. Of course the basic protocol of the Internet (TCP/IP) was not designed for secure transactions either, yet I’m confident that my online banking transactions are secure because of an end-to-end protocol called SSL/TLS*.

Users can already make their email secure using a similar system (called PGP) if they wish, but few people know how. Perhaps Google should lead the effort by streamlining the user interface and popularizing such a system. Google would need to educate the users and work with them to figure out what level of security is needed and how much effort users would be willing to put into such a system to make it work. Users may have to maintain special keys, for example, to communicate with recipients on different email systems. While Google can make that process easier, it will still require some effort on the user end to gain that extra security. There is always a tradeoff.

Pushback against such encryption, however, would come from governments. Governments around the world, for example, freaked out once they realized they couldn't snoop on people's Blackberry accounts. The United States government fought PGP when it was first released, claiming the technology was too dangerous to allow out of the country. (Because of our government's insistence that PGP not be provided on the Internet, I had to download my first copy from an overseas server.) The US government would certainly resist any pervasive end-to-end technology that would prevent them from reading email.

Government involvement in this issue seems like a bad idea. It would force providers to choose a level of security that people may not need once they understand that email is just a digital postcard. Any government solution would also build in a government backdoor allowing them access to any secure system. In this case I really would like the government to not get involved.

* I’ll provide a brief introduction to the concept of end-to-end encryption below. Anyone not interested in how this stuff works should stop reading now.

Transactions can be made secure on an inherently insecure system by introducing an additional protocol (set of rules) above the insecure layer. That layer provides a "session" that encrypts information before it reaches the insecure protocol and decrypts it only after the data emerges from the insecure protocol at the other end. Hence it's an "end-to-end" system and doesn't rely on trusting the devices in the middle of the route taken by the data.
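The layering idea can be sketched in a few lines of code. This is a toy illustration, not a real protocol: a one-time pad applied only at the two endpoints, so every hop in between sees nothing but ciphertext. The key is assumed to have been shared between the endpoints in advance.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Sender's end: XOR each byte with the shared key (a one-time pad)."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Receiver's end: XOR again with the same key to recover the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"transfer $100"
key = secrets.token_bytes(len(message))  # random pad, as long as the message

ciphertext = encrypt(message, key)           # encrypted before leaving the sender
# ... ciphertext now crosses any number of untrusted hops unchanged ...
assert decrypt(ciphertext, key) == message   # only the far end can recover it
```

The insecure layer in the middle never needs to change; all the security lives at the two ends.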

For example, the Internet uses an insecure protocol called TCP/IP to get information from one computer to another, let's say from your home computer to your bank. Rather than redesigning the insecure protocol, it is better to add an end-to-end encryption/decryption system "above" it. When your computer talks to the computer at the bank, it uses a system called Secure Sockets Layer / Transport Layer Security (SSL/TLS) to accomplish this.


The green lines in the diagram represent information that can be read, since it is not encrypted (plaintext). The red lines represent the encrypted information (ciphertext) that no one can read. We don't really know what is happening to the information on the blue lines, but we don't care, since it has already been encrypted.

If you are not using an encrypting email client (most of the world does not), your message may still be encrypted in transit in the same way as your bank information, but that is not end-to-end for an email message because a third party (the email server) sits between you and the recipient. Your message may be encrypted below the email client, as your bank password was, but it will be decrypted before it reaches the email server, where it is stored unencrypted until the recipient's email client asks for it. This means your email cannot be read by anyone eavesdropping somewhere on the Internet (what is called a man-in-the-middle attack), but it can be read by anyone with access to the file on the email server.

A program such as Pretty Good Privacy (PGP) can work with an email client to encrypt a message before your computer sends it to an email server. Your message will stay encrypted, even on the email server, until a similar program decrypts the message at the email client on the other side. This allows for end-to-end encryption even when messages are stored on servers awaiting delivery and the messages will stay encrypted in all locations other than at the sender’s and recipient’s email programs.
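The difference between the two storage situations can be shown in a toy sketch. Real PGP uses public-key cryptography; here a pre-shared pad stands in so the example stays self-contained, and the "mail server" is just a dictionary. The point is what the server holds at rest:

```python
import secrets

mail_server = {}  # the server's message store: it only ever holds ciphertext

def xor(data: bytes, key: bytes) -> bytes:
    """Toy cipher (one-time pad); PGP would use the recipient's public key."""
    return bytes(a ^ b for a, b in zip(data, key))

# Sender and recipient share a key out of band (PGP exchanges key pairs instead).
shared_key = secrets.token_bytes(64)

# Sender encrypts *before* handing the message to the mail server.
message = b"meet me at noon"
mail_server["alice->bob"] = xor(message, shared_key)

# The recipient decrypts only at their own email client.
assert xor(mail_server["alice->bob"], shared_key) == message
```

Anyone with access to the server's files, including its system administrators, sees only `mail_server["alice->bob"]`'s ciphertext; without the key it is unreadable.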


Other messaging systems, such as SMS or chat rooms, can be designed to work the same way.
