I don’t know anything else about Representative Ted Lieu, but I do like what he said after hearing law enforcement types testify that the American people not be allowed to use strong encryption:

It’s a fundamental misunderstanding of the problem. Why do you think Apple and Google are doing this? It’s because the public is demanding it. People like me: privacy advocates. A public does not want an out-of-control surveillance state. It is the public that is asking for this. Apple and Google didn’t do this because they thought they would make less money. This is a private sector response to government overreach.

I’ve made a similar point myself.

Then you make another statement that somehow these companies are not credible because they collect private data. Here’s the difference: Apple and Google don’t have coercive power. District attorneys do, the FBI does, the NSA does, and to me it’s very simple to draw a privacy balance when it comes to law enforcement and privacy: just follow the damn Constitution.

And because the NSA didn’t do that and other law enforcement agencies didn’t do that, you’re seeing a vast public reaction to this. Because the NSA, your colleagues, have essentially violated the Fourth Amendment rights of every American citizen for years by seizing all of our phone records, by collecting our Internet traffic, that is now spilling over to other aspects of law enforcement. And if you want to get this fixed, I suggest you write to NSA: the FBI should tell the NSA, stop violating our rights. And then maybe you might have much more of the public on the side of supporting what law enforcement is asking for.

More than that, you need to create an accountable process, and you need to create a credible deterrent to the misuse of surveillance powers. It has to be something more transparent and believable than the usual promises of internal safeguards that we have no way to evaluate. Monitor the process, report violations to Congress, send violators to Leavenworth. If you’re not willing to punish law enforcement officers for violating our privacy, then your claim to respect our privacy isn’t for real.

Then let me just conclude by saying I do agree with law enforcement that we live in a dangerous world. And that’s why our founders put in the Constitution of the United States—that’s why they put in the Fourth Amendment. Because they understand that an Orwellian overreaching federal government is one of the most dangerous things that this world can have.


Ever since Edward Snowden told us all about the NSA’s rampant spying on Americans, I’ve been meaning to convert Windypundit to an encrypted site, and I think I finally did it. If all is working, you should be seeing “https:” at the front of the URL up there in the address bar.

(You might not see the little lock symbol, however, depending on your browser. That’s because the images in my Amazon ads widget are being served unencrypted by Amazon. In theory, those images could be intercepted and altered in transit, so your browser is letting you know that you’re looking at mixed content, some of which is not strictly secure. Apparently Amazon ads are infamous for ruining secure pages this way.)

It’s not that I need the security. The whole point of a blog like this is to share everything on the site with literally anyone who wants to see it. In fact, I’ve gone through rather a lot of trouble to make sure that happens. Ask the server for a page, and ye shall receive it.

My reason for adding encryption is really just to make a small contribution toward gumming up the workings of the surveillance state. This page traveled to your browser as one more secure data stream on the net — random bits for all practical purposes, except to you and me. There’s nothing worth spying on here, but only you and I can be sure of that. It’s one more thing that intelligence and law enforcement agencies can’t read, one more thing to waste their time, one more thing to discourage them from trying.

Encryption disguises the internet’s valuable data in the hiss of (pseudo-) random noise. Spying on the internet takes work, and that work pays off because the data is there to find. But it doesn’t have to be that way. We can make it harder for them to spy on us, and that will make it less worthwhile for them to try.
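The “hiss of (pseudo-)random noise” point can be made concrete with a toy sketch. This is purely illustrative, not real cryptography — it’s a homemade SHA-256 keystream XOR cipher, something a real site would never use in place of TLS — but it shows the statistical effect: the same bytes that are obviously structured English before encryption are indistinguishable from random noise afterward.

```python
# Toy illustration (NOT real cryptography): a SHA-256-based stream
# cipher, to show that ciphertext looks like uniform random noise
# while the plaintext it hides is highly structured.
import hashlib
import math
from collections import Counter

def keystream(key: bytes, length: int) -> bytes:
    """Generate `length` pseudo-random bytes from `key` in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt by XORing with the keystream (symmetric)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def entropy_bits(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 means perfectly uniform)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

plaintext = b"Nothing worth spying on here. " * 200
ciphertext = xor_cipher(b"shared secret", plaintext)

print(entropy_bits(plaintext))   # low: the text is repetitive English
print(entropy_bits(ciphertext))  # close to 8.0: looks like noise
assert xor_cipher(b"shared secret", ciphertext) == plaintext
```

To an eavesdropper who lacks the key, the ciphertext carries no visible structure to latch onto — which is the whole point of being one more opaque stream among millions.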

Be the noise.

Illinois recently passed an anti-bullying law that some school districts are interpreting as granting some disturbing powers, according to a story at Motherboard by Jason Koebler:

School districts in Illinois are telling parents that a new law may require school officials to demand the social media passwords of students if they are suspected in cyberbullying cases or are otherwise suspected of breaking school rules.

The law, passed as HB 5707, apparently does not specifically say schools can demand student social media passwords, but school districts appear to be acting under the belief that it interacts with a bill from the previous year.

That law states that elementary and secondary schools must notify parents if they plan to ask for a password, and that it can be asked for if a student violates policy. The cyberbullying law codifies the idea that Facebook harassment is a violation of code policy, which is why you see these letters popping up.

I only glanced at the legalese briefly, but it seems to me that most of the violence to student privacy is done by the earlier law, with the more recent bill only adding the bullying element.

Edwin C. Yohnka, Director of Communications and Public Policy for the Illinois ACLU, tells me the anti-bullying bill (which his organization supported) doesn’t actually give schools this power:

The story about the single downstate school district was, of course, disturbing. That said, that school district badly misread their authority under the new Illinois law. As the sponsor of the measure made clear in several public statements this week, the intent of the law was never to permit school districts to gather social media passwords for students. Those social media accounts would, as one might expect, include lots of personal information that the school district should not be accessing.

That hasn’t stopped Triad Community Schools Unit District #2 from putting on the jackboots:

Leigh Lewis, superintendent of the Triad district, told me that if a student refuses to cooperate, the district could presumably press criminal charges.

“If we’re investigating any discipline having to do with social media, then we have the right to ask for those passwords,” she said.

“I would imagine that turning it over to the police would certainly be one way to go. If they didn’t turn over the password, we would call our district attorneys because they would be in violation of the law,” she added. “That would only be in some cases—we’d certainly look at the facts and see what we’re dealing with before we make the decision.”

This is wrong on so many levels.

If a student is using social media to send bullying messages, investigators can read the messages from the victim’s account. And if they’re private messages that aren’t part of the harassment of the victim, then officious meddlers like Leigh Lewis have no business reading them without a warrant.

I don’t know why they think they can treat social media differently from other communication methods. If students used the U.S. Mail to send letters (as your grandparents did in olden times) schools wouldn’t be allowed to read them. Nor would they be allowed to tap students’ phones without a warrant. But somehow they think social media is different. Somehow they think digital communication is less protected. (And legally speaking, it probably is. Which is something we ought to change.)

I’ve been living life in the digital world for over thirty years, and the idea that someone could be forced to give their password to a total stranger feels like an incredible violation. The gall of these bastards. They say they’re just interested in bullying, but they want to see everything.

To a heavy user of a social media site like Facebook, letting a stranger use your account is like letting a stranger into your home. It’s like letting a stranger rifle through your wallet, dump out the contents of your purse, and paw through your underwear drawer. They’ll be able to read every message your child sent to everyone they know. They’ll also get to read every private thing that your child’s family and friends shared with them in confidence, thus violating the privacy of innocent bystanders.

With the password, school district officials could gain access to anything any of you might share over the social network — private thoughts your children shared with friends, information about medical or psychological problems of family members, titillating details of your child’s sexual experimentation, what you really think of some of your kid’s teachers, your off-the-cuff comments about your boss, a photo of that time you let them drink a beer with you, passwords to other computer systems that someone sent in a message — the list goes on and on.

And once they have the password, they will be able to assume your child’s virtual identity. (It’s probably not legal, but who would stop them?) They can delete stuff they don’t like, they can interrogate your child’s friends in the guise of your child, they can ask family members for sensitive information. Furthermore, major social media credentials often serve to control access to other web sites (e.g. “Login with your Facebook account”), giving them God-only-knows how much access into your child’s life.

I’m seething with anger over the depth of this violation of privacy. I don’t have kids, but my gut reaction is that you should respond to a school’s password demand as if they were demanding to see nude pictures of your child. If that means you punch them hard in the face again and again until they go down, and then kick them in the ribs until they cough up blood, all while reciting Jules’s “Lay my vengeance upon thee!” speech from Pulp Fiction…that would be wrong. Don’t do that. It’s a very bad idea. (But have I mentioned that this makes me angry?)

My gut also says you should tell your kid to borrow a legal strategy from Saul Goodman and repeat this key phrase: “That’s not my social media account.” Or say they can’t remember the password. Or give up the password and then change it a minute later “for security reasons.” Or give control of the account to someone outside the jurisdiction. Or…

Sadly, those are also bad ideas. Technical hacks of the law don’t work very well in the real world legal system. They may sound clever, but judges don’t have much appreciation for clever. They tend to see it as contempt or obstruction. Don’t do anything clever.

But if you have the resources, and some school authorities try to pull this shit, don’t let them get away with it. If some school employee tries this crap on you, call a lawyer. There are serious Constitutional issues here, so you might be able to get a public interest law firm to take it on for free.

But…if this makes you as angry as it does me, and you’ve just got to do something…I think there’s a pretty good argument that people with no respect for your privacy have no grounds to complain about theirs. If some school official forces your child to give up their privacy, don’t keep it a secret. Call them out on that shit. Name names. Give out contact information. To get you started, the contact page for the Triad school district is here.

(Hat tip: Robby Soave.)

So a couple of days ago I was explaining why Orin Kerr was wrong about Apple’s new policy of rendering themselves unable to encrypt customers’ iPhones, and in passing I linked with some disdain to a piece by former FBI Assistant Director Ronald T. Hosko, who was claiming, of course, that the new policy would help the bad guys.

Yesterday, however, Hosko did something that none of the anti-privacy alarmists at the NSA have ever been able to do: He gave an actual example of someone who would have been harmed by Apple’s policy. He did this in a post for the Washington Post‘s blog PostEverything titled something like “I helped save a kidnapped man from getting killed. With Apple’s new encryption rules, we never would have found him.”

It was a dramatic way to make his point. It’s one thing for people like me to go on about abstract concepts like privacy rights, but I don’t have the burden of helping save the lives of actual kidnap victims. In the face of Hosko’s story, the privacy argument becomes a lot harder to make. I suppose if I wrote a full response to Hosko’s piece, I would have to reiterate the dangers of a brittle security system, I would talk about the horrors of living in an all-seeing totalitarian police state, and I would point out that law enforcement officers are not free of trustworthiness issues.

The trustworthiness problem is especially relevant. You may notice I didn’t give you a link to Hosko’s article. That’s because in the time since it was originally posted, the title has been changed to “Apple and Google’s new encryption rules would make law enforcement’s job much harder,” and this note has been added at the bottom:

Editor’s note: This story incorrectly stated that Apple and Google’s new encryption rules would have hindered law enforcement’s ability to rescue the kidnap victim in Wake Forest, N.C. This is not the case. The piece has been corrected.

As near as I can tell from the rewrite, Hosko was a little confused, and it turns out the FBI got all the information they needed from the carrier, not the phone itself.

So, maybe I’ll write that longer response some day. But for now, I think I’ll just take this as an illustration of why I’m not really ready to trust these people when they say they need access to my personal data.

Apple has announced that with the new iOS 8 release they are no longer able to comply with law enforcement warrants to decrypt the contents of iPhones and iPads.

On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.

As soon as I heard about this, I figured it would provoke outrage from the usual quarters, invoking the standard list of villains. Terrorists! Drug dealers! Child pornographers! Oh My! Here’s the first example I found:

Ronald T. Hosko, the former head of the FBI’s criminal investigative division, called the move by Apple “problematic,” saying it will contribute to the steady decrease of law enforcement’s ability to collect key evidence — to solve crimes and prevent them. The agency long has publicly worried about the “going dark” problem, in which the rising use of encryption across a range of services has undermined government’s ability to conduct surveillance, even when it is legally authorized.

“Our ability to act on data that does exist . . . is critical to our success,” Hosko said. He suggested that it would take a major event, such as a terrorist attack, to cause the pendulum to swing back toward giving authorities access to a broad range of digital information.

So Hosko went with “terrorists.” I will leave finding examples mentioning drug dealers and child pornographers as an exercise for the reader.

I’m not too concerned about the general outrage (yet), but I do want to address the concerns raised by Orin Kerr, because they are more thoughtful than the usual law-and-order hysterics, and because they are wrong and dangerous to civil liberties.

If I understand how it works, the only time the new design matters is when the government has a search warrant, signed by a judge, based on a finding of probable cause. Under the old operating system, Apple could execute a lawful warrant and give law enforcement the data on the phone. Under the new operating system, that warrant is a nullity. It’s just a nice piece of paper with a judge’s signature. Because Apple demands a warrant to decrypt a phone when it is capable of doing so, the only time Apple’s inability to do that makes a difference is when the government has a valid warrant. The policy switch doesn’t stop hackers, trespassers, or rogue agents. It only stops lawful investigations with lawful warrants.

That’s just not true. I think Orin is probably an honorable guy, but he’s repeating a lie that a lot of people would like you to believe. The truth is that anything that Apple does to protect our data from the government also protects our data from malicious people inside Apple itself. After all, in order for Apple to be able to decrypt our iPhone data for the government, Apple has to be able to decrypt our iPhone data.

In order to do that, Apple has to have people somewhere within its organization who have access to software and cryptography keys that can crack iPhone encryption, which makes it possible that someday an employee could walk out of Apple headquarters carrying a MacBook full of software that can break the security on half a billion iPhones.

In addition, Apple having the ability to crack its phones’ security creates a brittle break of iPhone security. It’s like putting an elaborate $1000 electronic lock on every door in an office building and keeping the keycard programmer in the building superintendent’s office. Anyone with the burglary skills to break into the super’s office can ransack the rest of the building with ease. And anyone who gets a hold of Apple’s iPhone cracker can read every iPhone in the world.
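Here’s a toy model of that brittleness — my own illustration, not Apple’s actual design. In the escrowed design, one breach of the “superintendent’s office” yields every key at once; in a passcode-only design, there is no central store to steal, because each key exists only when its owner types the passcode.

```python
# Toy model (NOT Apple's actual design) of why a vendor-held key
# store is a brittle single point of failure.
import hashlib
import secrets

def derive_key(passcode: str, salt: bytes) -> bytes:
    """Stretch a passcode into a 32-byte key; exists only when the user types it."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

class EscrowedFleet:
    """Vendor keeps a copy of every device key -- the 'keycard programmer'."""
    def __init__(self):
        self.escrow = {}  # device_id -> key, all in one stealable place

    def provision(self, device_id: str) -> bytes:
        key = secrets.token_bytes(32)
        self.escrow[device_id] = key
        return key

class PasscodeOnlyFleet:
    """No central store: the key is re-derived from (passcode, salt) on demand."""
    def provision(self, passcode: str) -> tuple[bytes, bytes]:
        salt = secrets.token_bytes(16)
        return derive_key(passcode, salt), salt

fleet = EscrowedFleet()
device_keys = [fleet.provision(f"device-{i}") for i in range(1000)]

# One breach of the escrow store recovers every device key at once:
stolen = dict(fleet.escrow)
assert all(stolen[f"device-{i}"] == device_keys[i] for i in range(1000))

# Without escrow, an attacker must extract each user's passcode individually:
user_key, salt = PasscodeOnlyFleet().provision("hunter2")
assert derive_key("hunter2", salt) == user_key
```

The design choice is the whole argument: the attacker’s cost in the first model is one break-in; in the second, it scales with the number of victims.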

That sort of high-value target is very tempting for hackers. And when I say hackers, remember that it’s not just rebellious college kids working out of their dorm. Commercial hacking is a serious criminal enterprise, run by the same kinds of people that run drug smuggling rings and extortion rackets. Making matters worse are the various national intelligence agencies in places like Russia, China, and Iran that might find it worthwhile to spend tens of millions of dollars on a technical and human intelligence program to compromise iPhone security, and the security of everything we can reach from our iPhones. And since plenty of foreigners use iPhones, I wouldn’t be surprised if the NSA has already stolen the keys from Apple.

Apple’s design change [is] one it is legally authorized to make, to be clear. Apple can’t intentionally obstruct justice in a specific case, but it is generally up to Apple to design its operating system as it pleases. So it’s lawful on Apple’s part. But here’s the question to consider: How is the public interest served by a policy that only thwarts lawful search warrants?

I think I’ve explained quite well how that public interest is served, because Apple’s changes don’t just thwart lawful search warrants, they also thwart malicious hacking and bad actors inside Apple. Once you remove this false assumption, Orin Kerr’s post falls apart.

Orin’s argument worries me for another reason, however, because he frames the issue in a way that is dangerous for the future of privacy. For example, at one point, this is how he responds to the argument that there are technical alternatives available to law enforcement even with Apple’s changes:

These possibilities may somewhat limit the impact of Apple’s new policy. But I don’t see how they answer the key question of what’s the public interest in thwarting valid warrants. After all, these options also exist under the old operating system when Apple can comply with a warrant to unlock the phone. And while the alternatives may work in some cases, they won’t work in other cases. And that brings us back to how it’s in the public interest to thwart search warrants in those cases when the alternatives won’t work. I’d be very interested in the answer to that question from defenders of Apple’s policy. And I’d especially like to hear an answer from Apple’s General Counsel, Bruce Sewell.

You know what? I don’t give a damn what Apple thinks. Or their general counsel. The data stored on my phone isn’t encrypted because Apple wants it encrypted. It’s encrypted because I want it encrypted. I chose this phone, and I chose to use an operating system that encrypts my data. The reason Apple can’t decrypt my data is because I installed an operating system that doesn’t allow them to.

I’m writing this post on a couple of my computers that run versions of Microsoft Windows. Unsurprisingly, Apple can’t decrypt the data on these computers either. That this operating system software is from Microsoft rather than Apple is beside the point. The fact is that Apple can’t decrypt the data on these computers because I’ve chosen to use software that doesn’t allow them to. The same would be true if I was posting from my iPhone. That Apple wrote the software doesn’t change my decision to encrypt.

This touches on another thing that Orin seems to miss, which is that Apple’s new policy is not particularly unusual. In situations that demand high security, it’s kind of the industry standard.

I’ve been using the encryption features in Microsoft Windows for years, and Microsoft makes it very clear that if I lose the pass code for my data, not even Microsoft can recover it. I created the encryption key, which is only stored on my computer, and I created the password that protects the key, which is only stored in my brain. Anyone that needs data on my computer has to go through me. (Actually, the practical implementation of this system has a few cracks, so it’s not quite that secure, but I don’t think that affects my argument. Neither does the possibility that the NSA has secretly compromised the algorithm.)

Microsoft is not the only player in Windows encryption. Symantec offers various encryption products, and there are off-brand tools like DiskCryptor and TrueCrypt (if it ever really comes back to life). You could also switch to Linux, which has several distributions that include whole-disk encryption. You can also find software to encrypt individual documents and databases.

If you use another company to store your data in the cloud, you can use encryption to ensure that they can’t read what they’re storing. Your computer would just encrypt files before uploading them, and then decrypt them when retrieving them. For example, EMC’s Mozy backup gives you the option of letting the service do the decryption or doing it yourself with a private key, as do Jungle Disk and Code42 Software’s Crashplan encrypted backup. Dropbox doesn’t offer client-side encryption, so they can read the data you send them, but there are third-party tools such as SafeMonk that run on your computer and encrypt the data before Dropbox ever sees it.
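The encrypt-before-upload pattern these services use can be sketched like this. The key-derivation call is real Python, but the XOR stream cipher is a toy stand-in for what a real tool would do with AES-GCM; the point is only the shape of the protocol: the key comes from a passphrase the user alone knows, so the provider only ever holds ciphertext.

```python
# Sketch of client-side encryption: derive a key from a passphrase,
# encrypt locally, hand the provider nothing but ciphertext.
# (Toy XOR stream cipher for illustration; use AES-GCM in real life.)
import hashlib

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Slow KDF so the passphrase can't be cheaply brute-forced.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def stream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a keystream derived from `key` (symmetric)."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(a ^ b for a, b in zip(chunk, pad))
    return bytes(out)

cloud = {}  # stands in for the storage provider
salt = b"per-file-salt-16"
key = derive_key("correct horse battery staple", salt)

cloud["backup.txt"] = stream_xor(key, b"my private notes")  # "upload"

# The provider holds only noise; the user alone can reverse it:
assert cloud["backup.txt"] != b"my private notes"
assert stream_xor(key, cloud["backup.txt"]) == b"my private notes"
```

Notice the provider never sees the passphrase, the salt-derived key, or the plaintext; a subpoena served on “the cloud” yields nothing readable.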

I guess the point I’m trying to make is that it’s not Apple’s data, and it’s not Apple that makes the decision to encrypt the data. It’s our data, and we decide whether to encrypt it or not. Apple is just one of several companies that supply the tools we use to do that.

Orin Kerr’s viewpoint seems to elevate Apple’s participation in the process, to treat Apple as somehow responsible for preserving law enforcement access to data that is not even in its possession. That’s not a model I’m comfortable with as the basis for legislation. I don’t want to normalize the idea that the providers of our information tools are obligated to subvert those tools because it makes the government’s job easier.

Orin suggests that might be a possibility:

The most obvious option would be to follow the example of CALEA and E911 regulations by requiring cellular phone manufacturers to have a technical means to bypass passcodes on cellular phones. In effect, Congress could reverse Apple’s policy change by mandating that phones be designed to have this functionality. That would restore the traditional warrant requirement.

CALEA is bad enough in requiring carriers to have the technological ability in place to allow law enforcement agencies to tap telephone and internet traffic traversing the carriers’ networks. What Orin is suggesting (although not advocating) goes far beyond that, by requiring computer systems manufacturers to intentionally subvert their customers’ information security, even though, unlike the CALEA scenario, the customer’s information never leaves the customer’s hands. It seems like a slippery slope that could eventually lead to a requirement for every electronic device in our lives to be able to spy on us at the government’s request.

As for restoring the “traditional warrant requirement,” my understanding is that a warrant allows the government to intrude on someone’s privacy to gather evidence. But can a traditional warrant be used to compel a third party to intrude on someone’s privacy? If the government gets a warrant to plant a bug to hear what my wife and I talk about at home, they might ask a locksmith to help them break into my house, but could they use that warrant to force the locksmith to help them? If they want to test my blood for drugs, can they use a warrant to force the nearest doctor to draw my blood and the nearest lab to test it? If they want to surveil a suspect, can a judge order me to grab my camera and take pictures of him?

(For that matter, I don’t quite understand how the government can force Apple to decrypt a phone. I’m guessing that it’s because Apple has some special cryptographic key that makes it easier, and it’s less destructive to privacy for Apple to decrypt a phone than for Apple to turn that key over to the government, but I could be totally wrong.)

Frankly, I’m not convinced that the “traditional warrant requirement” is applicable to encrypted data. Search warrants have always been about the government’s authority to search, but given enough manpower, equipment, and time, the government’s physical ability to conduct the search has never been an issue. The agents of law enforcement have always been able to knock down every door, rip open every wall, and break every box.

Until now.

Modern strong encryption is effectively unbreakable with current technology. Securely encrypted data can only be read by someone who has the decryption key. And if every copy of the decryption key is destroyed, nobody will ever be able to read that data again. (Not using current technology. Not before the stars burn out.) It’s like some sort of science fiction scenario where the data is sealed off in another dimension.
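“Not before the stars burn out” is not hyperbole. Here’s the back-of-the-envelope arithmetic, under a deliberately generous assumed guessing budget (the 10^18 guesses-per-second figure is my assumption for illustration, not a measured capability):

```python
# Brute-forcing a 256-bit key: even a planet-scale effort cannot
# enumerate the keyspace in any meaningful timeframe.
keyspace = 2 ** 256                    # possible 256-bit keys
guesses_per_second = 10 ** 18          # assumed planetary guessing budget
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years:.2e} years")            # on the order of 10^51 years
```

For comparison, the universe is on the order of 10^10 years old, so searching the keyspace takes something like 10^41 universe-lifetimes. The data really is sealed off unless you hold the key.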

So what should happen to the government’s authority to break every box when someone invents an unbreakable box? It’s not clear to me that the solution is, or should be, requiring the makers of unbreakable boxes to build in secret levers to open them.

(Hat tip: Scott Greenfield)

So, if I’m following the story right, it appears that someone managed to hack into a bunch of cell phones or iCloud accounts or something belonging to celebrities and find a bunch of nude photos, which they then apparently dumped on the internet.

I haven’t looked at any of these photos because (1) I already know how to find all the nude photos on the Internet that I’ll ever want to see, (2) even if I had the hots specifically for one of those actresses, the nude photos were supposed to be private, so I’d feel weird looking at them, and (3) even though I’m blogging about the photos now, I have no news-related reason to take a look at them.

The responses to the nude photo dump have been pretty typical. In particular, some people have pointed out that the surest way to avoid having nude pictures of yourself on the internet is to not take nude pictures of yourself. That drew the usual accusations of blaming the victim instead of the hacker who stole the pictures. This is pretty much the same set of responses we’ve seen with revenge porn. It’s a weird dynamic that I have trouble following, and I’m not quite sure what to say about it.

What I will say, however, to all my friends, family, acquaintances, blog readers, and Twitter followers, is that if any of you have some nude photos of yourself, and if those photos somehow get posted on the Internet, I won’t respect you any less.

For one thing, my libertarian leanings do not just apply to public policy; I’m also somewhat libertarian in my approach to culture and society. I’m really not going to get all judgmental about whatever you and another consenting adult (or two or three) choose to do in private or for willing viewers.

Also, and this may actually be the more important factor, I’ve been on the Internet a long, long time, and by now I’ve seen rather a lot of pictures of naked people — models, actors and actresses, porn stars — doing all kinds of different things. I mean, I was downloading GIFs of naked ladies in the early 1990s, before the World Wide Web was invented, and way before most of you ever heard of the Internet.

I say this not to highlight the shallowness of my life, but to explain why naked people on the Internet…just don’t seem like a big deal. Also, I’ve recently become interested in the sex worker rights movement, and I follow a bunch of sex work activists — strippers, prostitutes, dominatrixes, porn stars — many of whom post nude or semi-nude pictures all the time.

The point is, if I started seeing pictures of most of you naked…I probably wouldn’t even notice.

I’m on the record as not having been impressed with Bradley Manning for turning over thousands of classified diplomatic messages to Wikileaks:

Then, of course, there’s the anonymous asshole who was trusted with access to all this stuff and decided to leak it. Leaking this stuff might have been justified if it contained the shocking truth behind the Kennedy assassination, or proof that 9/11 really was an inside job, or the alien autopsy video, but most of this stuff is routine diplomatic traffic.

Look, whoever you are, you took an oath to keep this stuff secret. People trusted you. Then you broke your oath and leaked it anyway. That ain’t cool.

(At the time, Manning had been mentioned as a suspect, but it wasn’t clear to me that he was the source of the Wikileaks material, so I didn’t use his name.)

To the best of my knowledge — which isn’t very great — there’s still nothing that Manning gave to Wikileaks that justifies his breach of trust. The leaked messages revealed a lot of details about our foreign policy, and about what our diplomats thought of other countries’ rulers, but little of it seemed directly relevant to how we Americans live our lives. (Since I haven’t been following the story very closely, however, I’m willing to have my mind changed about that, and it wouldn’t take anything as outrageous as my hyperbolic examples.)

Of course, even if his disclosures were unjustified, he still doesn’t deserve the atrocious treatment he was reportedly subjected to, especially since most of it took place while he was still innocent until proven guilty. But that’s another story…

Edward Snowden took an oath too, and he broke it as well, but my initial impression is that he did it for a justifiable reason. It appears the government has been spying on us far more than we thought, and even if it all does turn out to be “legal,” it’s still something that affects us all. We have a need to know.

Heck, this could be something that affects me. Just off the top of my head, I’m friends on Facebook with Mirriam Seddiq, and I follow her on Twitter. Given that she’s an immigrant from Afghanistan and an immigration lawyer, I wouldn’t be surprised to learn that she’s brushed up against somebody the NSA has taken a look at. And Snowden was claiming that PRISM looked at friends of friends of targets…

Over at a public defender, Gideon is talking about a disturbing new ruling from the 6th Circuit:

Law enforcement and cops have been using cell tower data to pinpoint the location of a cell phone (and by extension its user) for a few years now, but this was mostly done post-hoc, to prove that a particular individual was at a particular location at the time of the crime. I’m also fairly certain that prosecutors and cops have been getting warrants to track cell phones in order to locate an individual they are chasing.

But can all of this be done without a warrant? Is there a reasonable expectation of privacy in the location signal of your phone? Is this something that society today is prepared to accept? That one doesn’t generally expect someone to know where you are based on the contact your cellphone has (covertly and unbeknownst to you) with a cell phone tower and the cell phone company?

That’s what the 6th Circuit just said in a decision [PDF] released two days ago: that there is no reasonable expectation of privacy in that information and thus, no need to get a warrant in order to conduct surveillance.

I didn’t realize it at first, but on reflection, that’s one of the most frightening anti-privacy decisions I’ve ever heard about, because it’s about data. I mean, sure it’s a bad thing that cops can, for example, stop and frisk people for no good reason, but at least stop-and-frisk is limited by the supply of police officers. No such limits apply to location queries on the cellular networks: If they can track one of us without a warrant, then there’s nothing to stop them from tracking all of us.

(Don’t fall for the claim that there would be “too much data” to track us all. That’s a familiar argument, and it’s wrong. We live in an age of cheap and ubiquitous computing. You can rent a cloud of processors with a credit card, and fault-tolerant distributed database software for problems of this size is available for free.)
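To see just how manageable the problem is, here’s a back-of-envelope sketch in Python. Every number here is a rough assumption of mine for illustration, not a measured figure:

```python
# Back-of-envelope check of the "too much data" claim.
# All numbers are illustrative assumptions, not measured figures.
phones = 300_000_000      # assumed count of active US phones
pings_per_day = 24 * 60   # assume one location record per minute
bytes_per_record = 32     # phone ID, tower ID, timestamp, coordinates

daily_tb = phones * pings_per_day * bytes_per_record / 1e12
yearly_pb = daily_tb * 365 / 1000
print(f"~{daily_tb:.0f} TB/day, ~{yearly_pb:.1f} PB/year")
```

Under those assumptions, logging the location of every phone in the country runs to a dozen or so terabytes a day, a few petabytes a year: a large but entirely routine amount of storage for a modern data center.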

The court basically said that following a cell phone is no different than using a dog to track someone’s scent, so no warrant is necessary. Gideon quotes the Cato Institute’s response to that:

But it does not follow at all. “What a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection,” the Supreme Court explained in the seminal case of Katz v. United States, “But what he seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” Any member of the public can buy a dog and follow a scent. Any member of the public can view and copy down a license plate number. Any member of the public can view the external paint job of a car. But any member of the public cannot just track the GPS signal of a random cell phone–and if they could, most of us would be extremely wary about carrying cell phones. Unlike all these other examples, GPS tracking as employed here depends crucially on the ability of police to invoke state authority–a seemingly salient distinction the court fails to take any note of.

That’s a good explanation of my own objection to these kinds of searches. A lot of recent weakening of our 4th Amendment rights has been justified on the grounds that we have no expectation of privacy in information that we store with third parties. But it’s the government that is using its unique powers to force those third parties to turn over that information. And when the government uses those powers, it seems to me that should trigger Constitutional protections. That’s why we have rights: to limit government power over us.

Everyone is all aflutter about the news that Steve Jobs knows where you have been. Since that Earth-shattering bit of news, a lot of bloggers and reporters have pointed out how other software within the iPhone can do the same thing without the user realizing it, and how the Android devices do this as well. Greg Laden has a good summary of these articles in his post iKnowwhatyoudidlastsummer.

To be blunt, people being tracked in their everyday lives is nothing particularly new. I’m happy that this has made a splash in the mass media since it’s a situation that has been increasing in prevalence without major notice until now. When I teach IT security, I always spend some time covering privacy issues as well, and have discussed tracking issues regularly for fifteen years now.

A common thought problem I would often give to my students is to plan a cross country road trip in such a way that they could not be tracked. Fifteen years ago this was an interesting problem that forced people to think about how they interacted with a variety of databases. Today, it’s difficult to accomplish at all.

Even before the advent of modern smart phones, people have been automatically tracked. When you use your debit or credit card, the bank has a primitive tracking record of your movements. The more you use it, the better the tracking. So, before leaving on a hypothetical un-tracked trip, you need to remember to leave these cards at home. You will need to work with cash. If you don’t want to tip your bank off to your trip, you need to collect the cash in advance, a little at a time. It may also be a good idea to give your cards to a trusted friend so there is local activity on them while you are away, electronically geo-tagging you to your home town.

You can’t just leave your smart phone at home; you will need to leave any cell phone behind. Cell phones have been tracked since the very first cell phone, because the system works by having the towers (and thus the cell companies) track the phones. When you first turn on your phone, it sends out a message. Any nearby towers that receive the signal report it to a computer at the company, and the tower with the strongest signal (weighed against factors like available bandwidth and signal consistency) is granted sole authority over your phone. This process is periodically repeated in case you move: the cell company must always know which tower to direct a call through to reach your phone.
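The tower-selection step can be sketched in a few lines of Python. This is a toy illustration, not the real protocol: the tower data and the scoring weights are invented:

```python
# Toy sketch of tower selection: among towers that hear the phone,
# the network picks the best candidate. Tower readings and the
# scoring weights are invented for illustration.
towers = [
    {"id": "A", "signal_dbm": -85, "load": 0.30},
    {"id": "B", "signal_dbm": -70, "load": 0.90},
    {"id": "C", "signal_dbm": -72, "load": 0.20},
]

def score(t):
    # Stronger signal (less negative dBm) is better; a heavily
    # loaded tower is penalized.
    return t["signal_dbm"] - 20 * t["load"]

serving = max(towers, key=score)
print(serving["id"])  # prints "C": strong signal, light load
```

Whichever tower wins, the company has to record the result so it can route your calls — and every such record is a location fix.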

Ten years ago the cell companies swore to us on a stack of their own quarterly reports that this tracking data was not stored in any reasonably permanent way due to the amount of data and the cost of storage. I haven’t heard much about this as the cost of storage has plummeted, but I was always leery of the argument, since it assumed no compression of data that is easily compressed anyway. After 9/11 there was a lot of discussion about phone companies no longer destroying data that had previously been destroyed. The problem now, of course, is finding out what data is actually stored today, since that information is considered a matter of national security.
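The compression point is easy to demonstrate with Python’s standard zlib module. The log format here is made up; the point is that tracking records are extremely repetitive, so they shrink dramatically:

```python
import zlib

# Hypothetical tracking log: same phone, same tower, timestamps
# incrementing by one minute. The format is invented for illustration.
records = "".join(
    f"phone=5551234567 tower=CHI-042 ts={1000000000 + 60 * i}\n"
    for i in range(10_000)
)
raw = records.encode()
packed = zlib.compress(raw, level=9)
print(f"{len(raw):,} bytes raw, {len(packed):,} compressed "
      f"({len(raw) / len(packed):.0f}x)")
```

Repetitive records like these routinely compress by well over an order of magnitude, which is why the “too expensive to store” argument never held up.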

The difference with a modern smart phone is the introduction of a GPS chip that can provide better accuracy of your location. Still, tower-only location has gotten surprisingly accurate: several years ago governments began requiring cell phone companies to upgrade their towers to triangulate your position (using signals from multiple towers) to better coordinate emergency response when you call 911. That works great when you get into an accident and want the government to find you, but it also means you can be tracked at all times to a surprising level of accuracy.
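Triangulation itself is simple geometry. Here’s a toy sketch: given a phone’s estimated distance from three towers (which the network derives from signal timing and strength), subtracting the circle equations pairwise leaves a small linear system. The tower positions and the phone’s position are made up for illustration:

```python
import math

# Toy trilateration: recover a position from distances to three towers.
# Tower coordinates and the "true" position are invented.
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, t) for t in towers]

# Each tower gives a circle (x-xi)^2 + (y-yi)^2 = ri^2. Subtracting
# the equations pairwise cancels the squared terms, leaving a 2x2
# linear system in x and y.
(x1, y1), (x2, y2), (x3, y3) = towers
r1, r2, r3 = dists
a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
det = a1 * b2 - a2 * b1
x = (c1 * b2 - c2 * b1) / det
y = (a1 * c2 - a2 * c1) / det
print(round(x, 2), round(y, 2))  # recovers (3.0, 4.0)
```

Real systems have noisy distance estimates and use more towers with least-squares fitting, but the principle is the same: three towers are enough to pin you down.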

So, you will need to stop your phone from even communicating with a cell tower even if it’s not a smart GPS-enabled phone. You can turn it off, but I never trust computers that have to monitor for a key press to be truly “off”. You can remove the battery (assuming that’s an easy thing to do). You could tightly wrap the phone in aluminum foil, then drop it in a Mylar bag. Or, I suppose, you could drop it in a river and walk away, which is probably the most satisfying way to stop a cell phone from tracking you.

Now, ready for your trip? Not quite yet.

Does your car have a tracking device and cell phone secretly stashed away behind a door panel? If it does, it may not mean you have an enemy agent in a black helicopter tracking your every move, it may just mean you have OnStar, or a similar system, installed by the auto manufacturer. That system is, basically, a tracking device attached to a cell phone integrated with your car’s computer system. You should be able to locate the fuse which powers that module and remove it, or, if you are really paranoid, dismantle the panel it’s mounted under and chuck it into the same river as your cell phone.

Now it’s time to plan your route, and this is where things get complex.

If you live in a major city, especially Chicago or London, it can be difficult to find a route out of town where your license plate will not be recorded as you pass through an intersection. Many early red-light cameras would only take pictures when triggered by sensors, yet simple observation shows that such sensors are often triggered even when no one is running a light, such as when people turn right on red, or go over a sensor when turning left. In addition to that, many intersections now have cameras that simply record all traffic flow at all times. You need to avoid all such intersections.

The camera problem is made worse by projects such as the Chicago OEMC initiative which links private cameras into the Chicago Office of Emergency Management and Communications system for recording and monitoring. Even if you trust that your local 7-11 will destroy its security recordings, those same recordings may be saved by the government automatically.

On your trip, toll roads, obviously, are a very bad idea. Even if you threw your toll authority Radio Frequency ID transceiver into the same river as your cell phone, cameras record every license plate passing through every toll plaza. By the way, if you ever want to prove your spouse was cheating on you, or that they’re a bad parent who works too late, you can subpoena their toll records for evidence.

Off the toll ways (and major expressways which may have traffic cameras, though the older systems don’t have the resolution for picking up license plates), you need to be careful about any city, town or county you pass through with cameras. They are now so prevalent, you most likely need to do scouting trips to find a clear route.

Once you have arrived, you may be able to walk around anonymously for now. If it’s a big city, you can leave your car somewhere (Where? That’s another problem) and use taxis. At the moment you don’t really have to worry about automatic facial identification too much. While the technology is certainly impressive, unless someone has a good picture of your face and is specifically looking for you, such systems won’t be much help. They can find matches for specific people, but, as of yet, can’t just identify all the people passing in front of them.

One last piece of advice is to make sure you don’t use your supermarket loyalty card when buying an apple in your destination city. Of course loyalty cards are a whole new privacy problem in themselves.

Ready for the return trip or do you just want to follow your cell phone into the river?

A California Court of Appeals judge recently ruled in People v. Lieng that there’s no constitutional problem with police using night vision goggles to see things that they couldn’t otherwise see. In Kyllo v. United States the Supreme Court had ruled that police could not use a thermal imaging device without a warrant, and you’d think the same rule would apply here, but it doesn’t. The court’s two-part explanation for this is entertainingly bizarre.

Consider the first part:

Kyllo is inapplicable to this case.  First, night goggles are commonly used by the military, police and border patrol, and they are available to the general public via Internet sales…More economical night vision goggles are available at sporting goods stores…Therefore, unlike thermal imaging devices, night vision goggles are available for general public use.

[citations elided]

Scott Greenfield explains part of the problem with this reasoning in a post titled “The Amazon Exception.”  (In this excerpt, Scott calls night-vision goggles “nogs” because someone told him that’s what all the cool kids are calling them.)

That nogs are used by the military, police and border patrol, fails to impress.  Lots of technology is used by government agents. Much of it is used to do nasty stuff that would, in the absence of a warrant, violate the Constitution.  So what?

But the kicker is that it’s “available to the general public via internet sales.”  Now it’s getting interesting. When courts rely on the inventory at Amazon, or perhaps more obscure websites, for the scope of the 4th Amendment, there might be a problem.

No kidding there might be a problem. In this country, we supposedly have something called rule-of-law, which means we are not subject to the arbitrary whims and favors of despots and bureaucrats, but rather all people are held to a set of laws that are known in advance. But if the constitutionality of a search depends on something as vague as whether the tools used are “available to the general public,” then who can know what the law means? Almost everything is available to the general public if they’re willing to make some amount of effort, so who could possibly predict when a court might take notice?

Or as Scott says,

Rather than research the caselaw to determine whether police use of technology constitutes an unlawful search under the Fourth Amendment, we should begin our inquiry on Amazon. Is that the point?

Then there’s the second part of the court’s justification. Because I’m a science geek, I find it even more troublesome than the first part:

Second, state and federal courts addressing the use of night vision goggles since Kyllo have discussed the significant technological differences between the thermal imaging device used in Kyllo, and night vision goggles…Night vision goggles do not penetrate walls, detect something that would otherwise be invisible, or provide information that would otherwise require physical intrusion…The goggles merely amplify ambient light to see something that is already exposed to public view…This type of technology is no more “intrusive” than binoculars or flashlights, and courts have routinely approved the use of flashlights and binoculars by law enforcement officials.

The way this is written, the statement that “night vision goggles do not penetrate walls…or provide information that would otherwise require physical intrusion” seems to imply that thermal imaging does both of those things. As a matter of physics, that’s just not true. Thermal imaging cannot see through walls.

What thermal imaging can do is tell you the temperature of those walls, which may give you some idea of what’s on the other side. Put a heat source in a room, and the room will warm up. That will warm the inner surface of the room’s walls, and some of that heat will leak through the walls to heat the outer surface of the building. Then, like everything else in the universe that has a temperature, the outer surface of the building will give off electromagnetic radiation.

The spectrum of that radiation–the portion of energy given off at various frequencies–depends mostly on the temperature of the radiating object. Sufficiently hot objects–usually around 900 degrees Fahrenheit–give off electromagnetic radiation at frequencies high enough for humans to see–visible light–and the object appears to be glowing a faint red. Objects that are even hotter will give off other colors of the spectrum until you see an even mix of colors, meaning the object glows white hot.

Cooler objects give off light (electromagnetic radiation) that has frequencies too low to be detected by the human eye. We call this light infrared, meaning “below red.” Infrared light behaves a lot like ordinary light, except that you just can’t see it. And, just like ordinary light, it can’t go through walls.
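To put rough numbers on this, Wien’s displacement law says an object’s peak emission wavelength is inversely proportional to its temperature. A quick sketch (the temperatures are just illustrative):

```python
# Wien's displacement law: peak emission wavelength = b / T.
# Temperatures below are illustrative examples.
B = 2.898e-3  # Wien's displacement constant, meter-kelvins

def peak_nm(temp_k):
    # Peak emission wavelength in nanometers for temperature in kelvins.
    return B / temp_k * 1e9

room = 293.0                              # ~68 F interior wall
red_hot = (900 - 32) * 5 / 9 + 273.15     # ~900 F, in kelvins
for name, t in [("room-temperature wall", room), ("red-hot object", red_hot)]:
    print(f"{name}: peak ~{peak_nm(t):.0f} nm")
```

A room-temperature wall peaks near 10,000 nm, far outside the 400–700 nm range our eyes detect. Even a red-hot object’s *peak* emission is still infrared; the faint red glow comes from the short-wavelength tail of its spectrum finally creeping into the visible range.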

Getting back to the subject of this post, thermal imaging systems work by using electronic sensors to detect the low-frequency infrared light emitted from warm objects. The data from the sensor is used to create an image that is displayed to the user. Night vision systems, on the other hand, detect light that is in the visible part of the spectrum, but they use a sensor mechanism that can create an image from far less light than the human eye needs. Thus the main difference between the two technologies is that night vision works on light that is too dim for humans to see, whereas thermal imaging works on light that is the wrong frequency for humans to see.

That doesn’t seem like a distinction important enough for a constitutional right to hinge on, but it makes more sense than what the judge wrote.

On the other hand, perhaps because I think of this too much in terms of the physics, I’ve never had a clear understanding of the principles by which the courts have ruled that thermal imaging requires a warrant. Why should police need a warrant to examine energy emissions that a suspect is allowing to just radiate away? If the subject is standing in his house and yelling about his drug grow operation at the top of his lungs, should the police get a warrant before they’re allowed to stand outside the house and listen? If not, then why should they need a warrant to detect infrared emissions outside the house?

If you don’t want people to know about your drug-growing business, you should control your infrared emissions. Don’t let your house radiate infrared energy through the air, where it could strike a sensor being held by a cop who’s sitting in his car on the street. You’re essentially sending signals to anyone with a receiver, so how is that an intrusion on your privacy?

Note that we can still rule out surveillance technologies that are intrusive–x-rays, penetrating radar, magnetic resonance–on the grounds that they involve sending something inside private property. They’re the logical equivalent of a cop standing outside your house and using a long stick to reach in a window and poke around in your belongings, which I assume would require a warrant just as if he had entered.

The basic distinction is that the police can use passive technology to monitor emissions passively, but they can’t actively send anything into an area they’re not allowed to enter themselves.

This particular way of thinking about surveillance methods draws a fairly bright line for law enforcement and the courts to follow, but I can think of at least three consequences which are probably worth thinking about.

First of all, as a libertarian, I’m very worried about how much surveillance this does allow. Not only does it allow an unlimited amount of passive surveillance in the visible and infrared bands, it also seems to allow a lot of sophisticated listening devices. (Sound is vibrations in air rather than electromagnetic radiation, but the same principles seem to make sense.) For example, sounds inside a building, including conversation, will leak out as very subtle vibrations which are normally lost in the noise. It’s theoretically possible, however, that an array of sensitive microphones and some very sophisticated signal processing technology could recover the original conversation.
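The core signal-processing idea is that averaging many sensors suppresses uncorrelated noise. Here’s a toy sketch; real delay-and-sum beamforming adds per-microphone time alignment, and the signal and noise levels here are invented:

```python
import math
import random

# Toy demo: averaging N noisy copies of a weak signal suppresses
# uncorrelated noise by roughly sqrt(N). All levels are invented.
random.seed(0)
n_samples, n_mics = 2000, 64
signal = [0.05 * math.sin(2 * math.pi * 5 * i / n_samples)
          for i in range(n_samples)]

def noisy_copy():
    # One microphone's reading: the weak signal buried in loud noise.
    return [s + random.gauss(0, 1.0) for s in signal]

one_mic = noisy_copy()
stacked = [sum(vals) / n_mics
           for vals in zip(*(noisy_copy() for _ in range(n_mics)))]

def rms_error(est):
    return math.sqrt(sum((e - s) ** 2
                         for e, s in zip(est, signal)) / n_samples)

print(f"single mic error: {rms_error(one_mic):.3f}")
print(f"64-mic average error: {rms_error(stacked):.3f}")
```

With 64 microphones the residual noise drops by about a factor of eight, which is why a conversation hopelessly buried in noise at any one sensor might still be recoverable from an array.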

Second, this rule would also allow police to listen to radio transmissions, including cell phones, without a warrant. I think I’m actually okay with this. Before the widespread use of cell phones, it was widely understood that everyone was legally permitted to receive any radio transmission they wanted to. After all, if other people transmitted radio signals in all directions, and some of those signals entered your house, it was pretty ridiculous to claim that tuning a receiver to pick them up was a violation of privacy. It was a simple concept that I’d like to see us return to: If you want privacy, don’t transmit your conversation to everyone within range.

Third, the rule against actively sending something into a private area would seem to rule out a police officer shining a flashlight into the window of a building or even a car. That seems a bit ridiculous, even to me. In addition, it would lead to all kinds of ridiculous situations as the police try to work around it. E.g., what if the police officer wears a white windbreaker jacket and his partner shines the patrol car’s spotlight on him–ostensibly to make sure he’s safe–causing reflected light from the jacket to shine in a window? Alternatively, if flashlights are allowed, then what about using an infrared flashlight to illuminate a scene for viewing with a thermal imager? This could turn nutty very quickly.

At this point, I kind of have to give up. I can’t seem to come up with a distinction that makes sense in terms of the physics involved and yet still offers adequate protection of privacy. Maybe the laws of physics are the wrong tools for figuring out things like this, or maybe vague and inconsistent rules made from case to case are the best we can do. I’d like to think that the law should make sense in terms of physics, but I’m not sure I have a good reason for believing that.