A few days ago, U.S. Attorney General Bill Barr held a meeting on the future of Section 230, the law that protects web site operators from liability for content contributed by visitors. Reason magazine’s Elizabeth Nolan Brown has already covered all the ill-informed ideas, raging authoritarianism, and crony capitalism, and Techdirt has a series of posts as well, so I thought I’d do my part and take a look at what Stewart Baker, Cyberlaw podcast host and former Assistant Secretary for Policy at the Department of Homeland Security, had to say about it. As in the last piece of his I wrote about, I didn’t find much to agree with.
When section 230 was adopted, the impossibility of AOL, say, monitoring its users in a wholly effective way was obvious. It couldn’t afford to hire tens of thousands of humans to police what was said in its chatrooms, and the easy digital connection it offered was so magical that no one wanted it to be saddled with such costs. Section 230 was an easy sell.
A lot has changed since 1996. Facebook and others have in fact already hired tens of thousands of humans to police what is said on their platforms.
First of all, they still can’t hire enough people, because these kinds of platforms have far more users than they did in 1996. At its peak, AOL had 23 million users. As I write this, Twitter alone has 330 million active users. Facebook has 2.5 billion users. Even relatively small services like Instagram and Snapchat have over 100 million users. And people can access these services from devices they carry in their pockets, so each user produces far more traffic.
Second, the argument which Baker accepts for why small firms needed Section 230 protection in 1996 still applies to small firms today. Maybe a monster service like Facebook can figure out a way to monitor user content economically (despite not charging ordinary users a dime), but smaller services that serve niche communities, like Deviant Art, Model Mayhem, and Stack Overflow, would have a hard time achieving the needed level of monitoring, as would new services that hope to compete with the larger established ones.
Today, jurisdictions as similar to ours as the United Kingdom and the European Union have abandoned such broad grants of immunity, making it clear that they will severely punish any platform that fails to censor its users promptly.
Yeah, and there’s no way that could go wrong. (Blasphemy laws, anyone?) Strong protections for free speech have long been one of the ways that the U.S. is very much not similar to jurisdictions such as the United Kingdom and the European Union, and we are better off for it.
That doesn’t mean the US should follow the same path. We don’t need a special, harsher form of liability for big tech companies. But why are we still giving them a blanket immunity from ordinary tort liability for the acts of third parties? In particular, why should they be immune from liability for utterly predictable criminal use of warrant-proof encryption? I’ve written on this recently and won’t repeat what I said there, except to make one fundamental point.
Baker brings up variations of “predictable criminal use” over and over throughout this piece. It’s a ridiculously broad concept, because so many things can be misused by criminals. Every crime of mail fraud makes predictable criminal use of an envelope and the U.S. mail system. Every purse snatcher is making predictable criminal use of his sneakers. And I’ll bet Excel spreadsheets are introduced as evidence against a ton of white collar criminals who were using them to track their criminal financial schemes. But that doesn’t mean the makers of those products are responsible for the crimes committed with them. Saying that encryption has predictable criminal uses is just as meaningless.
The key to thinking clearly about encryption policy is understanding who is doing the encryption. It’s not who Baker says it is:
Section 230 allows tech companies to capture all the profits to be made from encrypting their services while exempting them from the costs they are imposing on underfunded police forces and victims of crime.
When Baker speaks of tech companies “encrypting their services” he is using a bit of rhetorical sleight of hand, similar to what Orin Kerr did in a piece I criticized a few years ago.
To start with, remember that most traffic between web browsers or phone apps and application servers is already encrypted in transit. (If you’re reading this post directly on my blog, your browser received this page over an encrypted connection from my WordPress server.) When Baker talks about companies encrypting their services, he means end-to-end encryption, where two people use the same service to send data between themselves that is never decrypted by the intermediate server. The sender’s device encrypts the data, that encrypted data is sent to the receiver’s device, where it is decrypted, and it remains encrypted everywhere in between.
For example, if I use my iPhone to text my wife to ask her to pick up a burrito on her way home, I’m using end-to-end encryption. We both have iPhones, so the Messages app on my phone knows to encrypt the message before sending it through Apple’s iCloud, and her iPhone knows how to decrypt it. It’s important to notice that all of the encryption and decryption is happening on our phones, which we own, using software that we licensed, because we chose to have our messages encrypted. It’s also important to notice what’s not happening in that scenario: The Apple iCloud is not encrypting anything. The iCloud itself is just copying blocks of already encrypted data from one phone to another. That encrypted data happens to be a message asking for a burrito, but the iCloud has no way of knowing that, because to iCloud’s servers (and to Apple, the corporation), it’s all just meaningless data to be copied from one device to another.*
That’s the whole point of end-to-end encryption: It allows users to communicate securely without worrying that the companies carrying their data are listening in.
That’s a crucial idea if you want to understand why Baker’s argument is wrong: End-to-end encryption isn’t something that the big tech companies are doing with encryption. Rather, end-to-end encryption is big tech companies giving up control of encryption to their customers.
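If it helps to see that division of labor spelled out, here is a minimal sketch in Python of how end-to-end encryption works, using the PyNaCl library purely as an illustration (Apple’s actual iMessage protocol is different and more elaborate). The thing to notice is that the server in the middle never holds a key and never sees anything but an opaque blob:

```python
# Minimal sketch of end-to-end encryption (illustration only; not Apple's
# actual protocol). Requires the PyNaCl library: pip install pynacl
from nacl.public import PrivateKey, Box

# Each phone generates its own key pair; the private keys never leave the devices.
my_key = PrivateKey.generate()
wife_key = PrivateKey.generate()

# My phone encrypts the message using my private key and her public key.
sending_box = Box(my_key, wife_key.public_key)
ciphertext = sending_box.encrypt(b"Can you pick up a burrito on your way home?")

# This opaque blob is all the relay server (the "iCloud" in my example) ever
# sees: it just copies these bytes from one device to the other.
print(ciphertext.hex())

# Her phone, holding the matching private key, is the only thing that can decrypt it.
receiving_box = Box(wife_key, my_key.public_key)
print(receiving_box.decrypt(ciphertext))
```

Swap that print statement for an upload to any server you like and nothing changes: whoever runs the server is just moving ciphertext around.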
Unfortunately, this distinction is sometimes muddled by the fact that the company that licenses us the encryption software (e.g. the phone app) is often the same company that runs the intermediate servers that transport our data. But this is just an artifact of the way the phone and tablet app market is structured: You buy the app in your device’s app store, and the app vendor provides the transport and storage services for free. App vendors do that because the cost of transport and storage of small messages is so low that it’s worth giving away for free to attract buyers.
But it doesn’t have to be that way. The older desktop/laptop/server computer community, especially in the business world, has been using unbundled encryption software and message transport services for decades. For example, there are third-party Chrome browser plugins that allow end-users to add end-to-end encryption to existing services like Gmail. Variations on this idea have been around for a long time. The original PGP software is almost thirty years old. All this technology is already in the world, and I’d expect this market to explode if tech companies are somehow prohibited from offering end-to-end encryption directly.
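For the flavor of how that unbundling works, here is a rough sketch using the python-gnupg wrapper around GnuPG (the keyring path and recipient key are made up for illustration). The point is that the encryption happens entirely on your own machine, and the resulting armored text can be pasted into Gmail, a forum, or anything else that moves bytes:

```python
# Sketch of unbundled, PGP-style encryption layered on top of any service.
# Assumes GnuPG is installed locally along with the python-gnupg wrapper,
# and that the recipient's public key is already in your keyring.
import gnupg

gpg = gnupg.GPG(gnupghome="/home/alice/.gnupg")  # hypothetical keyring location

# Encrypt locally, to the recipient's public key. No server is involved yet.
encrypted = gpg.encrypt(
    "Meet at the usual place at noon.",
    "bob@example.com",  # hypothetical recipient key
)

# The ASCII-armored result can be sent through Gmail, pasted into a web form,
# or carried on a thumb drive; the carrier never participates in the encryption.
print(str(encrypted))
```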
Many believe that the security value of unbreakable encryption outweighs the cost to crime victims and law enforcement. Maybe so. But why leave the weighing of those costs to the blunt force and posturing of political debate?
So that everyone involved will know what the rules are? So that they can make decisions based on those rules without having to worry about the outcome of a bunch of random court cases? This is why we write down laws.
Why not decentralize and privatize that debate by putting the costs of encryption on the same company that is reaping its benefits? If the benefits outweigh the costs, the company can use its profits to insure itself and the victims of crime against the costs.
Here we go again. Why should the company that makes the tools be liable for what a criminal third party does with them? We don’t do that with hammers and baseball bats and crowbars. We don’t even do that with guns. It’s not a very good idea.
Baker goes on to deploy one of the more annoying tropes in the secure encryption debate:
Or it can seek creative technical solutions that maximize security without protecting criminals – solutions that will never emerge from a political debate.
This is a fantasy. An incoherent, manipulative fantasy. Politicians and pundits in Baker’s camp keep portraying this as some sort of technical problem that big tech is either too lazy or too greedy to solve, but it’s not. Either the encryption can be broken by a party not involved in the conversation, or it cannot. There is no middle ground. Asking for secure encryption that can be broken is like asking for a sober drunk or a sexually experienced virgin. You’re not asking for a real thing that exists. You’re not even asking for something with a logically consistent description.
Either way it’s a private decision with few externalities, and the company that does the best job will end up with the most net revenue. That’s the way tort law usually works, and it’s hard to see why we shouldn’t take the same tack for encryption.
Sure, go after the people doing the harm. But the companies that make the encryption software aren’t the ones doing the encryption, and they aren’t the ones sending the harmful content. Ford Motors made the car that Bonnie and Clyde used to rob banks, but nobody sued Ford because it wasn’t Ford robbing the banks.
The second part of Baker’s post addresses the issue of platforms that censor user-provided content:
Europe is not alone in its determination to limit what Americans can say and read. Baidu has argued successfully that it has a first amendment right to return nothing but sunny tourist pictures when Americans searched for “Tiananmen Square June 1989.” Jian Zhang v. Baidu.Com Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014). Today, any government but ours is free to order a US company to suppress the speech of Americans the government doesn’t like.
I’m pretty sure that’s only for companies that operate in those countries. Also, “Those other guys are censorious asshats so we should be censorious asshats too” is not the winning argument Baker seems to think it is.
American politicians worried that radio and television owners could sway popular opinion in unpredictable or irresponsible ways. They responded with a remarkable barrage of new regulation – all designed to ensure that wealthy owners of the disruptive technology did not use it to unduly distort the national dialogue. […] This entire edifice of regulation has acquired a disreputable air in elite circles, and some of it has been repealed. Frankly, though, it don’t look so bad compared to having a billionaire tech bro (or his underpaid contract workers) decide that carpenters communicating with friends in Sioux Falls are forbidden to “deadname” Chelsea Manning or to complain about Congress’s failure to subpoena Eric Ciaramella.
Yes it does. The billionaire tech bros come and go, and if they behave badly, something new will come along to replace them. Government regulation is much longer lasting and much harder to evade, and the people enforcing it don’t just shadowban your tweets and demonetize your videos. They have guns and prisons and they hurt people.
The sweeping broadcast regulatory regime that reached its peak in the 1950s was designed to prevent a few rich people from using technology to seize control of the national conversation, and it worked.
What? The broadcast regulatory regime practically handed control of the whole industry over to a few mega-corporations. The regulations were so burdensome that despite all the promises of the new technology, the industry stagnated so badly that Americans only had three television networks for years.
Viewed from 2020, that doesn’t sound half bad. We might be better off, and less divided, if social media platforms were more cautious today about suppressing views held by a substantial part of the American public.
But at least without regulation, and with the protection of Section 230, anyone can start a social media platform. There are tons of them out there. You couldn’t do that during the heyday of the highly regulated broadcast industry when there were, I say again, only three major content providers.
It seems clear to me, and I hope I can convince you as well, that all of this is part of an attempt to undermine privacy using a well-worn trick or two.
The first trick is to control the behavior of large numbers of people by controlling a much smaller number of large, highly-visible entities that enable them. For example, suppose you want to stop people from making their own clothing. (Perhaps you think custom clothing is sinful, or undermines community values, or you’re in the pocket of Big Garment. Whatever.) You could try straight-up criminalizing it. That’s a direct solution, but it has a lot of practical problems. You can’t easily catch millions of people sewing stuff in the privacy of their homes, where it’s almost impossible to detect. The police would need probable cause to enter people’s homes to search for signs of garment making. They’d need networks of informants, and maybe a tip line (“Friends don’t let friends do buttonholes.”). They might have to start tracing precursor purchases of thread and fabric. Or maybe they’ll train cops to spot people wearing home-made garments so they can be arrested and pressured to reveal their sources. It sounds like a lot of hard work, but I’m not saying it couldn’t be done (see e.g. the war on drugs). However, it would be one awful mess (see e.g. the war on drugs).
There’s an easier way: Outlaw the manufacture of sewing machines. Not all sewing machines, of course, because there’s a huge commercial industry that depends on them. You just want to outlaw the cheap ones that people use in their homes. You know, the ones that are “unsafe,” the ones that produce “second-rate clothing” (that might endanger children!) because they lack important (and expensive) features. You just want to put a stop to millions of housewives using cheap Saturday night special sewing machines.
If you promote and pass that law carefully, the people most affected might not even see it coming. And now you don’t have to go after millions of people secretly sewing in their own homes. You just need to shut down a dozen or so highly visible sewing machine manufacturers who are too big to hide and have too much to lose by fighting. That will shut down millions of illegal home garment makers without the trouble of catching them one by one.
In a nutshell, that’s what I think Baker and Barr and other foes of end-to-end encryption are trying to do. Coming, as they do, from the intelligence and law enforcement world, they’re offended by the idea that ordinary citizens can send messages they can’t read. I suspect they’d probably like to outlaw strong encryption in civilian hands entirely, but that’s politically difficult to sell and technically difficult to enforce.
(It might even be unconstitutional. I don’t know how to find relevant court cases, but I think there’s a pretty good argument for a free speech right to encrypt. If you have the right to speak in any language, shouldn’t you have the right to speak ciphertext? If you have the right to say “Fuck the President,” shouldn’t you have the right to say “*@6cTHKafZq034CdW9GVF5##!Y$6aZCK”?)
On the other hand, if they can prevent a few large services like Facebook from offering end-to-end encryption, they can accomplish much of their goal of preventing Americans from using secure encryption without having to take on millions of individual Americans in the process.
This sort of legislation is often a disaster. The RAVE Act, for example, was passed in reaction to what was seen as the threat of “rave” parties, which were rife with drug abuse, especially ecstasy. Drug use at any kind of party was, of course, already illegal, but rather than go after individual drug dealers and users, police wanted to shut down the venues that held the parties, and they hoped to do this with the RAVE Act, which would “prohibit an individual from knowingly opening, maintaining, managing, controlling, renting, leasing, making available for use, or profiting from any place for the purpose of manufacturing, distributing, or using any controlled substance, and for other purposes.”
That sounds like it’s intended to go after drug dealers who throw parties and advertise them with “Hey kids, come here and do drugs!” As written, however, it could be used to go after almost any bar or dance club — or any other place — where the owner or manager knew that drugs were present. In theory, that’s almost anywhere, and it gave cops plenty of excuses to selectively shut down places that annoyed them.
Making matters worse, ecstasy is known to cause problems with temperature regulation, so many clubs added cool-down rooms and sold bottled water. This was used as proof that they knew drugs were being used. It’s not hard to imagine that any step a social media service took to combat child pornography would be used as proof they knew people were using their service for child pornography. (Something similar happened to Backpage when they were accused of promoting human trafficking.)
The second trick folks like Barr and Baker are using is a workaround for the problem that getting a criminal conviction requires meeting some pretty high standards. (Not so high that there aren’t 2 million Americans in prison, but that’s a matter for another post.)
For example, crimes have to be intentional. If you loan your car to a friend, and he drives drunk and gets in a fatal accident, you are almost certainly not guilty of any crime related to the death, because you did not intend that he drive drunk or kill someone. However, it’s plausible that you could be sued successfully on the theory that you knew, or should have known, that he was going to drive drunk. This is especially true if you are in the business of loaning cars to people. Furthermore, criminal guilt has to be proven “beyond a reasonable doubt,” which is a pretty high bar, but civil cases can be won by a “preponderance of the evidence,” meaning you just have to convince the judge or jury that your story sounds more likely than the other guy’s story.
Corporations are legal fictions, and charging a legal fiction with a crime is kind of a weird thing to do. If you win a conviction against a corporation, then what? You can’t send a corporation to jail. In practice, convicted corporations usually have to pay a fine, and maybe agree to certain restrictions. But why go through all the trouble of a criminal proceeding against a corporation when it’s so much easier to get them to cough up cash and change their behavior through a civil lawsuit? It doesn’t even have to be the government that files the lawsuit. Legislators can pass laws creating a cause of action for a third party to file a lawsuit.
(None of this is inherently pernicious. In fact, a lot of civil rights laws work this way. If you get fired because your boss doesn’t like your skin color, you don’t file a complaint with law enforcement. You get a lawyer and sue your former employer. It makes a lot of sense: The injured party directly sues the business that caused the harm. But even civil rights laws can be abused. Discriminatory ads placed by home sellers have been used as a justification for suing entire newspaper chains, and unscrupulous law firms have figured out ways to leverage poorly written regulations to sue tons of businesses over technical violations of the Americans with Disabilities Act even though few actual disabled people had complaints.)
But this legal arrangement can also be abused by legislatures, to suppress behavior they cannot otherwise control — making gun manufacturers liable for what criminals do with their guns, or making bars liable for customers who drive drunk. It’s an end run around rights and due process, and I for one do not appreciate it.
I also don’t think the government has any business restricting when we are allowed to use encryption. As someone smarter than me once said, the Founding Fathers gave us the First Amendment because they knew we had something to say. And they gave us the Fourth Amendment because they knew we had something to hide. Encryption helps us do that.
*Technical Note: The Messages app on iPhones is end-to-end encrypted, but for various unrelated reasons, iPhone messages are not very secure and should not be used for high-stakes communications. In addition, although I mention several security products in this post, I neither endorse nor discourage their use.