Immediately after the terrible Easter Sunday bombings in Sri Lanka, the government there took the questionable step of blocking a bunch of social media sites.
There can be legitimate reasons for such drastic measures: If you know the terrorists are using cell phones or the internet to set off remote-detonated bombs, it makes a lot of sense to shut down those communications networks until the situation is resolved.
A less credible reason for shutting down communications is that authorities believe the terrorists are using cell phones or social media or whatever to coordinate their operation. People in our government have talked about developing similar capabilities — everything from being able to shut down cell towers in the middle of a terrorist incident to an “internet kill switch.” The idea is that we can disrupt the terrorists’ plans by denying them access to communications.
I’m not convinced by that argument, however, because it neglects the harm to civilian welfare from shutting down civilian communications. Protecting our national infrastructure has always been one of our domestic anti-terrorism priorities: If the FBI discovered a terrorist cell was going to shut down internet access in a major American city, you can bet the FBI would try to stop them. So doesn’t responding to a terrorist incident by shutting down the internet just give the terrorists a gift by causing fear and disruption?
The Sri Lankan government did not block social media for either of those reasons, however. Their reason was much worse:
The government has decided to temporarily block social media sites including Facebook and Instagram. Presidential Secretariat said in a statement that the decision to block social media was taken as false news reports were spreading through social media.
The Washington Post gives an example of the sort of thing they might be worried about:
Sanjana Hattotuwa, a senior researcher at Center for Policy Alternatives in Colombo who monitors social media for fake news, said he saw a significant uptick in false reports after the bombings Sunday.
[…] He cited two instances of widely shared unverified information: An Indian media report attributing the attack to Muslim suicide bombers, and a tweet from a Sri Lankan minister about an intelligence report warning of an attack.
As Joe Setyon of Reason points out, that’s kind of ironic:
Notably, neither of those instances appears to have been fake news. As previously mentioned, the Sri Lankan government does indeed believe Muslim extremists were responsible for the attacks. And intelligence agencies had been warned about a possible terror attack in recent days, as CBS News reported.
What’s going on here is that the Sri Lankan government is saying they want to protect against false news reports, but I’m pretty sure what they really want is to suppress news they don’t control. That’s usually the real motive when people in power complain about some unruly form of media.
What made me want to blog about all this was the disturbingly positive tone of the response from some corners of the media. Consider Global Voices, an activist organization which describes itself as
…an international and multilingual community of bloggers, journalists, translators, academics, and human rights activists. Together, we leverage the power of the internet to build understanding across borders. How do we do this?
- Report: Our multilingual newsroom team reports on people whose voices and experiences are rarely seen in mainstream media.
- Translate: Our Lingua volunteers make our stories available in dozens of languages to ensure that language is not a barrier to understanding.
- Defend: Our Advox team advocates for free speech online, paying special attention to legal, technical and physical threats to people using the internet to speak out in the public interest.
- Empower: Rising Voices provides training and mentorship to local underrepresented communities who want to tell their own stories using participatory media tools.
You’d assume that the Executive Director of an organization dedicated to “free speech online” and to the idea that “underrepresented communities” “whose voices and experiences are rarely seen in mainstream media” can “tell their own stories using participatory media tools” would be outraged over Sri Lankan censorship. As it turns out, not so much:
A few years ago we’d view the blocking of social media sites after an attack as outrageous censorship; now we think of it as essential duty of care, to protect ourselves from threat. #facebook your house is not in order. #EasterSundayAttacksLK @globalvoices @groundviews
— Ivan Sigal (@ivonotes) April 21, 2019
What really gets me about Sigal’s tweet is that he describes Sri Lankan censorship as a “signal for lack of platform trust.”
My God, who in their right mind cares how the Sri Lankan government feels about media platforms? Since when do we consider any government to be a reliable arbiter of news and truth, let alone the Sri Lankan government? They’re not the worst, but it’s not a good thing that Sri Lanka is rated by Freedom House as only “Partly Free” — with the addendum that the Sri Lankan press is “Not Free.” Furthermore, it turns out Sri Lanka’s President Maithripala Sirisena is an admirer of Philippine President Rodrigo Duterte’s murderous war on drugs, which is definitely not a good sign.
Is Sigal so naive that he believes the Sri Lankan government is really imposing censorship to protect their citizens from “fake news,” rather than employing the time-honored authoritarian trick of exploiting a tragedy to grasp for more power? (Later retweets in his timeline suggest he’s coming to his senses.)
Perhaps the worst response I’ve seen comes from Kara Swisher at the New York Times:
[…] when the Sri Lankan government temporarily shut down access to American social media services like Facebook and Google’s YouTube after the bombings there on Easter morning, my first thought was “good.”
Good, because it could save lives. Good, because the companies that run these platforms seem incapable of controlling the powerful global tools they have built. Good, because the toxic digital waste of misinformation that floods these platforms has overwhelmed what was once so very good about them.
One of the things going on in Swisher’s piece is the natural tendency of social media participants to overestimate how much of social media content is about the things they happen to be following. Those of us who follow politics and political issues tend to assume Twitter is mostly about politics and political issues, but there is so much more to Twitter. We filter and curate our feeds, and then forget we are seeing what we want to see.
To be fair, sometimes the filtering is done by algorithm. I’ve seen people complain that every time they search for something on YouTube — even something innocuous — YouTube includes Nazi-themed videos in the results.
That never happens to me, not even if I do the exact same search. Unless these people think anyone to the right of Bernie Sanders is a Nazi, they’re doing something that makes the algorithm think they want to see those results. I want to ask them, “Are you, perhaps, clicking on the Nazi videos to see if they are what they appear to be? Are you visiting lots of racist websites because you are monitoring hate groups?” If so, the search engine is just trying to give you more of what clearly interests you. It’s not smart enough to tell when you’re just hate-reading something.
The same thing can happen with political issues on social media: If the algorithm figures out that’s what interests you, it will keep showing it to you. If you only follow people who repeat unverified rumors, you will see a lot of unverified rumors.
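To make the mechanism concrete, here is a minimal toy sketch in Python (purely hypothetical, not YouTube’s or anyone else’s actual ranking code) of what engagement-driven recommendation boils down to: count what you have clicked, then rank new items by how closely they match.

```python
from collections import Counter

def rank_feed(candidate_posts, click_history):
    """Rank candidate posts by how often the user has clicked items on the same topic."""
    interest = Counter(post["topic"] for post in click_history)
    return sorted(candidate_posts,
                  key=lambda post: interest[post["topic"]],
                  reverse=True)

# The ranking has no way to tell these two users apart:
monitor = [{"topic": "extremism"}] * 20   # a researcher hate-reading to keep tabs on hate groups
fan = [{"topic": "extremism"}] * 20       # an actual enthusiast

feed = [{"topic": "cooking", "id": 1}, {"topic": "extremism", "id": 2}]
print(rank_feed(feed, monitor))   # the extremist post comes first either way
print(rank_feed(feed, fan))
```

The point is that the system only sees the clicks; it cannot distinguish hate-reading from enthusiasm.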
It pains me as a journalist, and someone who once believed that a worldwide communications medium would herald more tolerance, to admit this — to say that my first instinct was to turn it all off.
That’s an excellent instinct. Turning it off is a great response to fake news…for yourself. But please let everyone else make up their own mind.
In short: Stop the Facebook/YouTube/Twitter world — we want to get off.
Obviously, that is an impossible request and one that does not address the root cause of the problem, which is that humanity can be deeply inhumane. But that tendency has been made worse by tech in ways that were not anticipated by those who built it.
I noted this in my very first column for The Times almost a year ago, when I called social media giants “digital arms dealers of the modern age” who had, by sloppy design, weaponized pretty much everything that could be weaponized.
I resent the hell out of the word “sloppy” in that sentence. To the surprise of absolutely no one who knows anything about product development, it turns out that a first-generation world-wide social media site, available to two billion people, every day, for free…isn’t very good. That’s not sloppy, that’s the evolutionary curve of new technology. Just because a brand new piece of technology doesn’t work as well as it could doesn’t mean the inventors were sloppy.
“They have weaponized civic discourse,” I wrote. “And they have weaponized, most of all, politics. Which is why malevolent actors continue to game the platforms and why there’s still no real solution in sight anytime soon, because they were built to work exactly this way.”
Can we take a minute to regret the now widespread use of the word “weaponize” in the metaphorical sense? The first time I saw somebody use “weaponize” that way, it seemed edgy and inventive. It may even have been a fairly clear analogy. But now it seems everyone is talking about everything being weaponized. It’s quickly evolved into a way to make something sound sinister without saying anything substantive.
Swisher goes on to discuss the role of social media in the aftermath of the recent mass shooting in New Zealand:
After the attacks, neither Facebook nor YouTube could easily stop the ever-looping videos of the killings, which proliferated too quickly for their clever algorithms to keep up. One insider at YouTube described the experience to me as a “nightmare version of Whack-a-Mole.”
New Zealand, under the suffer-no-foolish-techies leadership of Jacinda Ardern, will be looking hard at imposing penalties on these companies for not controlling the spread of extremist content. Australia already passed such a law in early April. Here in the United States, our regulators are much farther behind, still debating whether it is a problem or not.
By characterizing U.S. regulation of social media as “much farther behind,” Swisher makes it clear that she is on the side of the censorious thugs. The U.S. is not behind on regulating social media. Rather, New Zealand and Australia are weak on free speech and freedom of the press.
Heck, the shitbag authorities in New Zealand are actually trying to send ten people to jail just for distributing the video. A few of them, including a 16-year-old boy, were held without bail, and a guy named Philip Arps has pleaded guilty and is probably going to jail. Make no mistake, from the description in the news pieces, Arps is an awful human being with awful ideas, but he’s still being locked in a cage for the “crime” of sharing a video that someone in the government didn’t like. (Specifically, the someone in the government who didn’t like the video is David Shanks, Chief Censor in New Zealand. Fuck that guy.)
Getting back to Swisher’s article:
It is a problem, even if the manifestations of how these platforms get warped vary across the world. They are different in ways that make no difference and the same in one crucial way that does. Namely, social media has blown the lids off controls that have kept society in check.
This Is the Future Libertarians Want™.
These platforms give voice to everyone, but some of those voices are false or, worse, malevolent, and the companies continue to struggle with how to deal with them.
In the early days of the internet, there was a lot of talk of how this was a good thing, getting rid of those gatekeepers. Well, they are gone now, and that means we need to have a global discussion involving all parties on how to handle the resulting disaster, well beyond adding more moderators or better algorithms.
We can have a discussion. Discussions are fine. I’m all for discussions. Just don’t involve government thugs in your response to the “disaster,” because throwing people in jail is not a discussion.
It’s one thing to argue that Facebook (or Twitter or whoever) could improve social welfare by doing a better job of removing objectionable content, but recently social media critics have begun to advance the pernicious idea that social media companies should have a legal obligation to police the content that users post.
That’s actually been the normal rule in print media: The publisher is responsible for what they publish, because they make the decisions about what to publish. On the other hand, the distributor is not generally held responsible, because they don’t have knowledge of the content they are distributing. So if a columnist writes a defamatory article for a newspaper, the columnist can likely be successfully sued for libel, as can the newspaper, but newsstand owners need not worry.
For digital media, the prevailing rule is spelled out in Section 230 of the Communications Decency Act, codified at 47 U.S.C. § 230, which includes the clause “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In a real way, these 26 words defined the form of the modern Internet. That protective clause is the reason I don’t have to carefully examine every comment on my blog to see if it contains libelous statements, and that protection is why I can have open comments. Section 230 is also the reason my hosting provider doesn’t have to worry about my content, which means it’s also the reason I can have a blog. It’s also the reason Twitter and Facebook and Instagram can exist: These sites host billions of user-provided content items, and they couldn’t possibly afford to review every single one of them for actionable content.
Unfortunately, Section 230 has come under attack in recent years. The most serious blow so far is the passage of FOSTA (the Allow States and Victims to Fight Online Sex Trafficking Act), which opened up websites to criminal and civil legal action for allowing ads for sex work. A bunch of websites closed down, and opportunistic lawyers launched lawsuits against offending sites such as Backpage, which had for years allowed ads for sex work.
Of course, as predicted months in advance by everyone who opposed FOSTA, Backpage was not the only victim. Lawyers naturally went after the deep pockets at Facebook for not preventing sex traffickers from using its services. Then, rather amazingly, the lawyers went after Salesforce.com. To be clear, Salesforce.com has nothing to do with sex trafficking. They’re a Fortune 500 company that provides business software solutions loosely organized around customer relationship management. But apparently Backpage was one of their customers, and thanks to the hole FOSTA poked in Section 230, that may make them vulnerable to a lawsuit.
This is a horrible legal development, and it is paralleled by the recent cultural trend of blaming social media sites for content posted by their users. Although arguably an extension of “political correctness” from college campuses — where having to listen to ideas you disagree with is often treated as some kind of “verbal violence” — it seems to have spread to social media in response to the rise of Trumpism and the alt-right, with the accompanying attention given to racists, white supremacists, and more-or-less actual Nazis.
Suddenly, it was no longer enough to merely confront the Trumpists, racists, and what-have-you. Instead, people started yelling at Facebook and Twitter to “get rid of the Nazis.”
(As private enterprises, Facebook and Twitter have every right to refuse access to people who would ruin the environment they are trying to create for their users. But as always, the remedy for bad speech is good speech, not demanding that your ideological enemies be shut out of open forums.)
I’ve even seen people yelling at Cloudflare, of all things, for “protecting Nazis.” Cloudflare isn’t even a social media site. They’re about as content-neutral as it gets: They provide DNS, caching, optimization, and security services to millions of websites (including this one). It takes just a few minutes to sign up for protection, and many of the base services are free. One of those services is protection against denial-of-service attacks, so people are mad at Cloudflare because attackers tried to crash Stormfront’s website and couldn’t.
I see this, and I wonder what’s next. Are people going to accuse utility companies of “supporting white supremacists” for selling them electricity or gas? Is Walmart going to be guilty of supporting racists if they sell toilet paper and cleaning supplies to Stormfront headquarters? There’s a certain level of madness to this, and I don’t know where it ends.
The other source of the trend toward holding social media platforms culpable seems to be the discovery of Russian intelligence efforts to influence the 2016 election through social media. It boggles my mind that Congress held hearings about social media and blamed Facebook for this, as if Facebook was supposed to carefully evaluate every single ad for its political content. That’s not what Facebook is built for. That’s not what social media is for.
In the 15th century, the Gutenberg press reduced the cost of printed works so much that almost anyone could afford to read them. Now in the 21st century, the world wide web has reduced the cost of publishing so much that almost anyone can afford to publish written works. With costs that low, companies like Facebook don’t have the resources to moderate more than a tiny fraction of the material that flows through their platforms. Instead, readers are expected to curate their own reading choices. The role of the platforms has been to provide them with efficient means to do so.
I thought everybody understood this — that everyone realized social media was a nationwide electronic bathroom wall — and that we all knew it was going to be a wild ride. But now people are demanding that social media sites step up and do a job we never asked them to do before.
I have so many questions about the thinking here. Did people do this with telephones when they were the scary new technology? I’m sure people were upset by political calls during elections, but did they blame the phone company for them? Were TV stations blamed for allowing awful political ads to run? What about other misdeeds? Was Henry Ford hauled in to testify before Congress because gangsters were using cars to commit crime sprees? Or because teenagers were having sex in them?
Sigh.
I know that record labels and comic book publishers and video game houses have all been blamed for society’s ills at one time or another. So I understand on some level that this kind of moral panic at new media is not unusual.
I guess I was hoping that, just this once, we had skipped that miserable tradition.
Humble Talent says
Couple of things:
“Can we take a minute to regret the now widespread use of the word “weaponize” in the metaphorical sense? The first time I saw somebody use “weaponize” that way, it seemed edgy and inventive. It may even have been a fairly clear analogy. But now it seems everyone is talking about everything being weaponized.”
I’m 100% guilty of this. Back during the Clinton server debacle, people would simultaneously (and ever so righteously) tell me they didn’t know the first thing about classifications and that I was obviously wrong, bought and paid for, and a member of the Nazi party.
First off: Where’s my damned money?
Second… I labelled this “weaponized ignorance”: they didn’t know what they were talking about, but they had the inkling that if they learned, it would impede their narrative, so instead they wrapped themselves in the holy armor of disbelief and wielded their ignorance like a club. It’s not unique to the left, by any stretch of the imagination, but I’d be hard pressed to find a better example of it.
I’m not sure that I regret that. In hindsight, you’re right, the term is criminally overused, but I’m struggling to come up with a better one. I choose to blame a critical lack of imagination on my part.
“The other source of the trend toward holding social media platforms culpable seems to be the discovery of Russian intelligence efforts to influence the 2016 election through social media. It boggles my mind that Congress held hearings about social media and blamed Facebook for this, as if Facebook was supposed to carefully evaluate every single ad for its political content. That’s not what Facebook is built for. That’s not what social media is for.”
I think this is deflection. At this point, it’s beyond obvious that the Russians *tried* to influence the American national election via social media, I’m just doubtful of the efficacy of those attempts.
Sure, they bought some ads that people saw, and sure, they manned some Twitter accounts… But I’m not sure the people screaming about Russian interference were watching the same election I was. Everyone and their dog was taking out ads on Facebook, and the largest Russian Twitter account had 100,000 followers, which might seem large but starts to pale when you compare it to literally anyone else. Once you take other platforms into account, like YouTubers with literally millions of followers and tens of millions of views, I just can’t bring myself to be too bothered.
I’m not saying it was good that a hostile foreign power attempted to influence an American election, I’m just saying I don’t think they were particularly good at it. And this all seems a very contrived way to deflect from the fact that Democrats ran the most unpopular candidate in the history of the union and lost.
Mark Draughn says
I first encountered the word as jargon in the biowarfare defense community, referring to the practice of taking a natural disease-carrying agent like anthrax spores or smallpox virus and modifying it for use as a bioweapon, but I believe it was common military jargon for adapting technology for use as a weapon.
So there’s nothing really wrong with figurative uses of “weaponized.” I just got tired of everybody using it. But maybe it’s helpful to have a term to point out when some seemingly neutral thing is used to hurt someone.
I too was unimpressed by Russian attempts to influence the election through social media. They posted what? A couple of hundred thousand posts? Doesn’t Facebook have something like 3 trillion posts per year? That works out to roughly one post in every 15 million. It’s ridiculously small.
And let’s face it, Russia doesn’t have a lot of experience with real democratic elections. What are the chances that anyone there knows how to produce ads anywhere near as effective as ads by our professional election operatives?
Humble Talent says
On the topic of Russian Social Media…. Which come to think of it is probably a contradiction in terms…
I think it’s useful to look not only at who was supported, but how they were supported. Most of those posts were extreme partisan red meat thrown at people who were already extreme partisans. For most of the election, Russian outfits weren’t supporting either side exclusively, so much as they were fanning the flames. If you look at their efforts as an exercise in sowing discord as opposed to the support of an individual or ideology, their actions seem to make so much more sense, and have been effective as heck.
I mean, the alternative is that Russia was putting their eggs in the basket of Donald Trump beating Hillary Clinton, and expressing that support by, in part, taking out anti-Trump, pro-Hillary ads in places like California.