Winston Smith works at the Ministry of Truth. Each day, the hero of George Orwell’s “1984” “corrects” old newspapers to make sure that the information is still in accord with the current Party line. After rewriting history, he puts each “incorrect” story into a “memory hole” — a slit in the wall — and it is “whirled away on a current of warm air to the enormous furnaces which were hidden somewhere in the recesses of the building.”
Orwell’s portrayal of censorship is fictional. But, until very recently, it wasn’t all that far off from reality. “Censorship” was an activity carried out by authoritarian states, and sometimes by democracies, which used repressive mechanisms to control political speech. Policemen, or bureaucrats such as Smith, would mark up books and articles, remove offending passages, or prevent them from being published. They would rewrite history books, or retouch photographs, if they showed uncomfortable truths about the past. Sometimes they simply arrested people who spoke or wrote things deemed dangerous to the ruling party.
This isn’t how censorship works anymore, or at least not all of the time. Nowadays, many of the states (and lots of parties and groups) that want to censor political speech are doing so in an information environment that has been rapidly and massively transformed. Once, speech was scarce and it was possible to control the speakers. Now the attention of listeners is scarce — and speakers and their words can simply be drowned out.
This idea was brilliantly articulated a couple of years ago by Tim Wu, a Columbia law professor, in an essay that asked “Is the First Amendment obsolete?” Wu pointed out that a state — or, indeed, anyone — that seeks to control information no longer needs bureaucrats or policemen: Instead, the opponents of free speech can drown out ideas and language they don’t like by using robotic tools, fake accounts, or teams of real people operating multiple accounts. They can flood the information space with false, distracting or irrelevant information so that people have trouble understanding what is real and what is fake.
Alternatively, they can use those same robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed. In the new information world, these are the real threats, both to free speech and to civilized public discourse — even to democracy itself. If we can’t have a public debate because the information space is so polluted, or because people are afraid of the reactions of organized trolls, then we can’t really have meaningful elections anymore, either.
I’m writing about this because, last week, I wrote about Internet regulation. One of the most common — and surprising — responses to the article was to interpret it as advocacy of “censorship,” as though some old-fashioned, Orwellian form of censorship were still the only threat to free speech, and as though the organizations most likely to carry out this kind of censorship were still nation-states.
In fact, the organizations now best poised to act as old-fashioned censors are the tech companies that provide the platforms where most online conversations take place. Indeed, we are facilitating them in this role. We now expect Google, Facebook, Twitter and other companies to police the Internet for dangerous and illegal material — violent, terrorist, criminal — and some democratic governments require them to do so. But what if they did decide to repress material for political reasons? How would we know? According to a report this week, Google has, at the request of the Russian government, already agreed, quietly, to eliminate some websites from its searches in Russia.
But I repeat: Arguments over the removal of material are not the most difficult part of this discussion. The truth is that many of the new free-speech issues don’t resemble anything anybody has tried to legislate in the past. Wu notes that almost no one “forecast that speech itself would become a weapon of censorship,” which is why almost nobody is prepared to ask the right questions. How do you stop swarms of trolls — Saudi, Russian, or American alt-right — from swamping accounts and issuing death threats to anyone with whom they disagree? How do you combat the bot-driven disinformation campaigns, operating in multiple languages, that give false prominence to extremists and, for some people, drown out alternatives? How do you ensure that people are not too intimidated to speak or write online?
I am not sure we know yet how to cope with these new forms of censorship. But I am certain it will be possible only if we are willing to think deeply about the architecture of the Internet, and only if we are willing to change it. This is what I mean by regulation: Make the Internet a place for reasoned and fair conversations. That’s not censorship, that’s the new struggle for freedom of speech.
Anne Applebaum is a Washington Post columnist, covering national politics and foreign policy, with a special focus on Europe and Russia. She is also a Pulitzer Prize-winning historian and a professor of practice at the London School of Economics. She is a former member of The Washington Post’s editorial board.