A few days ago, ProPublica, an independent, nonprofit newsroom, discovered that a tool it was using to track political advertising on Facebook had been quietly disabled — by Facebook. The browser extension had detected political ad campaigns and gathered details on the ads’ target audiences. Facebook also tracks political ad campaigns, but sometimes it fails to detect them. For the past year, the company had accepted corrections from ProPublica — until one day it decided it didn’t want them anymore. It seems that “they don’t wish for there to be information about the targeting of political advertising,” an editor at ProPublica told me.
Facebook also made news in recent days for another tool: an app, this time its own, designed to give the company access to extensive information about how consumers were using their phones. Sheryl Sandberg, the company’s chief operating officer, has defended the project vigorously, on the grounds that those who signed up to use this research app knew what they were doing — and were paid $20 a month. Unamused, Apple decided to intervene — and has now banned the app from its phones.
Both of these stories have something in common: They illustrate who is making the rules of our new information network — and it isn’t us. It isn’t citizens, or Congress, who decide how our information network regulates itself. We don’t get to decide how information companies collect data, and we don’t get to decide how transparent they should be. The tech companies do that all by themselves.
Why does it matter? Because this is the information network that now brings most people their news and opinions about politics, about medicine, about the economy. This is also the information network that is fueling polarization, that favors sensational news over constructive news and that has destroyed the business model of local and investigative journalism. The past few days have also brought news of staff layoffs at newspapers around the country, from Arizona to Tennessee to New Jersey.
I have singled out Facebook here because it is the dominant force in social media — like an old-fashioned monopolist, it owns Instagram and WhatsApp, too — but I could write similarly about Google, which is the dominant force in Internet search, or YouTube, which is owned by Google and is the dominant force in the distribution of video content. These companies also operate according to their own rules and algorithms. They decide how data gets collected and who sees it. They decide how political and commercial advertising is regulated and monitored. They even decide what gets censored. The public sphere is shaped by these decisions, but the public has no say.
There is a precedent for this historical moment. In the 1920s and 1930s, democratic governments suddenly found themselves challenged by radio, the new information technology of its time. Radio’s early stars included Adolf Hitler and Joseph Stalin: The technology could clearly be used to provoke anger and violence. But was there a way to marshal it for the purposes of democracy instead? One answer was the British Broadcasting Corp., the BBC, which was designed from the beginning to reach all parts of the country, to “inform, educate and entertain” and to join people together, not in a single set of opinions but in the kind of single national conversation that made democracy possible. Another set of answers was found in the United States, where journalists accepted a regulatory framework, a set of rules about libel law and a public process that determined who could get a radio license.
The question now is to find the equivalent of licensing and public broadcasting in the world of social media: to find, that is, the regulatory or social or legal measures that will make this technology work for us, for our society and our democracy, and not just for Facebook shareholders. This is not an argument in favor of censorship. It’s an argument in favor of applying to the online world the same kinds of regulations that have been used in other spheres, to set rules on transparency, privacy, data and competition.
We can, for example, regulate Internet advertising, just as we regulate broadcast advertising, insisting that people know when and why they are being shown political ads or, indeed, any ads. We can curb the anonymity of the Internet — recent research shows that the number of fake accounts on Facebook may be far higher than what the company has stated in public — because we have a right to know whether we are interacting with real people or bots. In the longer term, there may be even more profound solutions. What would a public-interest algorithm look like, for example, or a form of social media that favored constructive conversations over polarization?
We could make a start with Sen. Amy Klobuchar’s (D-Minn.) proposed bill on honesty in advertising. But the debate needs to be deeper; it cannot consist of another chaotic, amateurish Senate hearing with Facebook chief executive Mark Zuckerberg. Constantly changing technology will make it difficult, as will lobbying. But we have regulated financial markets, another sphere where the technology changes constantly, the money involved is enormous, everyone is lobbying, and everyone is trying to cheat.
If we don’t do it — if we don’t even try — we will not be able to ensure the integrity of elections or the decency of the public sphere. If we don’t do it, in the long term there won’t even be a public sphere, and there won’t be functional democracies anymore, either.
Anne Applebaum is a Washington Post columnist, covering national politics and foreign policy, with a special focus on Europe and Russia. She is also a Pulitzer Prize-winning historian and a professor of practice at the London School of Economics. She is a former member of The Washington Post's editorial board.