A word of advice for Congress as it ponders new schemes for Internet regulation after the “perp walk” this week of Facebook tycoon Mark Zuckerberg: Don’t do it.
Zuckerberg is a tempting target. His serial apologies show how Facebook became so entangled in its corporate mission to “bring the world closer together” that it stopped putting the customer first.
Facebook is paying for its mistakes in loss of customer trust — its main asset — and this market punishment has only just begun. It’s obvious to users now that Facebook’s business model isn’t about making the world better, but about obtaining information about its customers and profiting from it.
The social media site illustrates the buzz phrase: “If you’re not at the table, you’re on the menu.” Meaning, Facebook has been giving us a free service because it can monetize our data. We’re the product it’s selling. If we don’t like that, then Facebook can charge us money for its service, as Zuckerberg testified Tuesday.
Facebook will make changes to recover its reputation. Users will have better control over their privacy, perhaps by having to opt in before their data is shared. Zuckerberg outlined other needed reforms: The company will restrict the data it shares with app developers, increase its security and require political advertisers to confirm their identities.
Would more government regulation make things better? Federal oversight might nominally increase transparency and accountability, but it would mainly make work for lobbyists and lawyers. This is a case where angry customers and newly skeptical investors will be the best cops.
What worries me about the Internet is something else. Is the underlying “marketplace of ideas” experiencing market failure? My business of journalism is predicated on the idea that in the unregulated competition of ideas, the truth will eventually prevail. But this process seems to be breaking down as the Internet fosters a “post-truth” era. The public wants its biases to be affirmed these days, not challenged.
The corruption of information technology was debated this past weekend at a conference at Princeton University on “Defending Democracy,” for which I was a keynote speaker. Vint Cerf, who helped build the Internet, reminded the audience that it was created to be open, borderless and unregulated. When the Web’s founders thought about “bad behavior,” they had in mind rowdy graduate students. A world where Russia’s Internet Research Agency could feed fake news to Facebook was unimaginable — or at least unimagined.
The information space needs not more government intervention but less, especially in places such as Russia and China. But as I watched Zuckerberg being grilled Tuesday, it was obvious that this market can be better policed by the companies themselves, so that hidden incentives don't skew it toward extremism and toxicity.
Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill, noted at the Princeton conference that algorithms push YouTube viewers toward ever-more-intense content. If you keep clicking on videos about running, you’ll eventually get ultramarathons, she noted. Similarly, if you like Donald Trump or Hillary Clinton videos, algorithms will push you toward more extreme content on the right and left.
It’s not a conspiracy; the algorithms are just maximizing the number of ads they display to users. Social media companies could address this problem by making their algorithms more transparent.
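The dynamic Tufekci describes can be sketched in a few lines. This is a hypothetical toy model, not any platform's actual system: `rank_by_engagement` and the sample data are invented for illustration. The point is that an objective which sees only predicted watch time, a proxy for ads shown, will surface the most intense item without any conspiratorial intent.

```python
# Toy illustration (hypothetical): a recommender that optimizes only
# predicted engagement. Nothing here models truth, moderation, or
# intent -- just expected watch time, a stand-in for ad exposure.

def rank_by_engagement(candidates):
    """Return candidates sorted by predicted engagement, highest first.

    `candidates` is a list of (title, predicted_watch_minutes) pairs.
    """
    return sorted(candidates, key=lambda c: c[1], reverse=True)

videos = [
    ("Couch to 5K tips", 3.0),
    ("Marathon training plan", 5.5),
    ("100-mile ultramarathon documentary", 9.0),
]

# The most "intense" item wins the top slot, purely because it
# holds viewers longest in this invented data.
print(rank_by_engagement(videos)[0][0])
```

A transparency requirement of the kind the column suggests would amount to disclosing what stands in for `predicted_watch_minutes` in real systems, so outsiders could see what the ranking actually rewards.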
As we think about Facebook’s failures in combating Russian meddling, we should recall the United States’ history of overreacting to external threats. The Alien and Sedition Acts of 1798 sought to combat French political meddling; McCarthyism began with legitimate fear of Soviet espionage but led to blacklists and purges. The cure is sometimes worse than the disease.
The market will correct most of Facebook’s problems. What should concern us, beyond fake news, is fake reality — images of events that never happened, voiceprints of speeches that were never delivered, phone calls that were never made, texts that were never sent. The term for these all-too-feasible digital manipulations of audio and video content is “deep fakes.”
A Defense Advanced Research Projects Agency media-forensics team is creating tools that, in theory, can automatically detect when video or audio has been altered. I'd be happier if the reality-detection system were operated by private companies. I fear we're heading toward a world where a future national security adviser, in response to Russian or Chinese deep fakes, might ask: If "they" can shape our reality, do "we" need to be able to shape theirs?
Zuckerberg looked so uncomfortable Tuesday in his coat-and-tie contrition costume, you almost felt sorry for him. He made us realize that the weak link in the Internet system isn’t a lack of government oversight, but our own gullibility.
David Ignatius writes a twice-a-week foreign affairs column and contributes to the PostPartisan blog. Follow @ignatiuspost