Mark Zuckerberg Can Still Fix This Mess

The news about Facebook is not getting better. The company has sharply raised its estimate of the number of users whose data was improperly shared with an outside company connected to President Trump’s campaign, to possibly 87 million. Amid an outcry, Mark Zuckerberg, the company’s chief executive, is expected to testify before Congress on Tuesday and Wednesday.

There is a dog-eared playbook for industry titans called before lawmakers: Apologize repeatedly, be humble, keep it boring. Mr. Zuckerberg can and should toss that playbook. He’s got $60 billion to his name, 99 percent of which he has said he will donate to charity. And he controls Facebook — is Facebook — in an unusual way: He holds 60 percent of its shareholder votes. So he doesn’t have to worry about next month’s user numbers or how to deflect a hardball question from a committee chairman. He can contemplate posterity with big ideas geared to the public interest. Given Facebook’s domination of social media, anything the company does — including a devolution of its power — will serve as a model for others.

To get a sense of the new approaches he should take, consider why Congress is calling hearings. The core offenses begin with classic and now pervasive online privacy violations.

In 2014, 270,000 people were paid by an outside developer to install a Facebook app and answer questions like “Do you panic easily?” and “Do you often feel blue?” They weren’t told that their answers would become part of a psychological profile used by a voter profiling company, Cambridge Analytica — first to assess how they might vote and second to design personalized advertising for the purpose of changing their political views or their likelihood of voting, all to favor the agenda of Cambridge Analytica’s funders and clients. The app also scooped up information from the typically nonpublic profiles of the quiz-takers’ friends — turning 270,000 people into 87 million.

The violations are a big deal, even though this type of profiling is still hit and miss. Incentives push toward keeping and copying data rather than deleting it — and using it to, say, quietly target the credulous with ads for snake oil, limit the groups who see certain real estate ads or serve African-American voters with ads designed to depress Election Day turnout.

Currently there is no way for us to retract information that previously seemed harmless to share. Once tied to our identities, data about us can be part of our permanent record in the hands of whoever has it — and whomever they share it with, voluntarily or otherwise. The Cambridge Analytica data set from Facebook is itself but a lake within an ocean, a clarifying example of a pervasive but invisible ecosystem where thousands of firms possess billions of data points across hundreds of millions of people — and are able to do lots with it under the public radar.

Several years ago Facebook started to limit what apps could scrape from friends’ profiles even with permission, but the basic configuration of user consent as a bulwark against abuse hasn’t changed. Consent just doesn’t work. It’s asking too much of us to respond meaningfully to fine-print dialogue boxes as we try to work or enjoy ourselves online — and even then we are naïvely assuming that the promises on which our consent was premised will be kept.

There are several technical and legal advances that could make a difference.

On the policy front, we should look to how the law treats professionals with specialized skills who get to know clients’ troubles and secrets intimately. For example, doctors and lawyers draw lots of sensitive information from, and wield a lot of power over, their patients and clients. There’s not only an ethical trust relationship there but also a legal one: that of a “fiduciary,” which at its core means that the professionals are obliged to place their clients’ interests ahead of their own.

The legal scholar Jack Balkin has convincingly argued that companies like Facebook and Twitter are in a similar relationship of knowledge about, and power over, their users — and thus should be considered “information fiduciaries.”

Doctors don’t ask patients whether they’d consent to poison over a cure; they recommend what they genuinely believe to be in the patients’ interests. Too often, the question “This app would like to access data about you, O.K.?” really means “This app would like to abuse your personal data, O.K.?” Respecting users means protecting them from requests made in bad faith. And just as a doctor should encourage a patient to get second opinions, Facebook should allow its users to write and share their own formulas for how their news feeds are populated, rather than making a one-size-fits-all decision that, say, updates from friends and family are to be prioritized over news.
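To make the feed-formula idea concrete, here is a minimal sketch in Python of what user-authored ranking could look like. The Post fields, the example formulas and the rank_feed function are all hypothetical illustrations, not a description of how Facebook’s news feed actually works.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    # Hypothetical attributes a platform might expose to user formulas.
    author_is_friend: bool
    is_news: bool
    hours_old: float
    like_count: int


# A user-written "formula" is just a function that scores a post.
FeedFormula = Callable[[Post], float]


def friends_and_family_first(post: Post) -> float:
    """One person's formula: favor friends, let posts fade with age."""
    base = 10.0 if post.author_is_friend else 1.0
    return base / (1.0 + post.hours_old)


def news_junkie(post: Post) -> float:
    """A different, shareable formula: favor widely liked news items."""
    return (5.0 if post.is_news else 1.0) * post.like_count


def rank_feed(posts: List[Post], formula: FeedFormula) -> List[Post]:
    """Order the feed by whichever formula the user chose or wrote."""
    return sorted(posts, key=formula, reverse=True)
```

The point is structural: the ranking logic becomes a pluggable, shareable choice rather than a single decision made for everyone.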

As well-trained artificial intelligence devices get better at making unsettling inferences about us based on a handful of “likes” or places we’ve visited, we must recognize that the power to predict and shape our behavior lies less in whether we share our own data and more in whether others do. A fiduciary approach would demand that companies not betray our interests by using others’ data against us.
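To see why others’ sharing matters so much, consider a toy sketch of trait inference from “likes.” The page names and weights below are invented for illustration; real models are fitted to far more data, but the structure is the same: score a person from signals they never directly disclosed.

```python
from typing import Dict, Set

# Hypothetical weights a fitted model might assign to individual page likes
# when predicting a single trait (say, likelihood of turning out to vote).
LIKE_WEIGHTS: Dict[str, float] = {
    "LocalNewsPage": 0.4,
    "CivicVolunteerGroup": 0.6,
    "LateNightGamingClips": -0.3,
}


def predict_trait(likes: Set[str]) -> float:
    """Crude linear score: sum the weights of pages a person liked."""
    return sum(LIKE_WEIGHTS.get(page, 0.0) for page in likes)


def infer_from_friends(friends_likes: Dict[str, Set[str]]) -> float:
    """Even if I share nothing myself, the average score of my friends
    still says something about me: data others shared gets used on me."""
    scores = [predict_trait(likes) for likes in friends_likes.values()]
    return sum(scores) / len(scores) if scores else 0.0
```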

Facebook needs to do more than just identify political ads and who’s behind them, a change it announced on Friday. All of its ads, political or otherwise, should be available for anyone to peruse, not just those targeted to receive them. Most of us won’t spend a Sunday looking through Facebook ads, but public interest groups and consumer protection officials will, and they’ll publicize and be in a position to act on the troubling things they find.
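A public ad archive of the kind described could be little more than a searchable log of every ad, who paid for it and how it was targeted. The sketch below is only an illustration; the AdRecord fields and PublicAdArchive interface are invented here, not drawn from any actual Facebook product.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class AdRecord:
    sponsor: str                # who paid for the ad
    text: str                   # the creative shown to users
    targeting: Dict[str, str]   # e.g. {"age": "18-34", "region": "Ohio"}
    is_political: bool


class PublicAdArchive:
    """A log anyone can search, not just the people an ad was targeted to."""

    def __init__(self) -> None:
        self._ads: List[AdRecord] = []

    def publish(self, ad: AdRecord) -> None:
        # Every ad goes into the archive, political or otherwise.
        self._ads.append(ad)

    def search(self, sponsor: Optional[str] = None,
               political_only: bool = False) -> List[AdRecord]:
        """The kind of query a watchdog group or regulator might run."""
        results = self._ads
        if sponsor is not None:
            results = [a for a in results if a.sponsor == sponsor]
        if political_only:
            results = [a for a in results if a.is_political]
        return results
```

Opening the archive to everyone is what turns targeted advertising, which normally only its recipients see, into something third parties can audit.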

Given the blowback around current privacy and advertising practices — and the threat of regulation, especially from the European Union — companies like Facebook should do the right thing and commit to representing users’ interests. And the law could nudge them in that direction without outright requiring it. These actions might reduce Facebook’s growth or profitability, but that is not a compelling reason to keep doing something harmful. It may be that aspects of an advertising-based business model are simply incompatible with ethically serving users, as polluting streams is incompatible with ethically mining coal.

Facebook does contribute to efforts to improve our digital lives, including one on youth education at the center I co-founded. But in terms of privacy, it could get the ball rolling on the adoption of new technologies that can help. Users could be given more options for expiration dates for what they share — and a chance to find out instantly where their data has traveled. Photos and data can be beneficially tweaked so that individuals involved are not digitally identifiable.
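As a rough sketch of what share-time expiration dates and an instant record of where data has traveled could look like, consider the Python below. The SharedItem class and its fields are hypothetical, meant only to illustrate the two ideas in the paragraph above, not an existing Facebook feature.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional


@dataclass
class SharedItem:
    owner: str
    content: str
    shared_at: datetime
    expires_after: Optional[timedelta] = None             # user-chosen expiration
    recipients: List[str] = field(default_factory=list)   # where the data has gone

    def is_expired(self, now: datetime) -> bool:
        """Expired items should no longer be served, copied or passed along."""
        if self.expires_after is None:
            return False
        return now >= self.shared_at + self.expires_after

    def record_transfer(self, recipient: str) -> None:
        """Log each onward share so the owner can find out instantly."""
        self.recipients.append(recipient)


# Example: a photo shared with a 30-day expiration.
photo = SharedItem(owner="alice", content="beach.jpg",
                   shared_at=datetime(2018, 4, 1),
                   expires_after=timedelta(days=30))
photo.record_transfer("quiz_app_developer")
print(photo.is_expired(datetime(2018, 6, 1)))  # True: past its expiration
```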

The freedom to store and exchange data without boundary or prohibitive cost should be a boon to the world and to ourselves, rather than something that dehumanizes us. But understanding the phenomenon is growing harder, because the scrutable openness of the early years of internet development has given way to the walled gardens of Facebook, Twitter and Instagram. We need to avoid the worst of all worlds for privacy, in which a few companies collect and traffic in our data, with a fig leaf of consent, while researchers and journalists who are trying to grasp the dimensions of their actions are locked out.

Mr. Zuckerberg has the power to shake things up. He could bind his company to practices and technologies aimed at a sea change on user privacy and autonomy. Rather than circling the wagons, Facebook can join the cause. If it doesn’t, people should disperse to platforms that will.

Jonathan Zittrain is a professor of international law and of computer science at Harvard, a co-founder of the Berkman Klein Center for Internet & Society and the author of The Future of the Internet — And How to Stop It.
