What Is Facebook Worth to Us?

It is easy to forget how new Facebook is, but I remember. The first time I logged in to the social networking site was around 2006. I was taking classes in an adult enrichment program at a small Catholic school in Charlotte, N.C. A young literature professor used Facebook to cultivate informal communication with the traditional-age students. I joined Facebook using my university email address, which at the time was required to sign up.

Facebook’s layout and organization of information — what scholars now call “affordances” — were not intuitive to me. The site prompted me to “friend” the literature professor who had encouraged us to sign up, and then the other students in the class. I did not realize that this digital space was an extension of the university’s institutional life, so I was surprised and dismayed when the professor scolded me for making a joke on my Facebook wall. I dropped that class and deactivated that first Facebook account. I would not try again for two more years. By then, anyone above age 13 with an email address could join the platform. At the time, this expansion felt like a democratization of an elite online platform. It is clear now that this was also the moment Facebook was set on the course to becoming the political boondoggle it is today.

Opening up Facebook gave it incentives to scale and to make scale its No. 1 priority. When platforms prioritize scale over users’ safety or even the user experience, the people who own the platform have chosen a set of political beliefs that inform their economic decisions.

Tarleton Gillespie is a principal researcher at Microsoft Research New England, and an affiliated associate professor at Cornell University. He is also the author of “Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media.” Tarleton has argued that “platforms now function at a scale and under a set of expectations that increasingly demand automation. Yet the kinds of decisions that platforms must make, especially in content moderation, are precisely the kinds of decisions that should not be automated, and perhaps cannot be.” Entrusting decisions to algorithms when they should be made by humans is a political decision; this means scale is politics. That is something that Facebook’s founder is well aware of.

Mark Zuckerberg’s speechwriter from 2009 to 2011, Kate Losse, says that one of his favorite sayings during her time with him was “companies over countries.” The statement could be brushed off as the braggadocio of a young billionaire. It could also be seen as a foundational principle of technology’s pursuit of scale as politics. It is best to think of it as both. The politics of platform scale is similar to the politics of “too big to fail” that made banks impervious to the risks of their own making during the 2008 financial crisis. There is a lot to be said about whether banks should have been bailed out and who paid the long-term cost of doing so. But it is at least within the realm of reason to accept that financial institutions are so intertwined with U.S. policy, militarization and geopolitics that defending their scale is a matter of national interest. It’s hard to make a similar case for Facebook. Zuckerberg may well will Facebook’s inevitability into being, but we still have time to decide whether we should govern Facebook as if it is inevitable.

The inevitability question is complicated by another dimension of scale: Facebook is not just a U.S. political problem. When Facebook went down this week, so did the company’s other platforms, Instagram and WhatsApp. The outage brought into focus the divide among different groups’ experiences of Facebook’s politics. For many Americans, Facebook going down is an inconvenience; there were memes about rediscovering one’s husband, writing deadline or bookshelf during the hourslong outage. But internationally, WhatsApp is a primary messaging service. It’s critical infrastructure for the federal government in the Philippines and for hospitals in India. Immigrants in the United States worried about contacting their families back home in places like Malaysia, Ghana and Brazil. The fault lines in how people use Facebook were also visible in other domains, as when disabled people worried about communicating with their friends, families and caregivers on free-to-use platforms.

My U.N.C. colleague Matt Perault told me this week that tech policy is like all policymaking in that it is cost-benefit analysis. That is to say, good policy accepts trade-offs: insufficient but practical regulations in exchange for some agreed-upon, if incomplete, social benefit. Matt’s insight comes from his former post as a director of public policy at Facebook and his current one as director of a U.N.C. lab on information technology policy. It’s a useful lens through which to view the comments made by the Facebook whistle-blower Frances Haugen in congressional testimony this week. She testified that the company “chooses profit over safety,” and explained that it conducted its own research on platform affordances that encourage dangerous behaviors, such as disordered eating and self-harm. Despite this research, Facebook chooses to develop affordances that generate attention, which in turn generates profit, even when those affordances are dangerous for some users.

Siva Vaidhyanathan is a professor at the University of Virginia and a foremost expert on the social and cultural implications of Facebook’s political dominance. On a recent podcast with Virginia Heffernan, another media scholar, Siva characterized Haugen’s testimony as equivalent to the smoking-gun documents that felled the tobacco industry. In the case of Big Tobacco, we decided that smoking was enjoyable but also dangerous to public health. We made a cost-benefit analysis of imperfect trade-offs and chose collective well-being. Some people were hurt by that trade-off. People with a physical addiction had to pay more for their vice, for example. But the trade-off was made. Paying attention to technology policy and debates about Facebook may have seemed niche 10 or even five years ago. After the last week — from outages to congressional testimony — it is clear to me that now is the time for every informed citizen to have a position on regulating Facebook. We should be guided by an understanding of the trade-offs and whom they affect.

If we decide to regulate Facebook, some people will lose a critical if predatory communication platform. Poor people, disabled people and the global south will likely, as they often do, bear the brunt of rolling back bad policy decisions. And in countries where Facebook’s business dominance has become the national communication and economic infrastructure, marginalization will be compounded. But a Facebook scaled down by meaningful regulation might not have the incentives to surface hate speech, disinformation and controlling images like those that lead to disordered eating. It would almost certainly have less amplification power to compromise democratic elections or to target your family members with financial scams or conspiracy theories. The question for us is whether the upsides are worth it, and whether we can build systems to insulate the vulnerable from the downsides.

Tressie McMillan Cottom is an associate professor at the University of North Carolina at Chapel Hill School of Information and Library Science, the author of “Thick: And Other Essays” and a 2020 MacArthur fellow.
