Forget Cambridge Analytica. What about Facebook’s role in ethnic strife and genocide?

A big reckoning is coming for Facebook. The revelation that political consultancy Cambridge Analytica covertly accessed data from 50 million Facebook accounts to help the Trump campaign in 2016 is just the latest in a series of damaging stories about the platform. Now U.S. regulators and lawmakers are sharpening their knives. Two Democratic senators, Mark R. Warner (Va.) and Amy Klobuchar (Minn.), have said that it’s time for Facebook CEO Mark Zuckerberg to testify before Congress.

It’s understandable that Washington politicians should be concerned primarily about the possible damage inflicted upon U.S. citizens by Facebook policies (or its laxness in enforcing them, which may well be the issue in the Cambridge Analytica case). But if the legislators can persuade the elusive Zuckerberg to appear before their committees, they should also take the opportunity to question him about Facebook’s global role – and specifically about allegations that the platform has allowed itself to become a tool for the instigators of hate speech, ethnic conflict and even genocide.

That last accusation might seem like a stretch. But that was precisely the concern raised earlier this month by officials at the United Nations. Starting late last summer, the military in Burma (also known as Myanmar) launched a campaign of intimidation and violence that has now driven nearly 700,000 members of the Muslim minority group known as the Rohingya out of the country and into neighboring Bangladesh — a crime that more and more international observers (including some at the United Nations) are describing as potentially tantamount to genocide.

Yanghee Lee, a U.N. official charged with investigating events in the country, has said that Facebook’s overwhelming popularity in Burma makes it a key factor in the spread of hate speech. “It was used to convey public messages but we know that the ultra-nationalist Buddhists have their own Facebooks and are really inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities,” she said. “I’m afraid that Facebook has now turned into a beast, and not what it originally intended.”

That’s putting it mildly. As the Burmese military launched its campaign against the Rohingya this past August, Facebook pages across the country brimmed over with bigotry and disinformation targeting the group. The ultra-nationalist Buddhist monk Ashin Wirathu used his own page to purvey racist tirades and imagery until Facebook took down his account. “Wirathu likened Muslims to mad dogs and posted pictures of dead bodies he claimed were Buddhists killed by Muslims,” The Post’s Annie Gowen and Max Bearak reported in December, “while never acknowledging brutality faced by the Rohingya.”

Facebook is starting to acknowledge the problem and says it’s making efforts to tackle it. “We don’t allow hate speech and incitement to violence on Facebook,” a company spokesperson told me. “However, our policies do allow content that may be controversial and at times distasteful, which may include criticism of public figures, religions, and political ideologies.” The company says it has been working with activists around the world and trying to educate users about the dangers of hate speech.

So what sort of moral responsibility does the company bear for its role as a delivery vehicle for speech, destructive and otherwise? It’s true that we’re only just beginning to realize the complexity of the ethical, legal and practical dilemmas involved.

Yet our own confusion probably doesn’t sound like much of an excuse to the people in South Sudan who have watched some of their own transform Facebook into yet another weapon of that country’s bloody civil war. The same goes for the journalists and activists in the Philippines who have endured vicious online harassment from the legions of Facebook trolls who aggressively support President Rodrigo Duterte — including his extrajudicial campaign to kill suspected drug dealers.

Kenyans worry that the social media platform has aggravated ethnic tensions during contentious presidential elections there. “Facebook, notably from 2013 and more conspicuously in 2017, has been a site for ethnic hatred and incitement,” Michael M. Ndonye, a university lecturer on communications, said in an email. While traditional media face regulatory limits and the threat of lawsuits, he said, “Kenyans take to Internet platforms and Facebook to speak out about what the mainstream media shies away from, including hate speech and related content.”

Social media companies certainly can’t afford to sit on their hands amid growing public indignation about their perceived reluctance to address abuses. Just a week ago, the government in Sri Lanka directly blamed the deaths of three people on Facebook’s failure to control hate speech that contributed to communal violence between Buddhists and Muslims in the country’s interior. Government officials ultimately resorted to temporarily blocking access to platforms including Facebook and WhatsApp in an effort to stop the bloodshed. “This whole country could have been burning in hours,” Telecommunications Minister Harin Fernando told the Guardian. “Hate speech is not being controlled by these organizations and it has become a critical issue globally.”

Yet the people in these countries have relatively limited power over a company that’s based in Menlo Park, Calif. What they do have is a strong interest in maintaining the connectivity and access to knowledge that comes with participation in globe-spanning communications platforms. Now, however, they’re also losing patience with those same firms’ unwillingness or incapacity to constrain the sorts of speech that threaten to tear societies apart. Perhaps it’s time that U.S. lawmakers helped to repair the damage.

Christian Caryl is an editor with The Post's Global Opinions section. Follow @ccaryl
