Deplatforming Trump Puts Big Tech Under Fresh Scrutiny

Photo illustration of Donald Trump and the Twitter logo. Photo by Jaap Arriens/NurPhoto via Getty Images.

The ‘deplatforming’ of Donald Trump – including Twitter’s announcement that it has permanently banned him due to ‘the risk of further incitement of violence’ after the riot at the US Capitol – shows once more not only the sheer power of online platforms but also the lack of a coherent and consistent framework for online content governance.

Taking the megaphone away from Trump during the Capitol riots seems sensible, but was it necessary or proportionate to ban him from the platform permanently? And was it consistent with the treatment of other ‘strongmen’ world leaders such as Modi, Duterte and Ayatollah Ali Khamenei, who have overseen nationalistic violence but whose accounts remain intact?

Such complex decisions on online expression should not be made unilaterally by powerful and unregulated tech actors, but should instead be subject to democratic oversight and grounded in the obligations of states and the responsibilities of companies under international human rights law.

The speed and scale of digital information have left governments across the world struggling to tackle online harms such as hate speech, extremist content and disinformation since the emergence of mass social media 15 years ago.

The US’s hallowed approach to the First Amendment, under which speech on public issues – even hate speech – occupies the highest rank and is entitled to special protection, has contributed to a reluctance to regulate Silicon Valley’s digital platforms. But the irony is that, by not regulating them, the government has harmed freedom of expression, leaving complex speech decisions in the hands of private actors.

Meanwhile at the other extreme is the growing number of illiberal and authoritarian governments using a combination of vague laws, censorship, propaganda, and internet blackouts to severely restrict online freedom of expression, control the narrative and, in some cases, incite atrocities.

Regulation is on the way

The happy medium – flexible online content regulation providing clarity, predictability, transparency, and accountability – has until now been elusive. But even before the deplatforming of Trump, 2021 was set to be the year when this approach finally gained some traction, at least in Europe.

The EU’s recently published draft Digital Services Act puts obligations on dominant social media platforms to manage ‘systemic risks’, for example through requirements for greater transparency about their content moderation decisions, recommendation algorithms, and online advertising systems.

The UK will shortly publish its Online Safety Bill, which will establish a new regulatory framework for tackling online harms, including the imposition of a duty of care and codes of conduct on Big Tech, to be overseen by an independent regulator (Ofcom).

Both proposals are based on a ‘co-regulatory’ model, under which the regulator sets out a framework that is substantiated with rules drawn up by the private sector, with the regulator monitoring compliance with those rules.

Both also draw on international human rights standards and the work of civil society in applying these standards in relation to the online public square, with the aim of increasing control for users over what they see online, requiring transparency about tech companies’ policies in a number of areas, and strengthening the accountability of platforms when they fall foul of the regulation.

The process for both proposals has also been inclusive, involving extensive multi-stakeholder consultations with civil society organizations and Big Tech, and both will be subject to further scrutiny in 2021, notably from the EU and UK parliaments.

Both proposals are at an early stage, and it remains to be seen whether they go far enough – or indeed will have a chilling effect on online platforms. But as an attempt to initiate a dialogue on globally coherent principles, they are positive first steps. They also provide food for thought for the new Joe Biden administration in the US as it turns its attention to the regulation of Big Tech.

For some time, civil society – most prominently David Kaye, the former UN Special Rapporteur on freedom of opinion and expression – has called for content regulation to be informed by universal international human rights law standards.

The EU and UK are particularly well placed to take the lead in this area because European countries have for decades been on the receiving end of judgments from the European Court of Human Rights on the appropriate limits to freedom of expression in cases brought under the European Convention on Human Rights.

In deciding these cases, the court has to balance the right to freedom of expression against the restrictions imposed – for example in the context of incitement to violence, political debate, and satire. Deciding where to draw the line on what can and cannot be expressed in a civilised society which prizes freedom of expression is inevitably a difficult exercise.

International human rights law provides a methodology for these decisions: it asks whether the interference with freedom of expression was prescribed by law, whether it pursued a legitimate aim, and whether it was necessary in a democratic society to achieve that aim – including whether the interference was proportionate (as for example in Delfi AS v Estonia, which concerned a news portal’s failure to take down unlawful hate speech).

To be effective, online content regulation has to bite on tech companies, which is a challenge given the internet is global but domestic law normally applies territorially. The EU’s proposals have an extraterritorial element as they apply to any online platforms providing services in the EU regardless of where the platform is headquartered.

Further, both the EU and UK want to give the regulator strong enforcement powers – for example, it is proposed that Ofcom will have the power to fine platforms up to ten per cent of their turnover for breaches.

Although the proposals would not apply directly to the deplatforming of Trump, which occurred in the US, the philosophy behind the EU and UK approach is likely to have an impact beyond European shores. It promotes a co-regulatory model that some of the bigger tech companies have been inviting for some time, reluctant as they are to ‘play God’ on content moderation decisions without reference to any regulatory framework.

In the absence of regulation, the standards of tech platforms such as Facebook and Twitter have already evolved over time in response to pressure from civil rights groups, users, and advertisers, including updated policies on protecting civic conversation and on hate speech.

Facebook has also set up an independent Oversight Board, whose members include leading human rights lawyers, to review decisions on content including – at its own request – the decision to indefinitely suspend Trump from Facebook and Instagram. Decisions on the Board’s first tranche of cases are expected imminently.

Gatekeeper status is key

Online content regulation also needs to address the role of Big Tech as the ‘digital gatekeepers’, because their monopoly power extends not just to editorial control of the news and information we consume, but also to market access.

The decision of Apple, Google, and Amazon to stop hosting right-wing social network Parler after it refused to combat calls for violence during the US Capitol riots was understandable in the circumstances, but also underlined the unilateral ability of Big Tech to decide the rules of the market.

Again, it is Europe where efforts are underway to tackle this issue: the EU’s draft Digital Markets Act imposes obligations on online gatekeepers to avoid certain unfair practices, and the UK’s new Digital Markets Unit will have powers to write and enforce a new code of practice on those technology companies with ‘substantial and enduring’ market power.

In the US, Biden’s team will be following these developments with interest, given the growing bipartisan support for strengthening US antitrust rules and reviving antitrust enforcement. The EU’s recently published proposals for an EU-US tech agenda include a transatlantic dialogue on the responsibility of tech platforms and strengthened cooperation between antitrust authorities on digital markets.

Ultimately, a consistent – and global – approach to online content governance is needed in place of fragmented approaches by different companies and governments. It is also important that the framework is flexible, so it is capable of applying not only to major democracies but also to countries where sweeping state regulation has too often been used as a pretext to curtail freedom of expression online.

The pursuit of a pluralistic framework tailored to different political and cultural contexts is challenging, and international human rights law cannot provide all the answers but, as a universal framework, it is a good place to start. The raft of regulatory measures from the EU and UK means that, regardless of whether Trump regains his online megaphone, 2021 is set to be a year of reckoning for Big Tech.

Harriet Moynihan, Senior Research Fellow, International Law Programme.
