New UK bill can fight fresh wave of online racist abuse

Giant digital mural in Trafford Park, Manchester supporting England footballers Marcus Rashford, Jadon Sancho, and Bukayo Saka, who were targeted with racist abuse online after the Euro 2020 final. Photo by Charlotte Tattersall/Getty Images.

The ugly online abuse directed at members of the England football team following the Euros final, and then at Lewis Hamilton after the British Grand Prix, was not only hateful to the individuals concerned but also divisive for the UK more broadly.

More needs to be done to regulate online platforms to prevent the spread of such abuse at scale. Online platforms are making increasing efforts to ‘self-regulate’ in order to tackle online abuse. Over the past year, Facebook and Twitter have strengthened their policies on hateful speech and conduct, such as Facebook’s ban on Holocaust denial. Both have become more vigilant at deplatforming those who violate their terms of service, such as Donald Trump, and at removing online abuse using a combination of automated systems and human moderators.

Twitter announced that, in the 24 hours following the Euros final, it had removed more than 1,000 tweets and permanently suspended several accounts for violating its rules. But given the scale of the issue, inevitably not all abusive posts are picked up and, once a post has been seen, the damage is arguably done.

Platforms have also partnered with NGOs on initiatives to counter hate speech and have launched initiatives to tackle the rise in coordinated inauthentic behaviour and information operations that seek to sow distrust and division. But while these efforts are all laudable, they are not enough.

The root of the problem is not the content itself but a business model in which platforms’ advertising revenue is directly linked to engagement. This encourages the use of ‘recommender’ algorithms that amplify divisive content by microtargeting users based on their previous behaviour, as seen not just with racist abuse but also with other toxic content such as anti-vaccination campaigns. Abusers can also remain anonymous, shielding them from the consequences of their actions.

Creating a legal duty of care

The UK government’s Online Safety Bill, published in May 2021, aims to tackle harmful content online by placing a duty of care on online platforms to keep users safe and imposing obligations tailored to the size, functionality, and features of the service.

Social media companies will be expected to comply with their duties by carrying out risk assessments for specified categories of harm, guided by codes of practice published by the independent regulator, OFCOM. The bill gives OFCOM the power to fine platforms up to £18 million or ten per cent of global turnover, whichever is higher, for failure to comply.

Following the Euros final, the UK government spoke of referring some racist messages and conduct online to the police. But only a small proportion can be prosecuted, given the scale of the abuse and the fact that only a minority of it constitutes criminal activity. The majority is ‘lawful but harmful’ content – toxic and dangerous, but not technically falling foul of any law.

When addressing ‘lawful but harmful’ material, it is crucial that regulation negotiates the tension between tackling abuse and preserving freedom of expression. The scale at which such expression can spread online is key here: freedom of speech should not automatically mean freedom of reach. But it is equally important that regulation does not have a chilling effect on free speech, as seen in the creeping digital authoritarianism in much of the world.

The Online Safety Bill’s co-regulatory approach aims to address these tensions by requiring platforms within the scope of the bill to specify in their terms and conditions how they deal with content on their services that is legal but harmful to adults, and by giving the regulator powers to police how platforms enforce them. Platforms such as Facebook and Twitter may already have strong policies on hate speech – now there will be a regulator to hold them to account.

Devil is in the detail

How successful OFCOM is in doing so will depend on the precise powers the bill bestows on it, and on how OFCOM chooses to use them. It is still early days: the bill will be scrutinized this autumn by a committee of MPs before being introduced to parliament. This committee stage will provide an opportunity to consider how the bill may need to evolve to get to grips with online abuse.

These two divisive and toxic episodes in UK sport are only likely to increase pressure from the public, parliament, and politicians for the bill to reserve robust powers for OFCOM in this area. If companies do not get better at dealing with online abuse, OFCOM should have the power to force platforms to take more robust action, including by auditing platforms’ algorithms to establish the extent to which their ‘recommender’ settings play a part in spreading hateful content.

Currently, the bill’s definition of harm is confined to harm to individuals, and the government has stated it does not intend this bill to tackle harm to society more broadly. But if racist abuse of individuals provokes racist attacks more widely, as has happened, the regulator should be able to take that wider context into account in its investigation and response.

Responses to the draft bill so far indicate challenges ahead. Some argue the bill does not go far enough to tackle online abuse, especially on the issue of users’ anonymity, while others fear the bill goes too far in stifling freedom of expression, labelling it a recipe for censorship.

Parliamentary scrutiny will need to take into account issues of identity, trust, and authenticity in social networks. While some call for a ban on the cloak of anonymity behind which racist abusers can hide online, anonymity does have benefits for those in vulnerable groups trying to expose hate.

An alternative approach gaining attention is to provide each citizen with a secure digital identity, which would both give users greater control over what they see online and enable social media platforms to verify specific accounts. Implemented with appropriate privacy and security safeguards, a secure digital ID would have benefits beyond social media, particularly as the COVID-19 pandemic moves more of daily life online.

The online public square is global, so countries other than the UK, as well as international organizations, must also take measures. It is encouraging to see synergies between the UK’s Online Safety Bill and the EU’s Digital Services Act, published in draft form in December 2020, which also adopts a risk-based, co-regulatory approach to tackling harmful online content. And the UK is using its G7 presidency to work with allies to forge a more coherent response to internet regulation at the international level, at least among democratic states.

Addressing the scourge of online hate speech is challenging, so the UK’s Online Safety Bill will not satisfy everyone. But it can give the public, parliament, and politicians a structure within which to debate these crucial issues and, ultimately, to achieve more effective ways of tackling them.

Harriet Moynihan, Acting Director, International Law Programme.
