The scenes are shocking.
In the wake of the murder of three young girls in the northwestern town of Southport, England, riots erupted across the country. Seizing on misinformation about the suspect’s identity, far-right rioters embarked on a harrowing rampage, setting fire to cars, burning down mosques, harassing Muslims, looting stores and attacking hotels housing asylum seekers. Over one early August weekend, there were more than 50 protests and almost 400 arrests. In the week since, hundreds of rioters have been charged and dozens convicted.
The country is stunned. But for all the events’ eye-popping madness, we shouldn’t be surprised. The animosities underpinning the riots — hatred of Muslims and migrants alike — have long found expression in Britain’s political culture, not least under the previous Conservative government, whose cornerstone commitment was to “stop the boats” on which migrants made their way to British shores.
Far-right extremists, emboldened by that government’s turn to migrant-bashing, have been waiting for the perfect chance to take to the streets. Crucially, they have found a home online, where platforms — poorly regulated and barely moderated — allow the spread of hate-filled disinformation, whipping up a frenzy. These have been disturbing days. But the chaos has been coming.
Disinformation is at the heart of the riots. In the aftermath of the killings in Southport, users on X posted and shared false claims that the alleged attacker was an asylum seeker who had arrived in Britain by boat — when he was in fact born and raised in Wales. On TikTok, far-right users went live and called on one another to gather in protest. Their reach was wide. Thanks to the platform’s aggressively personalized For You page, such videos easily land in front of users who have already engaged with far-right or anti-migrant content.
The apparatus of assembly extended to messaging services. On Telegram, far-right group chats shared lists of protest locations; one message included the line “they won’t stop coming until you tell them.” In WhatsApp chats, messages circulated about reclaiming the streets and taking out “major bases” in immigrant areas of London. These calls to action were quickly amplified by far-right figures like Andrew Tate and Tommy Robinson, the founder of the English Defence League, who took to X to spread lies and foment hate. Almost immediately, people were out on the streets, wreaking havoc.
There was little to stop the outpouring of false claims and hateful language, even after officials released information about the suspect’s identity. Legislation on internet safety is murky and confusing. Last year, the Conservative government passed the Online Safety Act, whose remit is to protect children and force social media companies to remove illegal content. But there is no clear reference in the law to misinformation.
In January, new offenses under the act took effect, including posting fake news intended to cause “non-trivial harm” and other online abuse. And in the aftermath of the riots, the Labour government is reportedly planning to strengthen the law. These are good developments, to be sure. But much of the legislation is not yet in force, and it’s unclear how it will be enforced.
The bigger problem, though, is that so much in the law hinges on establishing intent, which is famously hard to do. Henry Parker, the vice president of corporate affairs at Logically, a British organization that monitors disinformation online, told me there need to be much clearer criteria for what constitutes intent and how it can be punished.
This is tricky territory: It’s hard to strike the right balance between protecting freedom of speech and controlling harmful speech. Even so, “it is legitimate for the government to get involved,” Mr. Parker said. “Just as there is a right of freedom of speech, there is a right for people to have access to accurate information.”
In the absence of effective regulation or oversight, social media platforms have played an increasingly central role in radicalizing far-right extremists in Britain. Under Elon Musk, X has allowed far-right users, including the likes of Mr. Robinson, to return to the platform. Since the riots started, Mr. Musk himself has stirred things up, claiming that “civil war is inevitable” and going on a bizarre tirade in a series of posts.
But the real damage has been how he has allowed harmful content to thrive. “X as a platform is uniquely vulnerable to massive-scale disinformation,” Imran Ahmed, founder of the Center for Countering Digital Hate, told me, “because they have basically abandoned enforcement of their rules.” The result is an online world of hate, lies and extremism.
The online world is connected to the offline world, of course. Far-right agitators in Britain are clearly drawing on widespread feelings of Islamophobia, racism and anti-migrant sentiment. In response to the riots, there has been some reticence among public figures to say this clearly. As a Muslim, I roll my eyes every time there are discussions in the media about whether clearly Islamophobic acts — like attacking mosques or threatening women wearing hijabs — are, in fact, Islamophobic. “Unless we identify what’s going on,” Zarah Sultana, an independent lawmaker, told me, “how can we possibly respond to it in the right way?”
Last Wednesday, people answered that question. Across England’s major cities, thousands of people — 25,000, according to one estimate — joined counterprotests to challenge the rioters. The far right, clearly deterred, mostly didn’t turn up. The peaceful mobilization of citizens, gathering in multiethnic areas at immigration centers that had apparently been marked for far-right attack, was an apt riposte to violent racism. Together with an expanded police response and energetic prosecutions, it worked to ward off further riots.
Prime Minister Keir Starmer, along with pledging “no letup” in legal action against rioters, has promised that people will be prosecuted for their actions online — and a handful have been convicted of inciting racial hatred. But there’s seemingly little the government can do to hold accountable the social media platforms themselves. These riots, xenophobic outbursts turbocharged by technology, were only a matter of time. The truly scary thing is how little we can do to stop them.
Hibaq Farah, a former technology reporter at The Guardian, is a staff editor in Opinion.