The rising threat to democracy of AI-powered disinformation


Two days before the Slovakian election in September, a mysterious recording went viral on social media. In it, liberal opposition candidate Michal Šimečka could apparently be heard plotting with a journalist to buy votes and rig the result.

“It will be done in a way that nobody can accuse you of taking bribes”, Šimečka purportedly says on the audio, according to a transcript of the conversation that also circulated at the time. “Be careful, there are many people around, don’t use the B word”, the journalist replies.

The most explosive thing about the recording was that it was fake — a sophisticated hoax created by artificial intelligence, said fact-checkers, citing indicators such as unnatural diction and atypical pauses.

However, the recording was shared by thousands of voters during the country’s moratorium on reporting on the election, making it harder for Šimečka’s allies or the media to debunk it as fake.

Robert Fico, right, and Slovakian opposition candidate Michal Šimečka speak after an electoral TV debate last September © Vladimir Simicek/AFP via Getty Images

Šimečka, who topped the exit polls, denounced the recording as “colossal, obvious stupidity”, and the Slovakian police warned voters to be wary of nefarious online actors with “vested interests”. But Šimečka went on to lose the vote to his populist, pro-Russia rival Robert Fico.

The perpetrators remain unknown and the precise impact on the result is impossible to gauge — but the episode foreshadowed a new dawn for information warfare and a gargantuan challenge for Silicon Valley’s social media platforms ahead of a historic year of elections. An estimated 2bn people, or around half of the world’s adult population, are expected to head to the polls in 2024, including in the US, the EU, India and the UK.

Online disinformation has been a factor in elections for many years. But recent, rapid advances in AI technology mean that it is cheaper and easier than ever to manipulate media, thanks to a brisk new market of powerful tools such as OpenAI’s ChatGPT, AI art start-up Midjourney or other text, audio and video generators. At the same time, manipulated or synthetic media is becoming increasingly hard to spot.

Already, realistic deepfakes have become a new front in the disinformation landscape around the Israel-Hamas and Russia-Ukraine conflicts. Now, they are poised to muddy the waters in electoral processes already tarnished by dwindling public trust in governments, institutions and democracy, together with sweeping illiberalism and political polarisation.


“The technologies reached this perfect trifecta of realism, efficiency and accessibility”, says Henry Ajder, an expert on AI and deepfakes and adviser to Adobe, Meta and EY. “Concerns about the electoral impact were overblown until this year. And then things happened at a speed which I don’t think anyone was anticipating”.

Authorities are already issuing warnings. In the UK in November, GCHQ raised the prospect of “AI-created hyper-realistic bots” and increasingly advanced deepfake campaigns ahead of the country’s election. A bipartisan group of US senators recently proposed legislation to ban “materially deceptive AI-generated” content in political advertising.

Social media platforms including Meta, Google’s YouTube, TikTok and X now face pressure to introduce guardrails around deepfakes, curb nefarious actors and ensure they make the correct moderation calls when it comes to highly ambiguous media, while simultaneously remaining non-partisan.

Yet many are less equipped to do so than in previous big elections, experts warn. Some, including Meta, trimmed their investment in teams dedicated to maintaining safe elections after the tech stock downturn in early 2023. In the case of Elon Musk’s X, content moderation resources have been cut back drastically as he vows to restore what he dubs free speech absolutism.

The efforts of the US-based tech groups to invest in fact-checking and tackling misinformation have also become politicised, as rightwing US politicians accuse them of colluding with the government and academics to censor conservative views.

Donald Trump speaks at a campaign rally in 2016. Following his election, the notion of online disinformation wars was thrust into public awareness © Robyn Beck/AFP/Getty Images

Multiple left-leaning disinformation experts and academics warn this dynamic is forcing the platforms, universities and government agencies to pull away from election integrity initiatives and collaborations globally for fear of retribution. Meta said in December that the US government had halted its information sharing with the platform, despite such collaborations previously helping the company identify “sophisticated threat actors”.

This toxic mixture of more sophisticated AI tools and less robust prevention measures means that what experts describe as the disinformation doomsday scenario — that a viral undetectable deepfake will have a catastrophic impact on the democratic process — is no longer merely theoretical, many warn.

“I think that the combination of the chaos that the generative AI tools will enable and the rolling back of the programmes that the platforms had in place to ensure election integrity is this unfolding disaster in front of our eyes”, says the head of one digital research non-profit. “I’m extremely concerned that the victim will be democracy itself”.


The notion of online disinformation wars was thrust into public awareness in the wake of Donald Trump’s win in the 2016 US presidential election.

US officials later found evidence of co-ordinated online efforts by Russia to influence the vote, in which tech-savvy students were recruited to a St Petersburg-based “troll farm” called the Internet Research Agency.

While the campaign went almost entirely unnoticed by the social media platforms, the Russian tactics at the time were crude. IRA staff set up clusters of fake accounts and pages and bought advertising in roubles. False personas attacked Republican Trump or his rival Democratic candidate Hillary Clinton via vulgar memes; others sought to stir up division around topics such as racial tensions, immigration or gun rights, often in broken English.

By 2020, social media disinformation campaigns had operated in more than 80 countries, according to the Oxford Internet Institute — orchestrated variously by political parties, shadowy public relations and private sector intelligence groups, or governments themselves. Some began to experiment with rudimentary deepfaking techniques, particularly digital renderings of fictitious faces.

In response, Google, Meta, TikTok and X introduced rules prohibiting co-ordinated covert influence operations and misinformation about voting and voter suppression. Google bans doctored media related to politics and social issues, while Meta has banned manipulated content that is designed to mislead and allows its fact-checkers to flag if media is “altered”, making it less prominent in users’ feeds.

Elon Musk at the Vivatech technology start-ups and innovation fair in Paris in June. His lawyers unsuccessfully argued last year that a YouTube video of the entrepreneur talking about Tesla’s self-driving capabilities might be a deepfake © Alain Jocard/AFP/Getty Images

But the advent of generative AI — powerful multi-modal models that can blend text, image, audio and video — has radically transformed the potential for deepfakes, putting the ability to create convincing media at scale within the reach of almost anyone.

Video generator start-ups such as Los Angeles-based HeyGen and London-based Synthesia, catering to a broad range of industries including film and advertising, allow customers to create video clips fronted by AI avatars for a little over $20 a month, in a fast-growing market already worth around half a billion dollars in 2022. The same technology can be used to generate AI-powered news articles, run news sites, automate armies of bots or even create chatbots that might be scripted to eventually rally a certain political sentiment in a user.

“2023 has really been the primer for what we’re going to be seeing [this] year when we’re talking about manipulated media and AI-powered influence ops . . . be they from individual actors or maybe nation state organisations”, says Brian Liston, intelligence analyst at cyber group Recorded Future.

Deepfake audio of UK opposition leader Keir Starmer berating assistants circulated on the first day of the Labour party conference in October. Google’s YouTube last year suspended several accounts in Venezuela showing videos of fake news anchors reading out disinformation that cast President Nicolás Maduro’s regime in a positive light.

Motives for campaigns vary. Some are designed to sway opinion or discredit candidates. Others, as is common in the latest Russian campaigns, seek to undermine trust in democracy as part of broader, more strategic geopolitical objectives. Then there are those that are merely trying to generate engagement and profit, through advertising clicks for example.

There are also secondary effects. Just as some politicians such as Trump have weaponised the concept of “fake news” by levelling the term at narratives they disagree with, so too can growing public awareness of deepfakes be wielded to discredit truths and deny reality.

Already, crying deepfake is a tactic deployed by legal teams in defence of their clients. Last year, lawyers for Elon Musk suggested that a YouTube video of the billionaire entrepreneur talking about Tesla’s self-driving capabilities might be a deepfake, as part of a lawsuit over the death of a man using the system. The judge said this was not grounds to dismiss the evidence, calling the argument “deeply troubling”.

In the political sphere, candidates now have the “ability to dismiss a damaging piece of audio or video”, says Bret Schafer, a propaganda expert at the Alliance for Securing Democracy, part of the German Marshall Fund think-tank. The concept, known as the “liar’s dividend”, was first outlined in a 2018 academic paper arguing that “deepfakes make it easier for liars to avoid accountability for things that are in fact true”.

Research also shows that the very existence of deepfakes deepens mistrust in everything online, even if it is real. In politics, “there’s an autocratic advantage to attacking the idea that there’s such a thing as objective truth”, Schafer says.

“You can get people to the point of, ‘Voting doesn’t matter. Everybody’s lying to us. This is all being staged. We can’t control any outcomes here.’ That leads to a significant decline in civic engagement”.


Social media companies are attempting to create guardrails for this new kind of activity.

Both Meta and Google recently announced policies requiring campaigns to disclose if their political adverts have been digitally altered. TikTok requires that synthetic or manipulated media showing realistic scenes must be clearly disclosed through a sticker, label or caption. It bans synthetic media that harms the likeness of real private figures, or of public figures if used for endorsements or scams.

X, meanwhile, says that it will either label or remove deceptive manipulated media where it causes harm, including causing confusion around political stability.

But preventing this kind of media from being posted and shared is becoming more difficult, even for companies like TikTok and Meta that have invested in high-quality detection capabilities.

Speed and real-time response is key. If confidence in the veracity of a viral narrative is shaky, and platforms are slow to respond, this leaves a dangerous epistemic vacuum, Ajder says, “for bad faith actors and less scrupulous media — particularly biased media — to come in and fill that void with confirmation”.

Ron DeSantis, Republican US presidential candidate, speaks to a crowd in Iowa last August. His attack advert used AI to generate a Trump-like voice © Joseph Cress/USA Today Network via Reuters

But where academic researchers and moderation teams had previously focused on detecting discrepancies in AI-generated faces, says Renee DiResta, a Stanford research manager, they are now starting again from scratch with much of the latest generative AI technology. “The techniques for detecting fake faces are not relevant to newer diffusion model creations”, she says.

In earlier examples of generated fakes, facial features were “aligned within a grid”, DiResta says, and anomalies were easier to spot. Today’s human deepfakes can be in any pose, and move seemingly naturally. “There is a constant adversarial dynamic where the tech gets better before the detection tech. It’s the ‘red queen problem’ — we run as fast as we can to stay in the same place”.

Experts agree that existing detection tools are generally not accurate enough to act as a reliable and definitive safeguard. “The variance of the percentage of detection is kind of all over the place and there are inconsistencies”, says Liston.
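To see why such tools are brittle, consider a deliberately simplified detector of the kind once aimed at early GAN imagery: it measures how much of an image’s energy sits in the high-frequency part of its spectrum, where upsampling artefacts tended to show up, and flags anything above a hand-tuned cut-off. The function names and threshold below are illustrative assumptions rather than any vendor’s actual method, and output from newer diffusion models routinely slips past heuristics of this sort.

```python
# Toy heuristic, not a real deepfake detector: a crude frequency-domain check
# of the kind that sometimes caught early GAN images. Threshold is invented.
import numpy as np

def high_freq_ratio(gray: np.ndarray) -> float:
    """Share of spectral energy lying outside the low-frequency centre band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8].sum()
    return float(1.0 - low / spectrum.sum())

def looks_synthetic(gray: np.ndarray, threshold: float = 0.35) -> bool:
    """Flag unusually strong high-frequency energy; will misfire on many real photos."""
    return high_freq_ratio(gray) > threshold

if __name__ == "__main__":
    # Random noise stands in for a greyscale image purely to show the call pattern.
    sample = np.random.rand(256, 256)
    print(round(high_freq_ratio(sample), 3), looks_synthetic(sample))
```

A single hand-tuned number like this is exactly the kind of inconsistency Liston describes: move the threshold and the detection rate swings wildly.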

In addition, platforms must continue to fine-tune policies on how to then treat those deepfakes if they are detected. “Do you take that stuff down? Do you label it? Do you reduce the reach?” says Katie Harbath, global affairs officer at Duco Experts and a former Meta public policy director.

She notes that platforms will need nuances in their policies to distinguish between the many benign uses of AI, such as a campaign wanting to use it to generate a particular background for an advert, and the nefarious ones.

Then there are even greyer areas, such as how to handle satire or parody. A Ron DeSantis attack advert in July took Trump’s written posts on his own social platform Truth Social and used AI to generate audio of a Trump-like voice reading them. Should these be acceptable? “Those are, I think, some of the still open questions around all of this that start to get really tricky because the devil is in the details of defining all this”, Harbath says.

Beyond detection, many platforms have been exploring using watermarking or other indicators to assign a signature of authenticity to content before it is published. Both Google and Meta have been introducing invisible watermarking for content generated by their own AI tools.
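As a rough illustration of the embed-and-verify idea behind such provenance signatures, the sketch below hides a short marker in the least significant bits of an image’s pixels and later checks for it. It is a minimal toy, not the scheme Google or Meta actually use: their invisible watermarks are designed to survive cropping, compression and re-encoding, which this one would not, and the marker string and function names here are assumptions for the example.

```python
# Minimal sketch of an invisible watermark via least-significant-bit embedding.
# Purely illustrative; real provenance watermarks are far more robust.
import numpy as np

TAG = np.frombuffer(b"ai-generated", dtype=np.uint8)  # hypothetical marker

def embed(pixels: np.ndarray) -> np.ndarray:
    """Hide the marker bits in the lowest bit of the first pixels."""
    bits = np.unpackbits(TAG)
    out = pixels.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return out.reshape(pixels.shape)

def verify(pixels: np.ndarray) -> bool:
    """Check whether the marker is present in the lowest bits."""
    bits = pixels.ravel()[: TAG.size * 8] & 1
    return np.array_equal(np.packbits(bits), TAG)

if __name__ == "__main__":
    image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    watermarked = embed(image)
    print(verify(image), verify(watermarked))  # expected: False True
```

The weakness sceptics point to is visible even here: any re-encoding that touches the low-order bits destroys the signature, which is why common industry standards and more resilient schemes are seen as essential.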

But sceptics note that this is a work in progress, and only an effective solution if widely and consistently adopted. Meta’s global affairs head Nick Clegg has spoken publicly about the need for common industry standards, while TikTok said it was assessing options for a content provenance coalition.

Experts say disinformation campaigns have shifted their attention to companies that take a lighter approach towards moderation. X and Telegram in particular are where “a lot of this stuff originates now because [perpetrators] know that the legacy platforms are putting resources into this”, notes Harbath.

X said it was “well-equipped to handle” synthetic and manipulated media, and pointed to its volunteer fact-checking programme, Community Notes. Telegram said it was partnering with fact-checking agencies to add labels to potentially misleading content and to share accurate information via their channels, adding that it believed the most effective way to combat misinformation was by “spreading verified information”.

Even within larger platforms, resources for tackling disinformation and election integrity can be stretched thin or are less of a priority in certain geographies. This was one of the central criticisms levelled at Meta by whistleblower Frances Haugen in October 2021. Haugen leaked internal documents to the media showing, for example, that non-English-speaking countries had low numbers of moderators.

“Your regulation is only as good as your ability to enforce it”, warns Ajder. “When there’s no meaningful way that platforms can really reliably detect AI-generated content at scale, and also when there’s lots of benign AI-generated content out there, it’s questionable as to whether these policies . . . are actually going to make a huge difference”.

Some of the platforms remain defiant. “The defender community across our society needs to prepare for a larger volume of synthetic content” in 2024, Meta wrote in a November report. But it added that it did not think it would “upend our industry’s efforts to counter covert influence operations”.

For some, there is a different glimmer of hope: that the authenticity crisis will eventually come full circle, with voters returning to legacy institutions over social media platforms for their information. “If I’m an optimist here, in five years my hope would be that this actually draws people back to trusting, credible, authoritative media”, says Schafer.

But for others, the future appears bleak if we cannot believe what our eyes and ears are telling us. Nothing will be taken at face value; everything interrogated; democracy more fragile.

“The technology is here to stay and will get very, very good”, says Nicolas Müller, machine-learning research scientist at Fraunhofer AISEC. “You will probably not be able to just trust media like audio or video. This will need a paradigm shift in our head. Maybe, like Covid, we just have to live with it”.

Hannah Murphy
