The Disinformation Wave is Just Getting Started – But Are We Ready for It?

Over the past two years, a wave of disinformation campaigns has upended democratic electoral systems across the globe, prompting both governments and electorates to demand action to counter the growing prevalence of fake news. So far, several governments have begun enacting laws to address the issue, from Malaysia’s anti-fake news bill passed in April to French President Emmanuel Macron’s advocacy for legislation criminalizing falsified content.

While these clampdowns are highly visible, they amount to knee-jerk attempts to criminalize a growing and diffuse threat. Beyond their potential to severely restrict free speech under the guise of fighting fake news, these laws do not address the underlying factors that allow fake news campaigns to succeed in the first place, such as poor digital and information literacy among the general public.

Likewise, criminal legislation alone will not equip governments or the public against the next wave of disinformation threats derived from emerging technologies, such as “deepfakes.” To respond effectively to this threat, or even get ahead of it, more attention must be directed to the intersection of disinformation and emerging technologies such as artificial intelligence and machine learning.

Deepfakes are digitally manipulated videos, images, and sound files that can be used to appropriate someone’s identity, including their voice, face, and body, to make it seem as if they did something they did not. So far, deepfakes have largely consisted of celebrities’ faces grafted into pornographic videos, but as law professors Danielle Citron and Robert Chesney warn, the leap from fake celebrity porn to other forms of falsified content is smaller than we think.
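To make the mechanism concrete: the early face-swap tools behind most deepfakes relied on a simple autoencoder trick, where a shared encoder learns a common facial representation, a separate decoder is trained per identity, and swapping decoders at inference time renders one person’s pose and expression with another person’s face. The sketch below is a minimal, hypothetical illustration of that architecture in PyTorch, not any particular tool’s code; the layer sizes and the 64x64 input are arbitrary assumptions.

```python
# Minimal sketch of the shared-encoder / two-decoder idea behind early
# face-swap deepfakes. Hypothetical illustration only: layer sizes and
# the 64x64 input are arbitrary, and real tools add face detection,
# alignment, and far deeper networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a face image to a shared latent representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for one specific identity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()    # trained on faces of BOTH identities
decoder_a = Decoder()  # trained to reconstruct identity A
decoder_b = Decoder()  # trained to reconstruct identity B

# The "swap": encode a frame of person A, then decode it with B's
# decoder, producing A's pose and expression on B's face.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
with torch.no_grad():
    swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

Because the encoder is shared across both identities, it is forced to learn identity-agnostic facial structure, which is what makes the decoder swap produce a plausible composite rather than noise.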

Until recently, such realistic computer-generated content was available only to major Hollywood studios or well-funded researchers. Rapid advances in the underlying technology, however, have produced applications that allow nearly anyone, regardless of technical background, to create high-quality deepfakes ranging from the innocuous (a friend depicted in an embarrassing situation) to the incendiary (a world leader appearing to threaten war). The proliferation of seemingly authentic but manipulated media, at a time when verifying the authenticity of content is already difficult, is deeply concerning.

As the prevalence of disinformation in society has become clearer, governments and non-profits have started to fund research on the impact of fake news on societies and political systems. But this addresses only part of the problem, leaving out the emerging technologies, such as artificial intelligence and machine learning, that are already fueling the next disinformation wave. In late March, for example, the Hewlett Foundation announced $10 million for research on digital disinformation and its influence on American democracy, with no specific call for research on deepfakes or other emerging technologies. Given the devastating threat deepfakes could pose, this is a missed opportunity to get ahead of the problem and improve our understanding of deepfakes and their potential for harm. Similar initiatives in the European Union heavily emphasize understanding and combating the current brand of fake news rather than preparing for these more advanced disinformation threats.

These research and development efforts should go hand in hand with strong public programs in digital and information literacy that teach people how to identify distorted media and falsified content, including content produced with emerging technologies. In 2017, California lawmakers introduced two bills requiring teachers and education boards to create curricula and frameworks focused on media literacy. To have the greatest impact, however, governments must also draw on non-profit and private sector expertise to help the public understand the technical issues at play, improving their ability to distinguish real content from fake.

In its coverage of the rise of deepfakes across the internet, the tech media site Motherboard stated that we are “truly fucked,” predicting that it won’t be long before the public becomes embroiled in chaos over these emerging forms of disinformation. But we don’t have to feed the fear. Rather than pass hasty and ineffective legislation, governments can work with nonprofits and the private sector to direct resources to relevant research on emerging technologies. Equally important will be more support for programs that educate the public on identifying disinformation threats based on both old and new technologies.

Deepfakes are at the cutting edge of the disinformation landscape right now, but who knows for how long? If governments and non-profits act strategically, they could even find themselves ahead of the game.

Spandana Singh, YPFP Cybersecurity & Technology Fellow.
