Even Without Cambridge Analytica, the Trump Campaign Already Had Everyone’s Data

An installation from the Big Bang Data exhibition at Somerset House in London in 2016. Photo: Getty Images.

Revelations that Cambridge Analytica may have enabled the Trump campaign to access the data of more than 50 million people during the US presidential election have caused concern. But a narrow focus on Cambridge Analytica alone masks the risks to democracy arising from internet platforms’ standard terms, business models and what they know about each and every user.

In a 60 Minutes interview in 2017, the head of social media for the Trump campaign, Brad Parscale, said he relied heavily on Facebook but downplayed the influence of Cambridge Analytica. It was a self-serving narrative that boosted Parscale’s own role in securing the Trump win. But the description rings true.

An aspect of the 2016 election story that has not received as much attention as it should is that Facebook staff – and staff from other tech companies – were embedded in the Trump campaign, ‘teaching us how to make the most of their platforms …every, single secret button, click, technology,’ according to Parscale. He claims that ‘Facebook was the 500-pound gorilla, 80 per cent of the [alleged $94 million] budget kind of thing.’ With Facebook staff’s help, Parscale developed ‘thousands of versions of tweaked ads to maximize response, and different versions of ads were A/B tested with different photos, colours, slogans’.
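The A/B testing Parscale describes is mechanically simple: serve competing versions of an ad, count responses, and keep the better performer. A minimal sketch of the idea, in Python, is below; the click-through rates, function name and impression counts are illustrative assumptions, not campaign data.

```python
import random

def ab_test(rate_a, rate_b, impressions=10_000, seed=42):
    """Simulate serving two ad variants and compare click-through rates.

    rate_a / rate_b are hypothetical 'true' click probabilities for
    variants A and B -- purely illustrative numbers, not real figures.
    Returns the observed rates and which variant performed better.
    """
    rng = random.Random(seed)  # fixed seed so the simulation is repeatable
    clicks_a = sum(rng.random() < rate_a for _ in range(impressions))
    clicks_b = sum(rng.random() < rate_b for _ in range(impressions))
    ctr_a = clicks_a / impressions
    ctr_b = clicks_b / impressions
    winner = "A" if ctr_a >= ctr_b else "B"
    return ctr_a, ctr_b, winner

# Compare a variant with a 1.0% click rate against one with 1.5%
ctr_a, ctr_b, winner = ab_test(0.010, 0.015)
```

At campaign scale this loop runs continuously over thousands of variants, with the losing versions retired and the winners further tweaked, which is how small differences in photos, colours and slogans compound into large differences in response.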

In this context, reports of the data Cambridge Analytica was using – two-year-old surveys relating to only 50 million US Facebook users – pale in comparison to the access the Trump campaign had with Facebook’s blessing.

The reason why Facebook’s influence and advertising revenues have grown so dramatically is that it is so good at its job. For most people, most of the time, Facebook is a wonderful way to keep in touch with friends and share photos, videos and stories. Its user interfaces, even its terms and privacy centre, communicate complex messages beautifully – making it easy for a user to know what to do.

Facebook is able to help advertisers reach susceptible targets because of all the data generated by its 2 billion monthly users. Facebook obtains most of those data directly from users with their consent (or at least, an illusion of consent), from users’ devices, locations and updates, and from third-party data companies. The Facebook terms of service and data policies permit a stunning level of intrusion, especially considering how much time people spend on the platform, and what they share, including:

  • All user content (including ostensibly private chat messages) is available for Facebook’s use, including location, device, public information and information from third-party services and sites that use Facebook services such as the like button or login.
  • Facebook keeps all user data for as long as they have an account. A user can delete the data only by deleting their account (and the deletion will not cover data that has been shared by others, such as the user’s friends). Facebook’s ‘download your information’ tool offers the sobering experience of reviewing private chat messages dating back to when a user first opened their Facebook account.
  • Facebook’s data policy states that ‘when you use third-party apps…they can access…your list of friends, as well as any information that you share with them’.

Discussion threads on Twitter last weekend debated whether the data relating to 50 million Facebook users constituted a ‘data breach’ or a breach of the researcher’s agreement with Facebook. This discussion ignores the uncomfortable fact that by clicking ‘I agree’ to terms that few will read, each Facebook user has already consented to those data being shared.

The collision of platforms’ advertising-based business models and mass popular uptake is having, and will continue to have, unpredictable consequences for democracy. Twitter feeds, search results and Facebook walls may seem like neutral spaces. They are not. The companies behind them heavily curate and personalize what people see, drawing on what they know about their location and preferences.

And they know a lot, thanks to their billions of users and the algorithms behind the targeted advertising that funds the free-to-use platforms. A 2015 study demonstrated that with just 10 likes, Facebook knows a user better than most work colleagues, and with 300 likes, better than their spouse.

Online platforms increasingly determine what news people see, what opinions they’re exposed to, and they are frequently drawn into editorial-type decisions about what content should be removed and what should remain. By accident, they have become more like publishers than neutral platforms.

In that context, while Cambridge Analytica’s links to the Trump campaign are concerning, so is the intrusion that people agree to when using popular internet platforms, and the apparently unlimited lifespan of data – such as chat messages – whose real life equivalents are ephemeral and simply disappear. Future electoral rules should prohibit staff from the platforms from becoming embedded in political campaigns, and consumer groups should continue to campaign for fairer terms of service.

Emily Taylor, Associate Fellow, International Security.
