Europe’s Data Protection Law Is a Big, Confusing Mess

There is a growing realization that our data is under attack. From breaches at Equifax to Cambridge Analytica’s misuse of the profile information of as many as 87 million Facebook users, it seems as if none of our personal data is safe. And more and more about us is being captured, stored and processed by smart devices like thermostats, baby monitors, Wi-Fi-connected streetlights and traffic sensors.

In the United States, people who are concerned are looking to Europe. They see Europe’s “right to be forgotten,” by which citizens can force companies to erase some of their personal data, as a step toward regaining ownership of their online selves. And on May 25, the European Union will bring into force the most sweeping regulation ever of what can be done with people’s data.

This law, the General Data Protection Regulation, will give citizens greater control over their data while requiring those who process personal data in the European Union or about its citizens to take responsibility for its protection. The G.D.P.R. will give Europeans the right to data portability (allowing people, for example, to take their data from one social network to another) and the right not to be subject to decisions based on automated data processing (prohibiting, for example, the use of an algorithm to reject applicants for jobs or loans). Advocates seem to believe that the new law could replace a corporate-controlled internet with a digital democracy.

There’s just one problem: No one understands the G.D.P.R.

The law is staggeringly complex. It emerged from three years of intense lobbying and contentious negotiation; a draft of the regulation received some 4,000 proposed amendments in the European Parliament, a reflection of the divergent interests at stake. Corporations, governments and academic institutions all process personal data, but they use it for different purposes.

There’s another reason for the regulation’s complexity and ambiguity: What are often framed as legal and technical questions are also questions of values. The European Union’s 28 member states have different historical experiences and contemporary attitudes about data collection. Germans, recalling the Nazis’ deadly efficient use of information, are suspicious of government or corporate collection of personal data; people in Nordic countries, on the other hand, link the collection and organization of data to the functioning of strong social welfare systems.

Thus, the regulation is intentionally ambiguous, representing a series of compromises. It promises to ease restrictions on data flows while allowing citizens to control their personal data, and to spur European economic growth while protecting the right to privacy. It skirts possible differences between current and future technologies by relying on broad principles.

But those broad principles don’t always accord with current data practices. The regulation requires those who process personal data to demonstrate accountability, in part by limiting collection and processing to what is necessary for a specific purpose and forbidding other uses. That may sound good, but machine learning, for example, one of the most active areas of research in artificial intelligence and a technique behind targeted advertising, self-driving cars and more, uses data to train computer systems to make decisions that cannot always be specified in advance or explained after the fact.

In 2017, the year after the regulation was approved, I interviewed scientists, data managers, legal scholars, lawyers, ethicists and activists in Sweden. I learned that many of the scientists and data managers who will be subject to the law found it incomprehensible. They doubted that absolute compliance was even possible.

One expert at Sweden’s national bioinformatics platform said: “We often wonder, like, what does the law say about this? Nobody knows.” Or as a scientist in charge of computing and storage facilities at a major university put it, the G.D.P.R. says, more or less, “that adequate safety should be in place, and so on. Right — what does that mean?”

Many of the law’s broad principles, though they avoid references to specific technologies, are nevertheless based on already outdated assumptions about technology. “I think it’s very clear that they imagined some company that has your data physically stored somewhere, and you have the right to take it out,” a law professor told me of the G.D.P.R.’s approach to data portability. But in the era of big data and cloud services, data rarely exists in only one place.

What the regulation really means is likely to be decided in European courts, which is sure to be a drawn-out and confusing process.

Still, the G.D.P.R. is not a lost cause. We do need rules about data. But legal frameworks, particularly when they are long, complex and ambiguous, can’t be the only or even the primary resource guiding the day-to-day work of data protection.

If the ultimate goal is to change what people do with our data, we need more research that looks carefully at how personal data is collected and by whom, and how those people make decisions about data protection. Policymakers should use such studies as a basis for developing empirically grounded, practical rules.

In the end, pragmatic guidelines that make sense to people who work with data might do a lot more to protect our personal data than a law that promises to change the internet but can’t explain how.

Alison Cool is a professor of anthropology and information science at the University of Colorado, Boulder.
