Mental Autonomy Must Be Preserved as Tech Advances

Mural depicting the influence of various social media platforms in Bangalore, India. Photo by MANJUNATH KIRAN/AFP via Getty Images.

We like to believe that our thoughts and opinions are private, and that it is up to us how we make up our own minds. Our mental autonomy is a fundamental aspect of what makes us individuals. But it is being threatened as technology increasingly monopolizes attention and directs our thoughts and opinions.

As AI (artificial intelligence) becomes more embedded in daily life, these trends are set to continue. Although there is much more AI in the home than a few years ago (smartphones, virtual assistants such as Alexa and Google Assistant, heating controls and so on), in a few years’ time this period may be seen as one in which reliance on AI in daily life was still relatively light. It may also be seen as the moment of opportunity to entrench the rules and principles that should act as guardrails around the space in which AI and tech develop.

Current social media thrives on the ‘attention economy’ – in crude terms, the longer we spend on Facebook, Twitter or YouTube, the more money those companies make. In order to fuel the attention economy, social media harnesses the same mental processes as addictions to gaming, drugs, or alcohol, according to Professor Anna Lembke, a specialist in addiction at Stanford University.

You might have observed this phenomenon yourself, if you’ve found it hard not to check your phone when you catch sight of it, or noticed how much people are interacting with their phones rather than each other at the dinner table, or been concerned about how much time your children spend on TikTok or Snapchat.

As well as shaping how much time we spend on our devices, technology can have a marked impact on the views we hold without our being aware of its influence. There is widespread concern about technology’s capacity to reinforce systemic bias, which affects not only how people are treated, but potentially how each of us thinks.

The algorithms that determine what users see on social media are tasked with maximizing attention, prioritizing material to which a user is statistically more likely to react. In practice this often means negative, emotive, divisive content rather than reasoned, less sensational discussion.
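
To make that dynamic concrete, the short sketch below is a purely illustrative toy in Python, with invented posts and scores rather than any platform’s actual system. It shows how ranking solely by predicted engagement surfaces the most provocative content first.

    # Illustrative toy only: not any platform's actual ranking system.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_engagement: float  # model's estimate of how likely a user is to react

    def rank_feed(posts):
        # The objective rewards reactions alone; nothing here values
        # accuracy, nuance or balance, so provocative material rises.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

    feed = rank_feed([
        Post("Measured policy analysis", predicted_engagement=0.02),
        Post("Outrage-inducing claim", predicted_engagement=0.35),
    ])
    print([p.text for p in feed])  # the divisive post is shown first

Nothing in such an objective penalizes divisiveness; the bias towards sensational content is an emergent property of optimizing for reactions.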

Political players, both domestic and foreign, have the technological tools to alter our views deliberately and on a mass scale without our knowledge, as the Cambridge Analytica scandal exposed, and as social media companies’ reports of coordinated inauthentic behaviour and platform manipulation continue to highlight. This repeated nudging towards polarization and intolerance can drive political views to extremes, and even directly fuel violence and conflict, as was seen in Myanmar.

On the commercial side, as described by Shoshana Zuboff, surveillance capitalism is founded on the market for predicting behaviour and shaping decisions, often without our knowledge. As the internet of things moves into domestic life, businesses from car manufacturers to central heating providers may gather information on their customers, using that knowledge to influence their customers’ views in order to sell more products, and trading data on individual personalities commercially.

There is a risk, for example, that insurance companies could decide how much to charge an individual based on predictions of how accident-prone that individual’s personality makes them, predictions which may themselves be biased by reference to race, age, gender or social background. Charges could even be based not on accident propensity at all, but on a prediction of the highest fee that the specific individual will be prepared to pay. Ultimately, even deeply personal views and decisions, on values, religion, relationships and major life choices, could be affected by influences hidden in technology.
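
As a purely hypothetical illustration of the pricing risk above, the sketch below quotes a premium from a predicted maximum willingness to pay rather than from accident propensity; every feature name and coefficient is invented for the example.

    # Hypothetical sketch: every feature and coefficient is invented.
    def predicted_max_fee(profile):
        # A model trained on behavioural data can pick up proxies for
        # race, age, gender or social background (postcode, device type,
        # browsing habits), importing bias even when those characteristics
        # are never used directly.
        base = 300.0
        base += 200.0 * profile.get("urgency_signals", 0.0)      # e.g. searching late at night
        base += 150.0 * profile.get("price_insensitivity", 0.0)  # e.g. browsing on a premium device
        return base

    def quote_premium(profile):
        # The quote tracks what the individual will tolerate paying,
        # not how accident-prone they actually are.
        return predicted_max_fee(profile)

    print(quote_premium({"urgency_signals": 0.8, "price_insensitivity": 0.6}))  # 550.0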

Awareness of, and concern about, these problems are increasing, reaching mainstream audiences for example through the recent documentary The Social Dilemma. Tim Kendall, CEO of Moment, which aims to help people use their phones in a healthier way, says we need to move from the ‘fossil fuel’ to the ‘clean energy’ versions of social media.

It will be key to change the business models both of social media platforms and of surveillance capitalism, so that profits are no longer contingent on maximizing attention or on the hidden shaping of views. Shaping alternative business models will require both a deeper understanding of the issues, informed by empirical evidence, and creative cross-sectoral thinking on responses.

At present, there is no accepted toolbox with which to tackle incursions into our mental autonomy, nor even a shared lexicon with which to discuss them. Discussion of ethics and AI has to date paid little attention to mental autonomy. Ethics officers in companies working on applications of AI, as well as regulators, should be mindful of the need to protect it. The lexicon in this field ought to be the freedoms of thought and opinion, long enshrined as absolute rights in international human rights law.

These human rights have rarely appeared to be under systemic threat until now. They entail that an individual has the right to keep their thoughts private, not to have their thinking manipulated, and not to be penalized for their views. The state has a duty to protect these rights, and companies have a responsibility to respect them in product design and delivery.

Freedoms of thought and opinion are not the same as privacy rights: it is possible to harness attention without intruding on privacy, and to manipulate political thinking without using personally targeted messaging. But freedoms of thought and opinion have not yet been much discussed or well understood, and there is little jurisprudence on them outside the context of thought as related to religion and conscience.

While lawyers such as Alegre and Aswad are already unpacking what these rights mean and entail, we urgently need more cross-disciplinary scholarship, more guidance from the courts, and more consideration by business and regulators as to where the dividing line should be between legitimate persuasion and illegitimate mind manipulation.

Endorsements from international groupings such as the G7 and the UN Human Rights Council, as well as from regional bodies such as the Council of Europe, a pioneer in considering the manipulative capabilities of algorithmic processes, would help civil society advocate for these rights.

And clear statements on the relevance of freedom of thought and freedom of opinion in the digital age from authoritative expert bodies, such as the UN human rights treaty monitoring bodies and UN human rights Special Procedures, would be valuable, building on the work of Professor David Kaye, former UN Special Rapporteur on Freedom of Opinion and Expression, on the implications of AI technologies for human rights in the information environment.

Most importantly, both business and regulators should pay regard to the freedoms of thought and opinion. Business should have regard to them when developing and deploying tech products, and when conducting human rights or ethics due diligence assessments. Governments should have regard to them in regulating AI, just as they do to privacy and freedom of expression. The attention paid to manipulation and deceptive techniques in the European Democracy Action Plan is a welcome first step.

There are many positives to technological development, but technology and AI should be designed and used in ways that respect mental autonomy: maintaining the freedom to think as we wish and to make up our minds free of hidden influences, and protecting future generations’ capacity to think innovatively and to hold views that swim against the majority tide. Vigilance is needed now to ensure that George Orwell’s nightmare vision of 1984 does not become reality in 2024, 2054 or 2084.

Kate Jones, Associate Fellow, International Law Programme.
