Human Control Is Essential to the Responsible Use of Military Neurotechnology

A model of a human brain is displayed at an exhibition in Lisbon, Portugal. Photo: Getty Images.

Technological progress in neurotechnology, and its military application, is proceeding apace. Brain-machine interfaces have been the subject of study since as early as the 1970s. By 2014, the UK’s Ministry of Defence was arguing that the development of artificial devices, such as artificial limbs, is ‘likely to see refinement of control to provide… new ways to connect the able-bodied to machines and computers.’ Today, brain-machine interface technology is being investigated around the world, including in Russia, China and South Korea.

Recent developments in the private sector are producing exciting new capabilities for people with disabilities and medical conditions. In July 2019, Elon Musk and Neuralink presented their ‘high-bandwidth’ brain-machine interface system, in which small, flexible electrode threads are packaged into a compact device containing custom chips, designed to be implanted in the user’s brain for medical purposes.

In the military realm, in 2018, the United States’ Defense Advanced Research Projects Agency (DARPA) put out a call for proposals to investigate the potential of nonsurgical brain-machine interfaces to allow soldiers to ‘interact regularly and intuitively with artificially intelligent, semi-autonomous and autonomous systems in a manner currently not possible with conventional interfaces’. DARPA further highlighted the need for these interfaces to be bidirectional – where information is sent both from brain to machine (neural recording) and from machine to brain (neural stimulation) – which will eventually allow machines and humans to learn from each other.
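To make the bidirectional concept more concrete, below is a minimal, purely illustrative sketch of how such a closed recording-stimulation loop might be structured in software. Every name in it (NeuralSample, BidirectionalInterface, decode_intent and so on) is hypothetical and invented for illustration; none corresponds to any real DARPA programme, vendor product or published API.

```python
# Purely illustrative sketch of a bidirectional brain-machine interface loop.
# All class and function names are hypothetical, invented for this example.

import time
from dataclasses import dataclass


@dataclass
class NeuralSample:
    """One window of recorded neural activity (hypothetical format)."""
    timestamp: float
    channels: list  # one value per electrode


class BidirectionalInterface:
    """Couples neural recording (brain -> machine) with
    neural stimulation (machine -> brain)."""

    def record(self) -> NeuralSample:
        # Brain -> machine: read a window of activity from the electrodes.
        # Stubbed out; a real system would stream filtered, decoded signals.
        return NeuralSample(timestamp=time.time(), channels=[0.0] * 64)

    def stimulate(self, feedback: list) -> None:
        # Machine -> brain: write a feedback pattern back to the user.
        # Stubbed out; a real system would drive stimulation hardware.
        pass


def decode_intent(sample: NeuralSample) -> str:
    """Map a window of recorded activity to a candidate command (stub)."""
    return "no-op"


def control_loop(interface: BidirectionalInterface, cycles: int = 5) -> None:
    """One record -> decode -> stimulate cycle per iteration."""
    for _ in range(cycles):
        sample = interface.record()          # brain -> machine (recording)
        command = decode_intent(sample)      # machine-side interpretation
        print(f"decoded command: {command}")
        interface.stimulate(feedback=[0.0])  # machine -> brain (stimulation)


if __name__ == "__main__":
    control_loop(BidirectionalInterface())
```

The point of the sketch is structural: both directions of the loop run continuously, which is what distinguishes a bidirectional interface from a simple recording device.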

This technology may give soldiers and commanders heightened sensory awareness and the ability to process larger volumes of environmental data more quickly, thus enhancing situational awareness. These capabilities will support military decision-making as well as targeting processes.

Neural recording will also enable the collection of vast amounts of data from operations, including visuals, real-time thought processes and emotions. These datasets may be used for feedback and training (including virtual wargaming and machine learning training), as well as for investigatory purposes. Collected data will also feed into research aimed at understanding and predicting human intent from brain signals – a tremendous advantage from a military standpoint.

Legal and ethical considerations

The flip side of these advancements is the responsibilities they impose, the risks and vulnerabilities inherent in the technology, and the legal and ethical considerations they raise.

The primary risk is that users lose control over the technology, especially in a military context; a fail-safe feature is therefore critical so that humans maintain ultimate control over decision-making. Despite the potential benefits of symbiosis between humans and AI, users must have the unconditional ability to override these technologies whenever they believe it appropriate and necessary to do so.

This is important given the significance of human control over targeting, as well as over strategic and operational decision-making. An integrated fail-safe in brain-machine interfaces may in fact allow a greater degree of human control over critical, time-sensitive decision-making. In other words, in the event of an incoming missile alert, while the AI may suggest a specific course of action, users must be able to decide in a timely manner whether or not to execute it.
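As a purely illustrative sketch, and not a description of any fielded system, the pattern described above might look something like the following in software: the machine may only propose an action, nothing executes without explicit and timely human approval, and silence defaults to inaction. All function names and parameters here are hypothetical.

```python
# Illustrative human-in-the-loop fail-safe: the AI proposes, but nothing
# executes without explicit human confirmation before a deadline.
# All names are hypothetical; this is not drawn from any real military system.

import time
from typing import Callable, Optional


def failsafe_execute(
    proposed_action: str,
    get_human_decision: Callable[[], Optional[bool]],
    timeout_seconds: float,
) -> bool:
    """Execute `proposed_action` only if a human approves in time.

    Returns True if the action was executed, False otherwise.
    The safe default on timeout or rejection is *not* to act.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        decision = get_human_decision()  # None = no operator input yet
        if decision is True:
            print(f"Human approved: executing '{proposed_action}'")
            return True
        if decision is False:
            print(f"Human overrode: aborting '{proposed_action}'")
            return False
        time.sleep(0.05)  # poll the operator interface
    print("No human decision in time: defaulting to no action")
    return False


# Usage: the AI suggests a course of action; the human retains the veto.
# Operator input is simulated here by a stub that always rejects.
if __name__ == "__main__":
    failsafe_execute("intercept incoming missile",
                     get_human_decision=lambda: None or False,
                     timeout_seconds=2.0)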

Machines can learn from encoded past experiences and decisions, but humans also rely on gut feelings to make life-and-death decisions. A gut feeling is a human characteristic that is not fully transferable: it draws on both rational and emotional traits, and is linked to the ‘second brain’ and the gut-brain axis, which remain poorly understood. It is nevertheless risky to base decisions solely on gut feelings or solely on conscious analysis. Receiving a comprehensive set of data via an AI-connected brain-machine interface may therefore help users verify and evaluate information in a timely manner and complement their decision-making. These connections and interactions would, however, have to be far better understood than they are today.

Fail-safe features are necessary to ensure compliance with the law, including international humanitarian law and international human rights law. As a baseline, human control must serve to 1) define the areas in which the technology may or may not be trusted, and to what extent, and 2) ensure legal, political and ethical accountability, responsibility and explainability at all times. Legal and ethical considerations must be taken into account from the earliest design and conceptualization stages of these technologies, and oversight must be ensured across the entire manufacturing supply chain.

The second point raises the need to further explore and clarify whether existing national, regional and international legal, political and ethical frameworks are sufficient to cover the development and use of these technologies. For instance, there is value in examining to what extent AI-connected brain-machine interfaces would affect the assessment of the mental element in war crimes, as well as their human rights implications.

In addition, these technologies need to be highly secure and resilient against cyberattacks. Neural recording and neural stimulation will directly affect brain processes in humans, and if an adversary were able to connect to a human brain, steps must be taken to ensure that memory and personality could not be damaged.

Future questions

Military applications of technological progress in neurotechnology are inevitable, and their implications cannot be ignored. There is an urgent need for policymakers to understand these fast-developing neurotechnological capabilities and to develop international standards and best practices – and, if necessary, new and dedicated legal instruments – to frame the use of these technologies.

Considering the opportunities that brain-machine interfaces may present in the realms of security and defence, inclusive, multi-stakeholder discussions and negotiations leading to the development of standards must include the following considerations:

  • What degree of human control would be desirable, at what stage and by whom? To what extent could human users be trusted with their own judgment in decision-making processes?
  • How could algorithmic and human biases, the cyber security and vulnerabilities of these technologies, and the quality of data be factored into these discussions?
  • How can ethical and legal considerations be incorporated into the design stage of these technologies?
  • How can it be ensured that humans cannot be harmed in the process, either inadvertently or deliberately?
  • Is there a need for a dedicated international forum to discuss the military applications of neurotechnology? How could these discussions be integrated into existing international processes related to emerging military applications of technological progress, such as the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems?

Yasmin Afina, Research Assistant, International Security Department.
