Should future wars be fought by killer robots?

This is a republished and updated version of this piece from last year.

Every discussion of robots and warfare will always come back to one, or both, of two science fiction touchstones: Skynet and Asimov.

“Skynet”, the artificial intelligence defence system described in the Terminator films, gains self-awareness and immediately attempts to wipe out humanity. In his robot stories, Isaac Asimov imagines “three laws of robotics”, the first of which instructs every robot: “A robot may not injure a human being or, through inaction, allow a human being to come to harm”. Those, popularly understood, are the options: robot murderers bent on destroying mankind, or pacifistic automatons barred by their programming from hurting humans at all.

If a group of activists with the splendid name “The Campaign to Stop Killer Robots” is to be believed, we are now at a watershed: a decision point, at which we can choose between, crudely speaking, a version of one or other of these two futures.

A special UN meeting in Geneva this week is discussing the use of “lethal autonomous weapons”: battle robots, to you and me. It isn’t all that long ago that this would have been a matter for science fiction, just as the Terminator is, but in recent years it has become an increasingly imminent concern.

The use of drones, or “unmanned combat air vehicles” (UCAVs), in warfare and assassinations has become widespread, if controversial. Two dozen countries are known or believed to use armed unmanned aircraft of varying degrees of sophistication. Britain and America use heavily armed, high-flying drones such as the MQ-9 Reaper, which can stay in the air for 14 hours at a time while carrying more than half a ton of laser-guided bombs and missiles; since Barack Obama became president, one US senator estimates, more than 4,000 people have been killed in American military drone strikes.

The drones, however, have human pilots – often sitting thousands of miles away, using joypads and screens as though they were playing computer games, but nonetheless exercising direct control over the weapons they unleash. Last year, the Northrop Grumman X-47B began a new round of naval testing. Although human operators supervise it, it can fly entirely without human input, and it is the first unmanned aircraft to have landed autonomously on an aircraft carrier – a feat tricky enough for ordinary, manned aircraft.

“The main military advantage of automated weapons over existing drones is that they’re quicker to respond, and therefore could be used in greater numbers, without humans having to control every single one,” says Prof Michael Clarke, the director general of the Royal United Services Institute and a defence adviser to the Government.

The concern is that such weapons, divorced from the human decision-making process, will make killing that much more of an automated process: press a button, and some number of hours later, someone in a distant country will explode, perhaps while surrounded by civilians.

“What we are talking about, however, is fully automated machines that can select targets and kill them without any human intervention,” Noel Sharkey, a professor of artificial intelligence and one of the founders of the Campaign to Stop Killer Robots, told the Telegraph’s Harriet Alexander last year. “And that is something we should all be very worried about.”

“The argument is one of degree,” says Paul Cornish, professor of strategic studies at the University of Exeter. “There’s no such thing as completely automated weapons. They’re built, targeted, launched and given authority by humans, so humans are still in the loop.”

At the moment, all drones still require an active decision to fire. In future, says Prof Clarke, it might be a bit more complicated. “The rules of engagement will most likely be that a target would be defined, and then a particular time-frame and area would be identified where the system would make its own decisions,” he says. “Outside that specific window the robot will not be authorised or allowed to fire. Further, or wider, engagement would have to be re-authorised.”
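To make that concrete, the kind of check Prof Clarke describes might look something like the sketch below. This is purely illustrative Python, not any real weapons system’s code, and every name in it is invented: autonomous engagement is permitted only against a pre-authorised target, inside a pre-authorised time-frame and area, and anything outside that window falls back to human re-authorisation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EngagementWindow:
    # Illustrative rules-of-engagement envelope set by a human commander:
    # which targets may be engaged, during which time-frame, inside which area.
    authorised_targets: set
    start: datetime
    end: datetime
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def may_engage(self, target_id, now, lat, lon):
        # The system may only decide to fire on its own if the target, the time
        # and the position all fall inside the authorised window...
        in_targets = target_id in self.authorised_targets
        in_time = self.start <= now <= self.end
        in_area = (self.lat_min <= lat <= self.lat_max
                   and self.lon_min <= lon <= self.lon_max)
        return in_targets and in_time and in_area
        # ...anything else has to go back to a human for re-authorisation.
```

The point of the sketch is simply that the machine’s “decision” sits inside an envelope drawn entirely by people; widen the envelope and you widen the autonomy.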

In Britain and America, and the rest of the Western world, the use of any weapon at all is carefully limited by military law. “Military lawyers are on hand right at the last minute when bombers and drones fire,” says Prof Cornish. “They’re there to check whether the laws of war, regarding proportionality and discrimination and so on, are suitably observed.”

The legality of the use of weapons is one thing; the morality is another. The question of the ethics of robots is a complicated one: who, in the end, is responsible if a robotic soldier destroys a school instead of a military base? Is it the manufacturer, the designer, the operator? The general in command of the theatre? In the absence of a single, obviously responsible pilot or bomb-aimer or gunner or infantryman, it becomes far harder to make military personnel morally accountable should a missile find the wrong target.

There may, however, be moral advantages to using robots in warfare. Robot soldiers, as The Economist pointed out in 2012 in a discussion of the ethics of automated warfare, will not carry out revenge attacks on civilians, or rape people, or panic in the heat of battle and shoot their allies. As artificial intelligence systems improve, they might be able to distinguish more quickly and reliably between threats and civilians, thereby reducing collateral damage. A similar argument is playing out over the morality, and legality, of robot cars: although they will probably save lives, because they will react faster to avoid crashes and will not drive recklessly, when they do go wrong there will be no one obviously at fault.

Morality, though, may not even come into it. The world is quite some way from entirely autonomous weapons anyway, but Prof Cornish suspects that the military will be extremely wary of using them, for operational rather than ethical reasons.

“My sense, and perhaps it’s a bit old-fashioned, is that military operators wouldn’t want these things out of control,” he says. “You want the remotely controlled delivery of lethality, but you don’t want the thing going off on its own. It could compromise operations, or reveal something you’ve been wanting to keep quiet. I just don’t think the military mind would like the idea of a lethal weapon going off on its own on the battlefield: they’ll want it to be connected by a link of some sort, and if the link breaks they’ll want it to disengage.”
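That fail-safe instinct – keep a live control link, and stand the weapon down the moment it breaks – can be sketched in a few lines. Again, this is an illustrative Python fragment built on assumed names, not a description of any real system: a heartbeat timer that disarms the weapon if the link goes quiet.

```python
import time

LINK_TIMEOUT_SECONDS = 5.0  # illustrative figure: how long a silent link is tolerated

class LinkLossFailSafe:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.armed = False

    def heartbeat(self):
        # Called whenever a message arrives from the human operators.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # If the link has been silent for too long, disengage: hold fire
        # until the link, and human authority, are re-established.
        if time.monotonic() - self.last_heartbeat > LINK_TIMEOUT_SECONDS:
            self.armed = False
        return self.armed
```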

We don’t, certainly for the time being, need to worry about programming a simulacrum of morality into robot soldiers, he says, because military commanders would be unwilling to unleash those robot soldiers without human oversight anyway.

That may change as the weapons become more sophisticated. But their actions will still be limited by military law. That is true, at least, in the West, where these weapons are closest to being developed. “We have to assume the Chinese will have their own version at some point,” says Prof Clarke. “Any breakthrough, especially military breakthroughs, indicates to others that it can be done.” The second and third countries to develop a new weapon system are usually much faster to do so than the first.

“We don’t know whether the Chinese will have the same stringent rules of engagement as Britain and America,” says Clarke. “They last fought in 1979, in Vietnam, and it was a different world, technologically. But if other forms of Chinese behaviour are any guide, they may have a different attitude to the rule of law from ours, to put it mildly.”

The Campaign to Stop Killer Robots, then, might have reason to be worried about the proliferation of automated weapon systems in countries less concerned about the Geneva Conventions. But in the West, automated weapons would be just as subject to the laws of war as all the tools we use already.

Whether those laws are too stringent or not stringent enough is a question for another time. But, says Prof Cornish: “In recent years, the legal oversight of military operations has become so intense that the notion that we could unleash a lethal force with no such oversight is complete nonsense.”

They’re not Asimov’s peaceful robots. But, for legal reasons, they’ll never be Terminator-style automatic murderers either.

Tom Chivers is the Telegraph's Assistant Comment Editor, and first joined the Telegraph in 2007. He writes on science, technology and other arbitrarily selected oddments. He is particularly proud of exposing The Da Vinci Code author Dan Brown's worst sentences to a disbelieving world.
