This rather serious report should be of interest to LW. It argues that autonomous robotic weapons which can kill a human without an explicit command from a human operator ("Human-Out-Of-The-Loop" weapons) should be banned at the international level.

 

(http://www.hrw.org/reports/2012/11/19/losing-humanity-0)


which can kill a human without an explicit command from a human operator ("Human-Out-Of-The-Loop" weapons)

Like pit-traps?


and landmines, if you're looking for precedent.

Neither land mines nor pit-traps are "autonomous robotic weapons", of course. But speaking of precedent, there are numerous campaigns to ban land mines (e.g. http://www.icbl.org/), for reasons rather similar to those advanced in "The Case against Killer Robots".

Modern naval mines probably qualify as 'robotic' in some sense.

I'm not sure where a heat-seeking missile falls; a human decides to fire it, but after that it's autonomous. How is that different in principle from a robot which is activated by a human and is autonomous afterwards?

And a bullet is out of human control once you've fired it. Where do you draw the line?

Probably the most important feature is the extent to which the human activator can predict the actions of the potentially-robotic weapon.

In the case of a gun, you probably know where the bullet will go, and if you don't, then you probably shouldn't fire it.

In the case of an autonomous robot, you have no clue what it will do in specific situations, and requiring that you don't activate it when you can't predict it means you won't activate it at all.

Okay, that actually seems like quite a good isolation of the correct empirical cluster. Presumably guided missiles fall under the 'not allowed' category there, as you don't know what path they'll follow under surprising circumstances.

The proposal under discussion has poor definitions, but "autonomous robotic weapons which can kill a human without an explicit command from a human operator" is a good start.

That's at least six different grey areas already (autonomous, robotic, weapon, able to kill a human, explicit, human operator).

My guess is that bullets fired from current-generation conventional firearms aren't robotic, and do pass the explicit-command test. That is despite the fact that many firearms discharge unintentionally when dropped; a strict reading would have them fail that test.

Finally, the entire legislation could be replaced by legislation banning war behavior in general, and it would be equally effective.

That is despite the fact that many firearms discharge unintentionally when dropped; a strict reading would have them fail that test.

Is this true? My impression is that almost all modern firearms are designed to make this extremely unlikely.

Extremely unlikely, with a properly designed, maintained, and controlled firearm. A worn-out machine-pistol knockoff can have a sear that consistently drops out when struck in the right spot. There's a continuum there, and a strict enough reading of "explicit command from a human operator" would be that anything that can be fired accidentally crosses the line.

For that matter, runaway fire is a common enough occurrence in belt-fed firearms that learning how to minimize its effects is part of learning to use the weapon. (Heat in the chamber is enough to cause the powder to ignite without the primer being struck by the firing pin; the weapon continues to fire until it runs out of ammunition.)

Finally, the entire legislation could be replaced by legislation banning war behavior in general, and it would be equally effective.

Exactly.

A lot of the concerns here amount to "smart enemies could fool dumb robots into doing awful things", or "generals could easily instruct robots to do awful things" ... but a few amount to "robots can't tell if they are doing awful things or not, because they have no sense of 'awful'. The outcomes that human warriors want from making war are not only military victory, but also not too much awfulness in achieving it; therefore, robots are defective warriors."

A passage closely relevant to a number of LW-ish ideas:

An even more serious problem is that fully autonomous weapons would not possess human qualities necessary to assess an individual’s intentions, an assessment that is key to distinguishing targets. According to philosopher Marcello Guarini and computer scientist Paul Bello, “[i]n a context where we cannot assume that everyone present is a combatant, then we have to figure out who is a combatant and who is not. This frequently requires the attribution of intention.” One way to determine intention is to understand an individual’s emotional state, something that can only be done if the soldier has emotions. Guarini and Bello continue, “A system without emotion … could not predict the emotions or action of others based on its own states because it has no emotional states.” Roboticist Noel Sharkey echoes this argument: “Humans understand one another in a way that machines cannot. Cues can be very subtle, and there are an infinite number of circumstances where lethal force is inappropriate.” For example, a frightened mother may run after her two children and yell at them to stop playing with toy guns near a soldier. A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intentions as harmless, while a fully autonomous weapon might see only a person running toward it and two armed individuals. The former would hold fire, and the latter might launch an attack. Technological fixes could not give fully autonomous weapons the ability to relate to and understand humans that is needed to pick up on such cues.


Guarini and Bello continue, “A system without emotion … could not predict the emotions or action of others based on its own states because it has no emotional states.”

(Source: page 138 of Robot Ethics.)

...so it could predict them using another system! To do arithmetic, humans use their fingers or memorize multiplication tables, but a computer doesn't need either of those. I don't see why it would need emotions to predict emotions either.

As a side note, we are getting better at software recognition of emotions.
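As a rough illustration of that side note, here is a minimal sketch of text-based emotion recognition. It assumes the Hugging Face transformers library and the publicly available j-hartmann/emotion-english-distilroberta-base model; both are assumptions on my part, and any emotion-labelled text classifier would do.

```python
# Minimal sketch: score the emotional content of an utterance with a
# pretrained text-emotion classifier. The model name is an assumption;
# any emotion-labelled text-classification model works the same way.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return scores for every emotion label, not just the best one
)

# One utterance loosely based on the report's mother-and-toy-guns scenario.
results = classifier(["Stop! Those are only toy guns, please don't shoot!"])
for item in sorted(results[0], key=lambda s: s["score"], reverse=True):
    print(f"{item['label']}: {item['score']:.3f}")
```

A classifier like this is obviously crude next to the battlefield scenario in the quoted passage, but it does treat predicting someone's emotional state as a pattern-recognition problem, without the predictor having any emotional states of its own.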

Similarly, there was that time US soldiers fired on a camera crew, even laughing at them for being incompetent terrorists as they ran around. Or the time they just captured and tortured them, offering only a poor explanation.

http://www.reuters.com/article/2010/04/06/us-iraq-usa-journalists-idUSTRE6344FW20100406

http://www.guardian.co.uk/media/2004/jan/13/usnews.iraq

A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intentions as harmless

It's unlikely that a human would actually do this. They would have been trained to react quickly and not to take risks or chances with the lives of their fellow soldiers. Reports from Cold War-era wars support this view. The (possibly unreliable) local reports from current war-torn regions also support humans not making these distinctions, or at least not making them when things are happening quickly.

You wouldn't use fully autonomous weapons systems in that situation, for the same reason that you wouldn't be using air-burst flechettes. It's not the right tool to do what you intend to do.

“[i]n a context where we cannot assume that everyone present is a combatant, then we have to figure out who is a combatant and who is not.

Funny, our enemies don't seem to have that problem. :P

If you are referring to terrorists, they generally claim that democracy or whatever makes us all complicit, IIRC.

It's not just terrorists. Stalin, Saddam Hussein, Genghis Khan, and many of the ancient Romans didn't have that problem either. In the olden days, you didn't bother trying to tell insurgents and civilians apart; you just massacred the population until there wasn't anyone left who was willing to fight you.

Telling the difference between combatants and non-combatants only matters if you care whether or not non-combatants are killed.

Well, it persuaded them ...

It requires some confusion as to the purpose of refraining from killing civilians, I think.


I wanted to paste all four of Human Rights Watch's recommendations here explicitly, from page 10 of their report, since that seemed like a helpful thing to have for reference and debate:

To All States:

Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.

Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.

Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.

To Roboticists and Others Involved in the Development of Robotic Weapons:

Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.

I suspect people are mostly just scared of killer robots because they're new. At least, they're scared of the new ones. Land mines are generally considered a problem largely because they stay there after the war is over, not because they kill indiscriminately during the war, though that seems to be a problem too. I haven't heard of naval mines being a problem at all.

I haven't heard of naval mines being a problem at all

Naval mines aren't intended to kill by surprise, but to deny access to a shipping lane, shut down a harbor, or the like — nations declare when they are mining a stretch of water.

Also, kids don't run around in deep-water harbors and shipping lanes the way they do in fields that might be land-mined. Access to places that might have naval mines in them is largely restricted to vessels with adult, professional crew.

The report doesn't even mention one of the largest long-run problems if autonomous robotic weapons proliferate: rebellion, or rather the lack of it. Numerous dictators and generals have found themselves looking at the wrong end of their soldiers' weapons because orders to slaughter their own populace (for example) were unacceptable to those who would have to carry them out. That might end if robotic soldiers come to hold more firepower than flesh-and-blood ones.

It's possible in principle, of course, to build in safeguards, and safeguards for the safeguards (safeguards against reprogramming). But I'm not at all confident that the problem will get sufficient consideration.