VoiceOfRa comments on Open Thread, Jul. 27 - Aug 02, 2015 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
See also the website of the (I think) most prominent pressure group in this area: Campaign to Stop Killer Robots.
This came up at the AI Ethics panel at AAAI, and the "outlaws" argument actually seems like a fairly weak practical counterargument in the reference class that the ban proponents think is relevant. International agreements really have reduced to near-zero the usage of chemical warfare and landmines.
The two qualifiers, offensive and autonomous, are both material. If we have anti-rocket flechettes on a tank, it's just not possible to have a human in the loop, because you need to launch them immediately after you detect an incoming rocket, so defensive autonomous weapons are in. Similarly, offensive AI is in: your rifle / drone / etc. can identify targets and aim for you, but the ban argues that there needs to be a person who verifies that the targeting is correct and presses the button (to allow the weapon to fire; the weapon can probably decide the timing). The phrase they use is "meaningful human control."
The idea, I think, is that everyone is safer if nation-states aren't developing autonomous killbots to fight other nation's autonomous killbots. So long as they're more like human-piloted mechs, there are slightly fewer nightmare scenarios involving mad engineers and hackers.
The trouble I had with it was that the underlying principle of "meaningful human control" is an argument I do not buy for livingry, which makes me reluctant to buy it for weaponry, or to endorse weaponry bans whose logic could then be applied to livingry. The ban's proponents seem to implicitly assume that a principle about 'life and death decisions' only affects weaponry, but that's not so at all: one of the other AAAI attendees pointed out that in their donor-organ allocation software, the absence of human control was seen as a plus, because it meant there was no opportunity for corruption of the people making the decision, since those people did not exist. (Of course people were involved at a higher meta level, in writing the software and establishing the principles by which it operates.)
And that's just planning; if we're going to have robot cars or doctors or pilots or so on, we need to accept robots making life and death decisions and relegate 'meaningful human control' to the places where it's helpful. And it seems like we might also want robot police and soldiers.
Disagree. It only seems that way because you are looking at too small a time scale. Every time a sufficiently powerful military breakthrough arrives there are attempts to ban it, or declare using it "dishonorable", or whatever the equivalent is. (Look up the papal bulls against crossbows and gunpowder sometime). This lasts a generation at most, generally until the next major war.
Consider chemical warfare in WWI vs. chemical warfare in WWII. I'm no military historian, but my impression is that it was used in WWI because it was effective; people then realized that mutual use was lose-lose relative to mutual restraint, and it wasn't used in WWII because each side reasonably expected that if it started using it, the other side would as well.
One possibility is that this only works for technologies that are helpful but not transformative. An international campaign to halt the use of guns in warfare would not get very far (as you point out), and it is possible that autonomous military AI is closer to guns than it is to chemical warfare.
Chemical warfare was only effective the first couple of times it was used, i.e., before the gas mask was invented.
Combat efficiency is much reduced when wearing a gas mask.
Moreover, while gas masks for horses do (or did) exist, good luck persuading your horse to wear one. And horses were rather crucial in WWI and still very important in WWII.
We did not see gas used during WWII mostly because of Hitler's aversion to it, and because Germany (mistakenly) believed that the Allies had their own stockpiles of nerve agents and feared their retaliation.
My impression is that chemical weapons were very effective in the Iran-Iraq war (e.g.), despite the gas mask having been invented.
The norm against using nuclear weapons in war is arguably a counterexample, though that depends on precisely how one operationalizes "there are attempts to ban it, or declare using it "dishonorable", or whatever the equivalent is".