Trying to summarize here:
The open letter says: "If we allow autonomous weapons, a global arms race will make them much cheaper and much more easily available to terrorists, dictators etc. We want to prevent this, so we propose to outlaw autonomous weapons."
The author of the article argues that the technology will be developed either way and will be cheaply available, and then goes on to say that autonomous weapons would reduce casualties in war.
I suspect that most people agree that (if used ethically) autonomous weapons reduce casualties. The actual question is how much (more) damage someone without qualms about ethics can do with autonomous weapons, and whether we can implement policies to minimize the availability of autonomous weapons to people we don't want to have them.
I think the main problem with this whole discussion was already mentioned elsewhere: robotics and AI experts aren't experts on politics, and don't know what the actual effects of an autonomous weapons ban would be.
I suspect that most people agree that (if used ethically) autonomous weapons reduce casualties.
What does "if used ethically" mean?
This is a bit like the debate around tasers. Tasers seem like a good idea because they allow police officers to use less force. In reality, in nearly every case where an officer would have used a real gun in the past, they still use a real gun; the taser shots come in addition.
...The actual question is, how much (more) damage can someone without qualms about ethics do with autonomous weapons, and can we implement policies to minimize the availability of autonomous weapons to people we don't want to have them.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.