Here is a small edit:
In essence I think there are four broad reasons why hypothetically we might think it right to be wary of industrial robots: first, because they work well; second, because in other ways they don't work well; third, because they open up new scope for crime; and fourth, because they might be inherently unethical.
Every argument for and against industrial robots applies to military robots, except that industrial robots affect more people on an ongoing basis (redundancy through automation), while (barring a Terminator future) military robots affect fewer people for shorter periods.
Military robots and industrial robots are both capable of going horribly wrong. Military robots, however, can also go horribly right: they are designed to cause large amounts of damage, which makes it far more likely that they will cause large amounts of damage in inconvenient ways. Industrial robots can, and occasionally do, cause large amounts of damage, but it is much less likely.
Also, the argument that military robots could commit atrocities that human soldiers would not has no analogue for industrial robots; industry is a much less ethically gray area...
Peter at the Conscious Entities blog wrote an essay on the problems with using autonomous robots in combat, in which he attempts to articulate some general principles that would allow them to be used ethically. He says:
Unpacking this a little: autonomous robots will change the character of war and make it easier for many parties to wage; they can be expected to malfunction in very serious ways in especially complex and open-ended situations; they might be repurposed for crime; and, for various reasons, they make the ethics surrounding war even more dubious.
He even takes a stab at laying out restrictive principles that would help mitigate some of the dangers of using autonomous robots:
Although he is not an expert in the field, I (also a non-expert) find his analysis capable and thorough, though I spotted some possible flaws. I mention it here at LessWrong because, while we may be decades away from superintelligent AI, work on AI risk and machine ethics will become especially important very soon, as drones, robots, and other non-human combatants become more prevalent on battlefields around the world.
Switching gears a bit, Massimo Pigliucci of Rationally Speaking fame lays out some common theories of truth and the problems facing each one. If you've never heard of Charles Sanders Peirce and wouldn't know a verificationist account of truth if it hit you in the face, Massimo's article could be a good place to start getting some familiarity. It seems relevant because there has been some work on epistemology in these parts recently. And, as Massimo says:
This matters for anyone who wants to know how things are, but it is even more urgent for anyone who would create a truth-seeking artificial mind.