This rather serious report should be of interest to LW. It argues that autonomous robotic weapons that can kill a human without an explicit command from a human operator ("Human-Out-Of-The-Loop" weapons) should be banned at the international level.
(http://www.hrw.org/reports/2012/11/19/losing-humanity-0)
A lot of the concerns here amount to "smart enemies could fool dumb robots into doing awful things", or "generals could easily instruct robots to do awful things" ... but a few amount to "robots can't tell if they are doing awful things or not, because they have no sense of 'awful'. The outcomes that human warriors want from making war are not only military victory, but also not too much awfulness in achieving it; therefore, robots are defective warriors."
A passage closely relevant to a number of LW-ish ideas:
It's unlikely that a human would actually do this: humans are trained to react quickly and not to take chances with the lives of their fellow soldiers. Reports from Cold War-era conflicts support this view. The (possibly unreliable) local reports from current war-torn regions also suggest that humans don't make these distinctions, or at least don't make them when events are unfolding quickly.