This rather serious report should be of interest to LW. It argues that autonomous robotic weapons that can kill a human without an explicit command from a human operator ("Human-Out-Of-The-Loop" weapons) should be banned at the international level.
(http://www.hrw.org/reports/2012/11/19/losing-humanity-0)
And a bullet is out of human control once you've fired it. Where do you draw the line?
Probably the most important factor is the extent to which the human activating the weapon can predict its actions.
In the case of a gun, you probably know where the bullet will go, and if you don't, then you probably shouldn't fire it.
In the case of an autonomous robot, you have little idea what it will do in specific situations, and requiring that you not activate it when you can't predict its behavior means you will never activate it at all.