Humans can make better ethical decisions than robots currently can.
This is not obvious. Many's the innocent who has been killed by some tense soldier with his finger on the trigger of a loaded weapon, who didn't make an ethical decision at all. He just reacted to movement in the corner of his eye. If an ethical decision was made, it was made not at the point of killing, but at the point of deploying the soldier, with that armament and training, to that area - and this decision will not be made by the robots themselves for some time to come.
If you don't like machine guns, how about minefields? The difference between a killer robot and a minefield seems pretty minuscule to me; one moves around, the other doesn't.
Your mistake is in identifying pulling the trigger as the ethically important moment.
Peter at the Conscious Entities blog wrote an essay on the problems with using autonomous robots in combat, attempting to articulate some general principles that would allow them to be used ethically. He says:
Unpacking this a little: autonomous robots will change the character of war and make it easier for many actors to wage; they can be expected to malfunction in especially complex and open-ended situations, with very serious consequences; they might be re-purposed for crime; and, for various reasons, they make the ethics surrounding war even more dubious.
He even takes a stab at laying out restrictive principles which will help mitigate some of the danger in utilizing autonomous robots:
Though he is a non-expert in the field, I (also a non-expert) find his analysis capable and thorough, despite spotting some possible flaws. I mention it here at LessWrong because, while we may be decades away from superintelligent AI, work in AI risk and machine ethics will become especially important very soon, as drones, robots, and other non-human combatants become more prevalent on battlefields all over the world.
Switching gears a bit, Massimo Pigliucci of Rationally Speaking fame lays out some common theories of truth and the problems facing each one. If you've never heard of Charles Sanders Peirce and wouldn't know a verificationist account of truth if it hit you in the face, Massimo's article could be a good place to start getting some familiarity. It seems relevant because there has been some work on epistemology in these parts recently. And, as Massimo says:
This matters for anyone who wants to know how things are, but is even more urgent for one who would create a truth-seeking artificial mind.