The trolley problem
In 2009, a pair of computer scientists published a paper enabling computers to behave like humans on the trolley problem (PDF here). They developed a logic that a computer could use to justify not pushing one person onto the tracks in order to save five other people. They described this feat as showing "how moral decisions can be drawn computationally by using prospective logic programs."
I would describe it as devoting a lot of time and effort to crippling a reasoning system by encoding human irrationality into its logic.
Which view is correct?
Dust specks
Eliezer argued that we should prefer 1 person being tortured for 50 years over 3^^^3 people each getting a barely-noticeable dust speck in their eye once. Most people choose the many dust specks over the torture. Some people argued that "human values" include having a utility aggregation function that rounds tiny (absolute-value) utilities to zero, thus giving the "dust specks" answer. No, Eliezer said; this was an error in human reasoning. Is it an error, or a value?
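The disagreement can be made concrete as a toy sketch. This is my own illustration, not anything from Eliezer's post: the numbers are arbitrary placeholders (3^^^3 overflows any machine number, so a large stand-in is used), and the two aggregation functions simply formalize the two positions.

```python
TORTURE_DISUTILITY = -1e9   # hypothetical disutility of 50 years of torture
SPECK_DISUTILITY = -1e-6    # hypothetical disutility of one dust speck
NUM_SPECKS = 1e30           # stand-in for 3^^^3, which exceeds any float

def linear_aggregate(per_person, n):
    """Straight summation: every disutility counts, however small."""
    return per_person * n

def thresholded_aggregate(per_person, n, epsilon=1e-3):
    """Rounds per-person utilities below epsilon in absolute value to zero."""
    return 0.0 if abs(per_person) < epsilon else per_person * n

# Linear aggregation prefers the torture: the specks sum to something far worse.
assert linear_aggregate(SPECK_DISUTILITY, NUM_SPECKS) < TORTURE_DISUTILITY

# Thresholded aggregation prefers the specks: each one rounds to zero.
assert thresholded_aggregate(SPECK_DISUTILITY, NUM_SPECKS) > TORTURE_DISUTILITY
```

The whole dispute is over which aggregation function belongs in the agent: is the threshold a value to preserve, or a bug to patch out?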
Sex vs. punishment
In Crime and punishment, I argued that people want to punish criminals, even if there is a painless, less-costly way to prevent crime. This means that people value punishing criminals. This value may have evolved to accomplish the social goal of reducing crime. Most readers agreed that, since we can deduce this underlying reason, and accomplish it more effectively through reasoning, preferring to punish criminals is an error in judgement.
Most people want to have sex. This value evolved to accomplish the goal of reproducing. Since we can deduce this underlying reason, and accomplish it more efficiently than by going out to bars every evening for ten years, is this desire for sex an error in judgement that we should erase?
The problem for Friendly AI
Until you come up with a procedure for determining, in general, when something is a value and when it is an error, there is no point in trying to design artificial intelligences that encode human "values".
(P.S. - I think that necessary, but not sufficient, preconditions for developing such a procedure, are to agree that only utilitarian ethics are valid, and to agree on an aggregation function.)
The principle of double effect is interesting:
The distinction to me looks something like the difference between
"Take action -> one dies, five live" and "Kill one -> five live"
Where the salient difference is whether the act is morally permissible on its own. So a morally neutral act like flipping a switch allows the person to calculate the moral worth of one life vs five lives, but a morally wrong action like pushing a man in front of a trolley somehow screens off that moral calculation for most people.
I don't put much stock in the "unconsciously convinced of our own fallibility" argument, as thakil (edit: and rwallace) presented below - I actually feel this is a case of our social preservation instincts overriding our biological/genetic/species preservation instincts. That is, murdering someone is so socially inexcusable that we have evolved to instinctively avoid murdering people - or doing anything that is close enough to count as murder in the eyes of our tribe.
And when a variation of the trolley problem is presented which triggers our "this is going to look like murder" instinct, we try to alter the calculation's outcome¹ or reject the calculation entirely².
¹ I have noticed that people only present mitigating circumstances ("pushing the fat man might not work", "I might not be able to physically push the fat man, especially if he resists", "the fat man might push me", and so on) when the situation feels impermissible. They rarely bring up these problems in situations where it doesn't feel like murder.
² Sometimes by rejecting utilitarianism completely, a la anti-epistemology
So I think my position on this matter is that we have a procedure for determining when something is a value and a bug, it's called utilitarianism, and unfortunately the human brain has some crippling hardware flaws that cause the procedure to often fail to output the correct answer.
Is evolution fast enough to have evolved this instinct in the past 4000 years? IIRC, anthropologists have found murder was the most common cause of death for men in some primitive tribes. There can't have been a strong instinct against murder in tribal days, because people did it frequently.