fubarobfusco comments on Are Deontological Moral Judgments Rationalizations? - Less Wrong

37 Post author: lukeprog 16 August 2011 04:40PM


Comment author: fubarobfusco 17 August 2011 06:00:53AM 14 points

I've sometimes thought of deontological rules as something like a sanity check on utilitarian reasoning.

If, as you are reasoning your way to maximum utility, you come up with a result that ends, "... therefore, I should kill a lot of innocent people," or for that matter "... therefore, I'm justified in scamming people out of their life savings to get the resources I need," the role of deontological rules against murder or cheating is to make you at least stop and think about it really hard. And, almost certainly, find a hole in your reasoning.

It is imaginable — I wouldn't say likely — that there are "universal moral laws" for human beings, which take the following form: "If you come to the conclusion 'Utility is maximized if I murder these innocent people', then it is more likely that your human brain has glitched and failed to reason correctly, than that your conclusion is correct." In other words, the probability of a positive-utility outcome from murder is less than the probability of erroneous reasoning leading to the belief in that outcome.

A consequence of this is that the better a predictor you are, the more actions can be moral for you to take when you conclude they maximize utility. It is imaginable that no human could arrive at the conclusion "I should push that fat guy in front of the trolley" with less than a 50% probability of error, but that some superhuman predictor could.
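The comparison above is essentially Bayes' rule: given that you have concluded "murder maximizes utility," how likely is it that the conclusion is true rather than the product of a reasoning glitch? Here is a minimal sketch; all of the numbers (the base rate at which murder actually maximizes utility, and each reasoner's error rates) are illustrative assumptions, not figures from the comment.

```python
def p_conclusion_correct(base_rate, p_conclude_if_true, p_conclude_if_error):
    """Posterior probability that 'murder maximizes utility' is actually true,
    given that one's reasoning produced that conclusion (Bayes' rule)."""
    p_true_and_concluded = base_rate * p_conclude_if_true
    p_false_but_concluded = (1 - base_rate) * p_conclude_if_error
    return p_true_and_concluded / (p_true_and_concluded + p_false_but_concluded)

# An ordinary human reasoner: glitches produce this conclusion far more often
# than the conclusion is actually true, so the posterior is tiny and the
# deontological rule ("stop and look for the hole in your reasoning") fires.
human = p_conclusion_correct(base_rate=1e-6,
                             p_conclude_if_true=0.9,
                             p_conclude_if_error=1e-3)

# A hypothetical superhuman predictor: spurious conclusions are rarer than
# true positives, so the posterior clears 50%.
predictor = p_conclusion_correct(base_rate=1e-6,
                                 p_conclude_if_true=0.9,
                                 p_conclude_if_error=1e-9)

print(human)      # well below 0.5
print(predictor)  # well above 0.5
```

Under these assumed numbers the human's posterior is under 0.1%, which matches the comment's point: the deontological rule encodes that erroneous reasoning is the more probable explanation.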

Comment author: Eugine_Nier 17 August 2011 07:03:59AM 5 points

It is imaginable — I wouldn't say likely — that there are "universal moral laws" for human beings, which take the following form: "If you come to the conclusion 'Utility is maximized if I murder these innocent people', then it is more likely that your human brain has glitched and failed to reason correctly, than that your conclusion is correct." In other words, the probability of a positive-utility outcome from murder is less than the probability of erroneous reasoning leading to the belief in that outcome.

Obligatory link to relevant sequence.