CronoDAS comments on Consequentialism FAQ - Less Wrong

Post author: Yvain 26 April 2011 01:45AM




Comment author: AlexMennen 26 April 2011 03:56:23AM 7 points [-]

Some criticism that I hope you will find useful:

First of all, you should mention timeless decision theory, or at least superrationality. These concepts are useful for explaining why people's intuition that they should not steal is not horribly misguided, even if the thief cares more about himself and/or would need the goods more than the previous owner does. You touched on this by pointing out that the economy would collapse if everyone stole all the time, but I would suggest being more explicit.

(3.8) I think the best course of action would be to assign equal value to yourself and other people, which seems nicely in accord with there being no objective reason for a moral difference between you.

I take issue with this simply because it is not even remotely similar to the way anyone acts. I'd prefer it if we could just admit that we care more about ourselves than about other people. Sure, utilitarianism says that the right thing to do is to act as if everyone, including oneself, were of equal value, and the world would be a better place if people actually acted this way. But no one does, and endorsing utilitarianism does not usually bring anyone closer to doing so.

(5.31) Desire utilitarianism replaces preferences with desire. The differences are pretty technical and I don't understand all of them, but desire utilitarians sure seem to think their system is better.

Then I would suggest either doing the research or not mentioning it, since this is not critical to the concept of consequentialism. I'm not entirely clear on it either.

(7.4) For example, in coherent extrapolated volition utilitarianism, instead of respecting a specific racist's current preference, we would abstract out the reflective equilibrium of that racist's preferences if ey was well-informed and in philosophical balance. Presumably, at that point ey would no longer be a racist.

But what if ey doesn't? You are right that this situation is a problem for simple preference utilitarianism that can be rectified by some other form of utilitarianism, but your suggested solution leads to a slippery slope: CEV utilitarianism can be used to justify anything you want by claiming that everyone else's moral preferences would be exactly what you want them to be in their CEV. I think the real issue here is that we respect some forms of preferences much more than others. Recall that pleasure utilitarianism (which would be the extreme case of giving zero weight to all but one form of preference) gives the answer we like in this case.

Comment author: CronoDAS 26 April 2011 05:48:24AM *  3 points [-]

(5.31) Desire utilitarianism replaces preferences with desire. The differences are pretty technical and I don't understand all of them, but desire utilitarians sure seem to think their system is better.

Then I would suggest either doing the research or not mentioning it, since this is not critical to the concept of consequentialism. I'm not entirely clear on it either.

Desire utilitarianism doesn't replace preferences with desires, it replaces actions with desires. It's not a consequentialist system; it's actually a type of virtue ethics. When confronted with the "fat man" trolley problem, it concludes that there are good agents that would push the fat man and other good agents that wouldn't. You should probably avoid mentioning it.

Comment author: Yvain 04 May 2011 01:58:40PM *  0 points [-]

Thank you. That makes more sense than the last explanation of it I read.