There are a lot of explanations of consequentialism and utilitarianism out there, but not a lot of persuasive essays trying to convert people. I would like to fill that gap with a pro-consequentialist FAQ. The target audience is people who are intelligent but may not have a strong philosophy background or have thought about this matter much before (i.e. it's not intended to solve every single problem or be up to the usual standards of discussion on LW).
I have a draft up at http://www.raikoth.net/consequentialism.html (yes, I have since realized the background is horrible, and changing it is on my list of things to do). Feedback would be appreciated, especially from non-consequentialists and non-philosophers since they're the target audience.
Some criticism that I hope you will find useful:
First of all, you should mention timeless decision theory, or at least superrationality. These concepts are useful for explaining why people's intuition that they should not steal is not horribly misguided, even if the thief cares about himself more and/or would need the stolen goods more than the previous owner. You touched on this by pointing out that the economy would collapse if everyone stole all the time, but I would suggest being more explicit.
I take issue with this simply because it is not even remotely similar to the way anyone acts. I'd prefer it if we could just admit that we care more about ourselves than about other people. Sure, utilitarianism says that the right thing to do would be to act as if everyone, including oneself, is of equal value, and the world would be a better place if people actually acted this way. But no one does, and endorsing utilitarianism does not usually bring people any closer.
Then I would suggest either doing the research or not mentioning it, since this is not critical to the concept of consequentialism. I'm not entirely clear on it either.
But what if he doesn't? You are right that this situation is a problem for simple preference utilitarianism that can be rectified by some other form of utilitarianism, but your suggested solution leads to a slippery slope: with CEV utilitarianism you can justify anything you want by claiming that everyone else's moral preferences, in their CEV, would be exactly what you want them to be. I think the real issue here is that we respect some forms of preferences much more than others. Recall that pleasure utilitarianism (which would be the extreme case of giving 0 weight to all but one form of preference) gives the answer we like in this case.
Very strongly disagree, and not just because I'm sceptical about both. The article is supposed to be about consequentialism, ...