There are a lot of explanations of consequentialism and utilitarianism out there, but not a lot of persuasive essays trying to convert people. I would like to fill that gap with a pro-consequentialist FAQ. The target audience is people who are intelligent but may not have a strong philosophy background or thought much about this matter before (i.e., it's not intended to solve every single problem or be up to the usual standards of discussion on LW).
I have a draft up at http://www.raikoth.net/consequentialism.html (yes, I have since realized the background is horrible, and changing it is on my list of things to do). Feedback would be appreciated, especially from non-consequentialists and non-philosophers since they're the target audience.
Okay, summary of things I've learned I should change from this feedback:
Fix dead links (I think OpenOffice is muddling open quotes and close quotes again)
Add a table of contents/navigation.
Stress REALLY REALLY REALLY HARD that this is meant to be an introduction and that there's much more stuff like game theory and decision theory that is necessary for a full understanding of utilitarianism.
Change phlogiston metaphor in response to Vladimir_M's comment.
Remove reference to Eliezer in "warm fuzzies" section.
Rephrase "We have procedures in place for violating heuristics" to mention that it's patchwork and we still don't have elegant rules for this sort of thing.
In part about using utilitarianism to set policy, stress involvement of clever tools like prediction markets to lessen the immediate reaction that I'm obviously flaming mad.
Rephrase the gladiatorial games example to be clearer.
Possibly change title to "Rarely Asked Questions" or "Never Asked Questions" or just keep the abbreviation the same but expand it to "Formatted Answers and Questions"
Remove reference to desire utilitarianism until I'm sure I understand it.
Change 'most consequentialisms give similar results' to 'most popular consequentialisms...'
Change bit about axiology collapsing distinctions.
Consequentialism chooses best state of world ---> consequentialism chooses better world-state of two actions.
Add to the part about the fat man that although the example correctly teaches consequentialist reasoning, game-theory/decision-theory considerations might change the actual correct answer.