There are a lot of explanations of consequentialism and utilitarianism out there, but not a lot of persuasive essays trying to convert people. I would like to fill that gap with a pro-consequentialist FAQ. The target audience is people who are intelligent but may not have a strong philosophy background or thought about this matter much before (i.e., it's not intended to solve every single problem or be up to the usual standards of discussion on LW).
I have a draft up at http://www.raikoth.net/consequentialism.html (yes, I have since realized the background is horrible, and changing it is on my list of things to do). Feedback would be appreciated, especially from non-consequentialists and non-philosophers since they're the target audience.
Each of these issues could be the subject of a separate lengthy discussion, but I'll try to address them as succinctly as possible:
Re: phlogiston. Yes, Eliezer's account is inaccurate, though it seems you have inadvertently compounded the inaccuracy. Generally, one recurring problem in the writings of EY (and various other LW contributors) is that they're often too quick to proclaim various beliefs and actions silly and irrational, without adequate fact-checking and analysis.
Re: interpersonal utility aggregation/comparison. I don't think you can handwave this away -- it's a fundamental issue on which everything hinges. For comparison, imagine someone saying that your consequentialism is wrong because it's contrary to God's commands, and when you ask how we know that God exists and what his commands are, they handwave it by saying that theologians have some ideas on how to answer these questions. In fact, your appeal to authority is worse in an important sense, since people are well aware that theologians are in disagreement on these issues and have nothing like definite unbiased answers backed by evidence, whereas your answer will leave many people thinking falsely that it's a well-understood issue where experts can provide adequate answers.
Re: economists and statisticians. Yes, nowadays it's hard to deny that central planning was a disaster after it crumbled spectacularly everywhere, but read what they were saying before that. Academics are just humans, and if an ideology says that the world is a chaotic inefficient mess and experts like them should be put in charge instead, well, it will be hard for them to resist its pull. Nowadays this folly is finally buried, but myriad other ones along similar lines are actively being pursued, whose only redeeming value is that they are not as destructive in the short to medium run. (They still make the world uglier and more dysfunctional, and life more joyless and burdensome, in countless ways.) Generally, the idea that you can put experts in charge and expect that their standards of expertise won't be superseded by considerations of power and status is naively utopian.
Re: procedures in place for violating heuristics. My problem is not with the lack of elegant philosophical rules. On the contrary, my objections are purely practical. The world is complicated, and the law of unintended consequences is merciless and unforgiving. What's more, humans are scarily good at coming up with seemingly airtight arguments that are in fact pure rationalizations or expressions of intellectual vanity. So, yes, the heuristics must be violated sometimes when the stakes are high enough, but given these realistic limitations, I think you're greatly overestimating both our ability to identify such situations reliably and the prudence of attempting it when the stakes are less than enormous.
Re: Section 7. Basically, you don't take the least convenient possible world into account. In this case, the LCPW is considering the most awful thing imaginable, assuming that enough people assign it positive enough value that the scales tip in their favor, and then giving a clear answer whether you bite the bullet. Anything less is skirting around the real problem.
Re: welfare of some more than others. I'm confused by your position: are you actually biting the bullet that caring about some people more than others is immoral? I don't understand why you think it's weird to ask such a question, since utility maximization is at least prima facie in conflict with both egoism and any sort of preferential altruism, both of which are fundamental to human nature, so it's unclear how you can resolve this essential conflict. In any case, this issue is important and fundamental enough that it definitely should be addressed in your FAQ.
Re: game theory and the thought process. The trouble is that consequentialism, or at least your approach to it, encourages thought processes leading to reckless action based on seemingly sophisticated and logical, but in reality sorely inadequate models and arguments. For example, the idea that you can assess the real-world issue of mass immigration with spherical-cow models like the one to which you link approvingly is every bit as delusional as the idea -- formerly as popular among economists as models like this one are nowadays -- that you can use their sophisticated models to plan the economy centrally with results far superior to those nasty and messy markets.
General summary: I think your FAQ should at the very least include some discussion of (2) and (6), since these are absolutely fundamental problems. Also, I think you should research more thoroughly the concrete examples you use. If you've taken the time to write this FAQ, surely you don't want people dismissing it because parts of it are inaccurate, even if this isn't relevant to the main point you're making.
Regarding the other issues, most of them revolve around the general issues of practical applicability of consequentialist ideas, the law of unintended consequences (of which game-theoretic complications are just one special case), the reliability of experts when they are in positions where their ideas matter in terms of power, status, and wealth, etc. However you choose to deal with them, I think that even in the most basic discussion of this topic, they deserve more concern than your present FAQ gives them.
Okay, thank you.
I will replace the phlogiston section with something else, maybe along the lines of the example of a medicine putting someone to sleep because it has a "dormitive potency".
I agree with you that there are lots of complex and messy calculations that stand between consequentialism and correct results, and that at best these are difficult and at worst they are not humanly feasible. However, this idea seems to me fundamentally consequentialist - to make this objection, one starts by assuming consequentialist principles, but then saying...