There are a lot of explanations of consequentialism and utilitarianism out there, but not many persuasive essays trying to convert people. I would like to fill that gap with a pro-consequentialist FAQ. The target audience is people who are intelligent but who may not have a strong philosophy background or have thought about the matter much before (i.e. it's not intended to solve every single problem or be up to the usual standards of discussion on LW).

I have a draft up at http://www.raikoth.net/consequentialism.html (yes, I have since realized the background is horrible, and changing it is on my list of things to do). Feedback would be appreciated, especially from non-consequentialists and non-philosophers since they're the target audience.


OK, I've read the whole FAQ. Clearly, a really detailed critique would have to be given at similar length. Therefore, here is just a sketch of the problems I see with your exposition.

For a start, you use several invalid examples, or at least controversial examples that you incorrectly present as clear-cut. For example, the phlogiston theory was nothing like the silly strawman you present. It was a falsifiable scientific theory that was abandoned because it was eventually falsified (when it was discovered that burning stuff adds mass due to oxidation, rather than losing mass due to escaped phlogiston). It was certainly a reductionist theory -- it attempted to reduce fire (which itself has different manifestations) and the human and animal metabolism to the same underlying physical process. (Google "Becher-Stahl theory".) Or, at another place, you present the issue of "opposing condoms" as a clear-cut case of "a horrendous decision" from a consequentialist perspective -- although in reality the question is far less clear.

Otherwise, up to Section 4, your argumentation is passable. But then it goes completely off the rails. I'll list just a few main issues... (read more)

9Scott Alexander13y
Phlogiston: my only knowledge of the theory is Eliezer's posts on it. Do Eliezer's posts make the same mistake, or am I misunderstanding those posts?

Trolley problem: Agreed about Schelling points of interactions between people. What I am trying to do is not make a case for pushing people in hypothetical trolley problems, but to show that certain arguments against doing so are wrong. I think I returned to some of the complicating factors later on, although I didn't go quite so deep as to mention Schelling points by name. I'll look through it again and make sure I've covered that to at least the low level that would be expected in an introductory argument like this.

Aggregating interpersonal utilities: Admitted that I handwave this away by saying "Economists have some ideas on how to do this". The FAQ was never meant to get technical, only provide an introduction to the subject. Because it is already 25 pages long I don't want to go that deep, although I should definitely make it much clearer that these topics exist.

Procedures in place for violating heuristics: By this I mean that we have laws that sometimes supersede certain rights. For one example, even though we have a right to free speech, we also have a law against hate speech. Even though we have a right to property, we also have laws of eminent domain when one piece of property is blocking construction of a railway or something. Would it be proper to rephrase your objection as "We don't have a single elegant philosophical rule for deciding when it is or isn't okay to violate heuristics"?

Parties pointing out natural rights are at stake: In a deontological system, these conflicts are not solvable even in principle: we simply don't know how to decide between two different rights and the only hope is to refer it to politicians or the electorate or philosophers. In a consequentialist system it's certainly possible to disagree, and clever arguers can come up with models in their favor, but it's possible to de

Each of these issues could be the subject of a separate lengthy discussion, but I'll try to address them as succinctly as possible:

  1. Re: phlogiston. Yes, Eliezer's account is inaccurate, though it seems like you have inadvertently made even more out of it. Generally, one recurring problem in the writings of EY (and various other LW contributors) is that they're often too quick to proclaim various beliefs and actions as silly and irrational, without adequate fact-checking and analysis.

  2. Re: interpersonal utility aggregation/comparison. I don't think you can handwave this away -- it's a fundamental issue on which everything hinges. For comparison, imagine someone saying that your consequentialism is wrong because it's contrary to God's commands, and when you ask how we know that God exists and what his commands are, they handwave it by saying that theologians have some ideas on how to answer these questions. In fact, your appeal to authority is worse in an important sense, since people are well aware that theologians are in disagreement on these issues and have nothing like definite unbiased answers backed by evidence, whereas your answer will leave many people thinking falsely that

... (read more)
5Scott Alexander13y
Okay, thank you. I will replace the phlogiston section with something else, maybe along the lines of the example of a medicine putting someone to sleep because it has a "dormitive potency".

I agree with you that there are lots of complex and messy calculations that stand between consequentialism and correct results, and that at best these are difficult and at worst they are not humanly feasible. However, this idea seems to me fundamentally consequentialist - to make this objection, one starts by assuming consequentialist principles but then says they can't be put into action, and so we should retreat from pure consequentialism on consequentialist grounds. The target audience of this FAQ is people who are not even at this level yet - people who don't even understand that you need to argue against certain "consequentialist" ideas on consequentialist grounds, but think instead that they can be dismissed by definition because consequences don't matter. Someone who accepts consequentialism on a base level but then retreats from it on a higher level is already better informed than the people I am aiming this FAQ at. I will make this clearer.

This gets into the political side of things as well. I still don't understand why you think consequentialism implies or even suggests centralized economic planning when we both agree centralized economic planning would have bad consequences. Certain decisions have to be made, and making them on consequentialist grounds will produce the best results - even if those consequentialist grounds are "never give the government the power to make these decisions because they will screw them up and that will have bad consequences". I continue to think prediction markets allow something slightly more interesting than that, and I think if you disagree we can resolve that disagreement only on consequentialist grounds - e.g. would a government that tried to intervene where prediction markets recommended intervention create better consequences than one t

However, this idea seems to me fundamentally consequentialist - to make this objection, one starts by assuming consequentialist principles but then says they can't be put into action, and so we should retreat from pure consequentialism on consequentialist grounds.

Fair enough. Though I can grant this only for consequentialism in general, not utilitarianism -- unless you have a solution to the fundamental problem of interpersonal utility comparison and aggregation. (In which case I'd be extremely curious to hear it.)

I still don't understand why you think consequentialism implies or even suggests centralized economic planning when we both agree centralized economic planning would have bad consequences.

I gave it as a historical example of a once wildly popular bad idea that was a product of consequentialist thinking. Of course, as you point out, that was an instance of flawed consequentialist thinking, since the consequences were in fact awful. The problem however is that these same patterns of thinking are by no means dead and gone -- it is only that some of their particular instances have been so decisively discredited in practice that nobody serious supports them any more... (read more)

0sark13y
I unfortunately don't get the main point :( Could you elaborate on or at least provide a reference for how a consideration of Schelling points would suggest that we shouldn't push the fat man?

This essay by David Friedman is probably the best treatment of the subject of Schelling points in human relations:
http://www.daviddfriedman.com/Academic/Property/Property.html

Applying these insights to the fat man/trolley problem, we see that the horrible thing about pushing the man is that it transgresses the gravest and most terrible Schelling point of all: the one that defines unprovoked deadly assault, whose violation is understood to give the other party the licence to kill the violator in self-defense. Normally, humans see such crucial Schelling points as sacrosanct. They are considered violable, if at all, only if the consequentialist scales are loaded to a far more extreme degree than in the common trolley problem formulations. Even in such extreme cases, the act will likely cause the person who performs it serious psychological damage, probably an artifact of an additional commitment not to violate these points, which may also serve as a safeguard against rationalizations.

Now, the utilitarian may reply that this is just human bias, an unfortunate artifact of evolutionary psychology, and we’d all be better off if people instead made decisions according to pure utilitarian calculus. However, even ignoring all th... (read more)

2utilitymonster13y
Can you explain why this analysis renders directing the trolley away from the five and toward the one permissible?

The switch example is more difficult to analyze in terms of the intuitions it evokes. I would guess that the principle of double effect captures an important aspect of what's going on, though I'm not sure how exactly. I don't claim to have anything close to a complete theory of human moral intuitions.

In any case, the fact that someone who flipped the switch appears much less (if at all) bad compared to someone who pushed the fat man does suggest strongly that there is some important game-theoretic issue involved, or otherwise we probably wouldn't have evolved such an intuition (either culturally or genetically). In my view, this should be the starting point for studying these problems, with humble recognition that we are still largely ignorant about how humans actually manage to cooperate and coordinate their actions, instead of naive scoffing at how supposedly innumerate and inconsistent our intuitions are.

0sark13y
Thanks! That makes sense.

I like it, but stop using "ey". For god's sake, just use "they".

I reluctantly agree. I like Spivak pronouns, but since most people haven't even heard of them, using them probably makes your FAQ less effective for most people.

6ArisKatsaris13y
Seconded. I strongly dislike Spivak pronouns. Use "they".
3Scott Alexander13y
I agree that "ey" is annoying and distracting, but I feel like someone's got to be an early adopter or else it will never stop being annoying and distracting.

I know where you're coming from, but "they" is already the world's gender-neutral third person pronoun of choice, so why pick a different one? Even if it wasn't, you've got to pick your battles.

3Emile13y
Note that these first show up in the section on signaling. Later on, there's a criticism of Deontology (using Rules as the final arbiter of what's right) by appealing to Rules; and again still later on. Hmm.

Some "first impressions" feedback: though it has a long tradition in geekdom, and occasionally works well, I for one find the fake-FAQ format extremely offputting. These aren't "frequently" asked questions - they are your questions, and not so much questions as lead-ins to various parts of an essay.

I'd be more interested if you started off with a concise definition of what you mean by "consequentialism". (If you were truly writing a FAQ, that would be the most frequently asked question of all). As it is, the essay starts losing me by the time it gets to "part one" - I skipped ahead to see how long I should expect to spend on preliminaries before getting to the heart of the matter.

(Usually, when you feel obliged to make a self-deprecating excuse such as "Sorry, I fell asleep several pages back" in your own writing, there's a structure issue that you want to correct urgently.)

9NihilCredo13y
Myself, I found the fake-FAQ format to work pretty well, since it's a relatively faithful recreation of Internet debates on morality/politics/whatever.
6[anonymous]13y
I think the fake-FAQ format is good when you can use it to skip to the interesting things. I wouldn't read an essay but if I could just read two answers that interest me, I might read the rest too. This being said, in the Cons-FAQ a lot of questions refer to previous questions which of course completely destroys this advantage.
1sark13y
Fake-FAQs can be a method of misrepresenting arguments against your viewpoint. Like: "Check out all these silly arguments anti-consequentialists frequently use". Just an example, I'm not saying Yvain is doing this.
1kpreid13y
I don't care whether it's in the format of a FAQ, but don't call it a FAQ if the questions are not frequently asked.
4[anonymous]13y
I've long had the suspicion that many FAQs aren't really frequently asked.
0endoself13y
I have only read his anti-libertarian FAQ but the concerns mentioned in the questions did seem to be typical of those that would be asked.

I think your analysis of the 'gladiatorial objection' misses something:

I hope I'm not caricaturing you too much if I condense your rebuttal as follows: "People wouldn't really enjoy watching gladiators fight to the death. In fact, they'd be sickened and outraged. Therefore, utilitarianism does not endorse gladiatorial games after all."

But there's a problem here: If the negative reaction to gladiatorial games is itself partly due to analyzing those games in utilitarian terms then we have a feedback loop.

Games are outrageous --> decrease utility --> are outrageous --> etc.

But this could just as well be a 'virtuous circle':

Games are noble --> increase utility --> are noble --> etc.

If we started off with a society like that of ancient Rome, couldn't it be that the existence of gladiatorial games is just as 'stable' (with respect to the utilitarian calculus) as their non-existence in our own society?

Couldn't it be that we came to regard such bloodsports as being 'immoral' for independent, non-utilitarian reasons*? And then once this new moral zeitgeist became prevalent, utilitarians could come along and say "Aha! Far from being 'fun', just look at how m... (read more)

  • Sadly my knowledge of history is too meagre to venture an account of how this actually happened.

Well, we have Christianity to blame for the decline of gladiatorial games.

Incidentally, now that we know Christianity to be false and thus gladiatorial games were banned under false pretenses, does recursive consistency require us to re-examine whether they are a good idea?

7NihilCredo13y
I hear that there already are voluntary, secretive leagues of people fighting to the death, even though the sport is banned. I don't know whether most fighters are enthusiastic or desperate for cash, though. But considering that becoming a Formula One driver was a common dream even when several driver deaths per year were the rule, I wouldn't be surprised if it were the former.
2bogus13y
Notwithstanding NihilCredo's point, the lack of gladiatorial combat today is most likely due to a genuine change in taste, probably related to secular decline in social violence and availability of increasingly varied entertainment (movie theaters, TV, video games etc.). The popularity of blood sports in general is decreasing. We also know that folks used to entertain themselves in ways that would be unthinkable today, such as gathering scores of cats and burning them in a fire.
5Eugine_Nier13y
For gladiatorial games specifically, their decline was caused by Christian objections. Sorry, you don't get to redefine historical facts just because they don't fit your narrative. Wait, that sounds like fun.
0Alicorn13y
Can you shed any light on why, or what would be fun about it? This confuses me.
5steven046113y
It makes me suspicious when some phenomenon is claimed to be general, but in practice is always supported using the same example.
3Nornagest13y
There's no shortage of well-documented blood sports both before and during the Christian era. I know of few as shocking as bogus's example (which was, incidentally, new to me), but one that comes close might be the medieval French practice of players tying a cat to a tree, restraining their own hands, and proceeding to batter the animal to death with their heads. This was mentioned in Barbara Tuchman's A Distant Mirror; Google also turns up a reference here. I suppose there's something about cats that lends itself to shock value.
0AlexanderRM9y
I would say yes, we should re-examine it. The entertainment value of forced gladiatorial games on randomly-selected civilians... I personally would vote against them because I probably wouldn't watch them anyway, so it would be a clear loss for me. Still, for other people voting in favor of them... I'm having trouble coming up with a really full refutation of the idea in the Least Convenient Possible World hypothetical where there's no other way to provide gladiatorial games, but there are some obvious practical alternatives.

It seems to me that voluntary gladiatorial games where the participants understand the risks and whatnot would be just fine to a consequentialist. It's especially obvious if you consider the case of poor people going into the games for money. There are plenty of people currently who die because of factors relating to lack of money. If we allowed such people to voluntarily enter gladiatorial games for money, then the gladiators would be quite clearly better off.

If we ever enter a post-scarcity society but still have demand for gladiatorial games, then we can obviously ask for volunteers and get people who want the glory/social status/whatnot of it. If for some reason that source of volunteers dried up, yet we still have massive demand, then we can have everyone who wants to watch gladiatorial games sign up for a lottery in exchange for the right to watch them, thus allowing their Rawlsian rights to be maintained while keeping the rest of the population free from worry.
2Scott Alexander13y
With the gladiatorial games, you seem to have focused on what I intended to be a peripheral point (I'll rephrase it later so this is clearer). The main point is that forcing people to become gladiators against their will requires a system that would almost certainly lower utility (really you'd have to have an institution of slavery or a caste system; any other way and people would revolt against the policy since they would expect a possibility of being made gladiators themselves).

Allowing people who want to become gladiators to do so risks the same moral hazards brought up during debates on prostitution - i.e. maybe they're just doing it because they're too poor or disturbed to have another alternative, and maybe the existence of this option might prevent people from creating a structure in which they do have another alternative. I'm split on the prostitution debate myself, but in a society where people weren't outraged by gladiatorial games, I would be willing to bite the bullet of saying the gladiator question should be resolved the same way as the prostitute question. In a utopian society where no one was poor or disturbed, and where people weren't outraged by gladiatorial games, I would be willing to allow people to become gladiators. (In our current society, I'm not even sure whether American football is morally okay.)
0AlexanderRM9y
"The main point is that forcing people to become gladiators against their will requires a system that would almost certainly lower utility (really you'd have to have an institution of slavery or a caste system; any other way and people would revolt against the policy since they would expect a possibility of being to be gladiators themselves)." It seems to me that, specifically, gladiatorial games that wouldn't lower utility would require that people not revolt against the system since they accept the risk of being forced into the games as the price they pay to watch the games. If gladiators are drawn exclusively from the slaves and lower castes, and the people with political power are exempted, then most likely the games are lowering utility. @ Prostitution: Don't the same arguments apply to paid labor of any type?
0Jiro9y
In the case of prostitution, similar arguments apply to some extent to all jobs, but "to some extent" covers very different degrees. My test would be as follows: ask how much people would have to be paid before they would be willing to take the job (in preference to a job of some arbitrary but fixed level of income and distastefulness). Compare that amount to the price that the job actually gets in a free market. The higher the ratio gets, the worse the moral hazard. I would expect both prostitution and being a gladiator to score especially low in this regard.
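As a minimal sketch of that ratio test (the job names and figures below are invented for illustration, not Jiro's):

```python
# Jiro's ratio test, sketched with made-up numbers: compare the pay a typical
# person would require to willingly take a job (versus a fixed baseline job)
# with the pay the job actually commands on the market. A higher ratio
# suggests the job is filled mainly out of desperation - a worse moral hazard.

jobs = {
    # job: (pay required to accept it willingly, actual market pay) - hypothetical figures
    "office clerk": (40_000, 38_000),
    "gladiator": (900_000, 45_000),
}

for job, (required_pay, market_pay) in jobs.items():
    ratio = required_pay / market_pay
    print(f"{job}: required/market ratio = {ratio:.1f}")

# On these invented figures the gladiator job comes out far worse
# (ratio ~20.0) than the clerk job (ratio ~1.1).
```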

I skimmed the FAQ (no time to read it in detail right now, though I have bookmarked it for later). I must say that it doesn't address some of the most crucial problems of consequentialism.

Most notably, as far as I can tell, you don't even address the problem of interpersonal utility comparison, which makes the whole enterprise moot from the beginning. Then, as far as I see, you give the game-theoretic concerns only a cursory passing mention, whereas in reality, the inability to account for those is one reason why attempts to derive useful guidelines for a... (read more)

Okay, summary of things I've learned I should change from this feedback:

  1. Fix dead links (I think OpenOffice is muddling open quotes and close quotes again)

  2. Table of contents/navigation.

  3. Stress REALLY REALLY REALLY HARD that this is meant to be an introduction and that there's much more stuff like game theory and decision theory that is necessary for a full understanding of utilitarianism.

  4. Change phlogiston metaphor subsequent to response from Vladimir_M

  5. Remove reference to Eliezer in "warm fuzzies" section.

  6. Rephrase "We have procedures

... (read more)

One small thing: you define consequentialism as choosing the best outcome. I think it makes a big difference, at least to our intuitions, if we instead say something like:

Okay. The moral law is that you should take actions that make the world better. Or, put more formally, when asked to select between possible actions A and B, the more moral choice is the one that leads to the better state of the world by whatever standards you judge states of the world by.

In other words, it's not all about the one point at the pinnacle of all the choices you could ma... (read more)

Some criticism that I hope you will find useful:

First of all, you should mention timeless decision theory, or at least superrationality. These concepts are useful for explaining why people's intuition that they should not steal is not horribly misguided even if the thief cares about himself more and/or would need it more than the previous owner. You touched on this by pointing out that the economy would collapse if everyone stole all the time, but I would suggest being more explicit.

(3.8) I think the best course of action would be to assign equal value

... (read more)
5NihilCredo13y
Very strongly disagree, and not just because I'm sceptical about both. The article is supposed to be about consequentialism, not Yvain's particular moral system. It should explain why you should apply your moral analysis to certain data (state of the world) instead of others ("rights"), but it shouldn't get involved in how your moral analysis exactly works. Yvain correctly mentions that you can be a paperclip maximiser and still be a perfect consequentialist.
2bogus13y
UDT and TDT are decision theories, not "moral systems". To the extent that consequentialism necessarily relies on some kind of decision theory--as is clearly the case, since it advocates choosing the optimal actions to take based on their outcomes--a brief mention of CDT, UDT and TDT explaining their relevance to consequentialist ethics (see e.g. the issue of "rule utilitarianism" vs. "act utilitarianism") would have been appropriate.
1NihilCredo13y
I deleted a moderate wall of text because I think I understand what you mean now. I agree that two consequentialists sharing the same moral/utility function, but adopting different decision theories, will have to make different choices.

However, I don't think it would be a very good idea to talk about various DTs in the FAQ. That is: showing that "people's intuition that they should not steal is not horribly misguided", by offering them the option of a DT that supports a similar rule, doesn't seem to me like a worthy goal for the document. IMO, people should embrace consequentialism because it makes sense - because it doesn't rely on pies in the sky - not because it can be made to match their moral intuitions. If you use that approach, you could in the same way use the fat man trolley problem to support deontology.

I might be misinterpreting you or taking this too far, but what you suggest sounds to me like "Let's write 'Theft is wrong' on the bottom line because that's what is expected by readers and makes them comfortable, then let's find a consequentialist process that will give that result so they will be happy" (note that it's irrelevant whether that process happens to be correct or wrong). I think discouraging that type of reasoning is even more important than promoting consequentialism.
2Eugine_Nier13y
The whole point of CEV, reflexive consistency and the meta-ethics sequence is that morality is based on our intuitions.
1NihilCredo13y
Yes, I personally think that's awful. LessWrong rightly tends to promote being sceptical of one's mere intuitions in most contexts, and I think the same approach should be taken with morality (basically, this post on steroids).
-2Marius13y
If this is to be useful, it would have to read "that our intuitions are based on morality".
4CronoDAS13y
Desire utilitarianism doesn't replace preferences with desires; it replaces actions with desires. It's not a consequentialist system; it's actually a type of virtue ethics. When confronted with the "fat man" trolley problem, it concludes that there are good agents that would push the fat man and other good agents that wouldn't. You should probably avoid mentioning it.
0Scott Alexander13y
Thank you. That makes more sense than the last explanation of it I read.

One problem with the FAQ: The standard metaethics around here, at least EY's metaethics, is not utilitarianism. Utilitarianism says maximize aggregate utility, with "aggregate" defined in some suitable way. EY's metaethics says maximize your own utility (with the caveat that you only have partial information of your utility function), and that all humans have sufficiently similar utility functions.

You get something pretty similar to utilitarianism from that last condition (if everyone has the same utility function and you're maximizing your own ... (read more)

5ata13y
Utilitarianism isn't a metaethic in the first place; it's a family of ethical systems. Metaethical systems and ethical systems aren't comparable objects. "Maximize your utility function" says nothing, for the reasons given by benelliott, and isn't a metaethical claim (nor a correct summary of EY's metaethic); metaethics deals with questions like what moral terms mean and in what sense moral claims can be true. EY's metaethic approaches those questions as an unpacking of "should" and other moral symbols. While it does give examples of some of the major object-level values we'd expect to find in ethical systems, it doesn't generate a brand of utilitarianism or a specific utility function.

(And "utility" as in what an agent with a (VNM) utility function maximizes (in expectation), and "utility" as in what a utilitarian tries to maximize in aggregate over some set of beings, aren't comparable objects either, and they should be kept cognitively separate.)
0Matt_Simpson13y
Good point. Here's the intuition behind my comment. Classical utilitarianism starts with "maximize aggregate utility" and jumps off from there (Mill calls it obvious, then gives a proof that he admits to be flawed). This opens them up to a slew of standard criticisms (e.g. utility monsters). I'm not very well versed on more modern versions of utilitarianism, but the impression I get is that they do something similar.

But, as you point out, all the utilitarian is saying is which utility function you should be maximizing (answer: the aggregate of the utility functions of all suitable agents). EY's metaethics, on the other hand, eventually says something like "maximize this specific utility function that we don't know perfectly. Oh yeah, it's your utility function, and most everyone else's." With a suitable utility function, EY's metaethics seems completely compatible with utilitarianism, I admit, but that seems unlikely. The utilitarian has to take into account the murderer's preference for murder, should that preference actually exist (and not be a confusion). It seems highly unlikely to me that I and most of my fellow humans (which is where the utility function in question exists) care about someone's preference for murder. Even assuming that I/we thought faster, more rationally, etc.

Oh, and a note on the "maximize your own utility function" language that I used. I tend to think about ethics in the first person: what should I do. Well, maximize my own utility function/preferences, whatever they are. I only start worrying about your preferences when I find out that they are information about my own preferences (or if I specifically care about your preferences in my own). This is an explanation of how I'm thinking, but I should know better than to use this language on LW where most people haven't seen it before and so will be confused.
9steven046113y
The answer is the aggregate of some function for all suitable agents, but that function needn't itself be a decision-theoretic utility function. It can be something else, like pleasure minus pain or even pleasure-not-derived-from-murder minus pain.
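A rough way to write down the distinction steven0461 is drawing (my notation, not his): the aggregate has the form

\[ W = \sum_{i \in \text{agents}} f(i), \]

where \(f(i)\) might be \(\mathrm{pleasure}_i - \mathrm{pain}_i\) (or pleasure-not-derived-from-murder minus pain) rather than agent \(i\)'s decision-theoretic (VNM) utility function \(u_i\).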
0Matt_Simpson13y
Ah, I was equating preference utilitarianism with utilitarianism. I still think that calling yourself a utilitarian can be dangerous if only because it instantly calls to mind a list of stock objections (in some interlocutors) that just don't apply given EY's metaethics. It may be worth sticking to the terminology despite the cost, though.
2komponisto13y
Be careful not to confuse ethics and metaethics. You're talking about ethical theories here, rather than metaethical theories. (EY's metaethics is a form of cognitivism).
0Scott Alexander13y
I'm glad you brought that up, since it's something I've mentally been circling around but never heard verbalized clearly before. Both the classical and the Yudkowskian system seem to run into some problems that the other avoids, and right now I'm classifying the difference as "too advanced to be relevant to the FAQ". Right now my own opinions are leaning toward believing that under reflective equilibrium my utility function should reference the aggregate utility function and possibly be the same as it.
-1benelliott13y
It is literally impossible to maximise anything other than your own utility function, because your utility function is defined as 'that which you maximise'. In that sense EY's meta-ethics is a tautology; the important part is about not knowing what your utility function is.

It is literally impossible to maximise anything other than your own utility function,

No. People can be stupid. They can even be wrong about what their utility function is.

because your utility function is defined as 'that which you maximise'.

It is "that which you would maximise if you weren't a dumbass and knew what you wanted'.

3benelliott13y
Good point. Perhaps I should have said "it's impossible to intentionally maximise anything other than your utility function".
2blacktrance11y
People can intentionally maximize anything, including the number of paperclips in the universe. Suppose there was a religion or school of philosophy that taught that maximizing paperclips is deontologically the right thing to do - not because it's good for anyone, or because Divine Clippy would smite them for not doing it, just that morality demands that they do it. And so they choose to do it, even if they hate it.
0benelliott11y
In that case, I would say their true utility function was "follow the deontological rules" or "avoid being smited by divine clippy", and that maximising paperclips is an instrumental subgoal. In many other cases, I would be happy to say that the person involved was simply not utilitarian, if their actions did not seem to maximise anything at all.
0blacktrance11y
If you define "utility function" as "what agents maximize" then your above statement is true but tautological. If you define "utility function" as "an agent's relation between states of the world and that agent's hedons" then it's not true that you can only maximize your utility function.
0benelliott11y
I certainly do not define it the second way. Most people care about something other than their own happiness, and some people may care about their own happiness very little, not at all, or negatively; I really don't see why a 'happiness function' would be even slightly interesting to decision theorists. I think I'd want to define a utility function as "what an agent wants to maximise", but I'm not entirely clear how to unpack the word 'want' in that sentence; I will admit I'm somewhat confused. However, I'm not particularly concerned about my statements being tautological - they were meant to be, since they are arguing against statements that are tautologically false.
0Matt_Simpson13y
Unless I'm misunderstanding you, your definition leaves no room for moral error. Surely it's possible to come up with some utility function under which your actions are maximizing. So everyone has a utility function under which the actions they took were maximizing.
0benelliott13y
I'm not quite sure what you mean.

A contents section, with links to the relevant sections, would aid navigation.

Some thoughts I had while reading (part of) the FAQ:

...our moral intuition...that we should care about other people.

Is it an established fact that this is a natural human intuition, or is it a culturally induced disposition?

If it is a natural human tendency, can we draw the line at caring about other people or do we also care about cute kittens?

Other moral systems are more concerned with looking good than being good...

Signaling is a natural human tendency. Just like caring about other people, humans care how they appear to other people.

Why should a m... (read more)

On 3.4: The term "warm fuzzies" was invented before EY was born - I remember the story from high school.

With the power of Google, I found the story on the web; it is by Claude Steiner, copyrighted in 1969.

Not really philosophical feedback, but all the links except the one to the LW metaethics sequence seem to be broken for me.

1Sniffnoy13y
Seems to be because they were written using "smart quotes" instead of actual quote marks.

Can this be an article on LW please? This link isn't very pretty and the raikoth link doesn't work. Thanks!

3Amanda de Vasconcellos3y
This seems to be the most recent: https://web.archive.org/web/20161020171351/http://www.raikoth.net/consequentialism.html

If God made His rules arbitrarily, then there is no reason to follow them except for self-interest (which is hardly a moral motive)

A perfectly moral motive, in the sense you are using the term.

0orthonormal13y
Yeah, this jumped out at me too, but I think that expanding on that caveat would probably lose the intended audience.
0Vladimir_Nesov13y
I would just remove the parenthetical and/or change the example. Possibly explain the distinction between terminal and instrumental goals first.

I'm looking forward to live discussion of this topic at the Paris meetup. :)

Meanwhile, I've read through it, more closely. Much of it seems, not necessarily right, but at least unobjectionable - it raises few red flags. On the other hand, I don't think it makes me much the wiser about the advantages of consequentialism.

Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The "test" you propose in both cases is more or less the same as Rawls' Veil of Ignorance. So at that point I was wondering, if you apply Rawls... (read more)

2AlexanderRM9y
"Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The "test" you propose in both cases is more or less the same as Rawls' Veil of Ignorance. So at that point I was wondering, if you apply Rawls' procedure to determine what is a preferable social contract, perhaps you're a Rawlsian more than you're a consequentialist. :) BTW, are you familiar with Rawls' objections to (classical) utilitarianism?" I can't speak for Yvain but as someone who fully agreed with his use of that test, I would describe myself as both a Rawlsian (in the sense of liking the "veil of ignorance" concept) and a Utilitarian. I don't really see any conflict between the two. I think maybe the difference between my view and that of Rawls is that I apply something like the Hedonic Treadmill fully (despite being a Preference Utilitarian), which essentially leads to Yvain's responses. ...Actually I suppose I practically define the amount of Utility in a world by whether it would be better to live there, so maybe it would in fact be better to describe me as a Rawslian. I still prefer to think of myself as a Utilitarian with a Rawlsian basis for my utility function, though (essentially I define the amount of utility in a world as "how desirable it would be to be born as a random person in that world). I think it's that Utilitarianism sounds easier to use as a heuristic for decisions, whereas calling yourself a Rawlsian requires you to go one step further back every time you analyze a thought experiment.
0Morendil9y
This later piece is perhaps relevant.
0Scott Alexander13y
I've responded to some of Vladimir's comments, but just a few things you touched on that he didn't:

Utility monsters: if a utility monster just means someone who gets the same amount of pleasure from an ice cream that I get from an orgasm, then it just doesn't seem that controversial to me that giving them an ice cream is as desirable as giving me an orgasm. Once we get to things like "their very experience is a million times stronger and more vivid than you could ever imagine" we're talking a completely different neurological makeup that can actually hold more qualia, which is where the ant comes in.

I don't see a philosophical distinction between the morality an individual should use and the morality a government should use (although there's a very big practical distinction since governments are single actors in their own territories and so can afford to ignore some game theoretic and decision theoretic principles that individuals have to take into account). The best state of the world is the best state of the world, no matter who's considering it. I use mostly examples from government because moral dilemmas on the individual level are less common, less standardized, and less well-known.

suppose some mathematician were to prove, using logic, that it was moral to wear green clothing on Saturday. There are no benefits to anyone for wearing green clothing on Saturday, and it won't hurt anyone if you don't. But the math apparently checks out. Do you shrug and start wearing green clothing? Or do you say “It looks like you have done some very strange mathematical trick, but it doesn't seem to have any relevance to real life and I feel no need to comply with it"?

Supposing a consequentialist were to prove using maths that you should be ... (read more)

This isn't so much a critique of consequentialism as of the attempt to create objective moral systems in general. I would love for the world to follow a particular moral order (namely mine). But there are people who, for what I would see as being completely sane reasons, disagree with me. On the edges, I have no problem writing mass murderers off as being insane. Beyond that, though, in the murky middle, there are a number of moral issues (and how is that dividing line drawn? Is it moral to have anything above a sustenance level meal if others are star... (read more)

3fubarobfusco13y
What would it mean for the PETA member to be right? Does it just mean that the PETA member has sympathy for chickens, whereas you and I do not? Or is there something testable going on here?

It doesn't seem to me that the differences between the PETA members, us, and the Romans, are at all unclear. They are differences in the parties' moral universe, so to speak: the PETA member sees a chicken as morally significant; you and I see a Scythian, Judean, or Gaul as morally significant; and the Roman sees only another Roman as morally significant. (I exaggerate slightly.) A great deal of moral progress has been made through the expansion of the morally significant; through recognition of other tribes (and kinds of beings) as relevant objects of moral concern.

Richard Rorty has argued that it is this sympathy or moral sentiment — and not the knowledge of moral facts — which makes the practical difference in causing a person to act morally; and that this in turn depends on living in a world where you can expect the same from others. This is an empirical prediction: Rorty claims that expanding people's moral sympathies to include more others, and giving them a world in which they can expect others to do the same in turn, is a more effective way of producing good moral consequences, than moral philosophizing is. I wonder what sort of experiment would provide evidence one way or the other.
3zaph13y
That's an interesting link to Rorty; I'll have to read it again in some more detail. I really appreciated the quote you included; that really seems to hit it for me. That flexibility, the sense that we can step beyond being warlike, or even calculating, seems to be critical to what morals are all about.

I don't want to make it sound like I'm against a generally moral culture, where happiness is optimized (or some other value I like personally). I just don't think moral philosophizing would get us there. I'll have to read up more on the moral sentiments approach. I have read some of Rorty's papers, but not his major works. I would be interested to see these ideas of his paired with meme theory. Describing moral sentiment as a meme that enters a positive feedback loop, where groups that have it survive longer than ones that don't, seems very plausible to me.

I'll have to think more about your PETA question. I think it goes beyond sympathy. I don't know how to test the positions, though. I don't think viewing chickens as being equally morally significant would lead to a much better world (for humans - chickens are a different matter). Even with the moral sentiment view, I don't see how each side could come to a clear resolution.
-2Nornagest13y
I do wonder what would constitute "good moral consequences" in this context. If it's being defined as the practical extension of goodwill, or of its tangible signs, then the argument seems very nearly tautological.
3fubarobfusco13y
Not to put too fine a point on it, but part of Rorty's argument seems to be that if you don't already have a reasonably good sense for what "good moral consequences" would be, then you're part of the problem. Rorty claims that philosophical ethics has been largely concerned with explaining to "psychopaths" like Thrasymachus and Callicles (the sophists in Plato's dialogues who argue that might makes right) why they would do better to be moral, but that the only way for morality to win out in the real world is to avoid bringing agents into existence that lack moral sentiment.

As far as I can tell, this fits perfectly into the FAI project, which is concerned with bringing into existence superhuman AI that does have a sense of "good moral consequences" before someone else creates one that doesn't.
1Nornagest13y
You can't write an algorithm based on "if you don't get it, you're part of the problem". You can get away with telling that to your children, sort of, but only because children are very good at synthesizing behavioral rules from contextual cues.

Rorty's advice might be useful as a practical guide to making moral humans, but it only masks the underlying issue: if the only way for morality to win in the real world is to avoid bringing amoral agents into existence, then there must already exist a well-bounded set of moral utility functions for agents to follow. It doesn't tell us much about what such a set might contain, giving only a loose suggestion that good morality functions tend to be relatively subject-independent.

Now, to encode a member of such a set into an AI (which may or may not end up being Friendly depending on how well those functions generalize outside the human problem domain), you need a formalization of it. To teach one implicitly, you need a formalization of something analogous (but not necessarily identical) to the social intuitions that human children use to derive their morals, which is most likely a harder problem. And if you have such a formalization, explaining an instance of moral behavior to a rational sociopath is as easy as running it on particular inputs. Presented with an irrational sociopath you're out of luck, but I can't think of any ethical systems that don't have that problem.
[-][anonymous]13y20

Somewhat cringeworthy:

This term ("warm fuzzies"), invented as far as I know by computer ethicist Eliezer Yudkowsky...

I'm pretty sure not. See here for a reference dating the term itself back to 1969; also, it's been in use in geek culture for quite a while.

Link doesn't seem to work.

Here is what David Pearce has to say about the FAQ (via Facebook):

Lucid. But the FAQ (and lesswrong) would be much stronger if it weren't shot through with anthropocentric bias...

Suggestion: replace "people" with "sentient beings".

For the rational consequentialist, the interests of an adult pig are (other things being equal) as important as those of a human toddler. Sperm whales are probably more sentient than human toddlers; chickens probably less.

Ethnocentric bias now seems obvious. If the FAQ said "white people" throughout rather

... (read more)
4Vaniver13y
Because, in order to be a rational consequentialist, one needs to forget that human toddlers grow into adult humans and adult pigs grow into... well, adult pigs.
1NihilCredo13y
Careful, that leads straight into an abortion debate (via "if you care that much about potential development, how much value do you give a fetus / embryo / zygote?").
2Vaniver13y
I am aware. If the thought process involved is "We can't assign values to future states because then we might be opposed to abortion" then I recommend abandoning that process. If the thought process is just "careful, there's a political schism up ahead" that fails to realize we are already in a political schism about animal rights.
0[anonymous]13y
There is more than one way to interpret your original objection and I wonder whether you and NihilCredo are talking about the same thing. Consider two situations:

(1) The toddler and the pig are in mortal danger. You can save only one of them.

(2) The toddler and the pig will both live long lives but they're about to experience extreme pain. Once again, you can prevent it for only one of them.

I think it's correct to take future states into consideration in the second case, where we know that there will be some suffering in the future and we can minimize it by asking whether humans or pigs are more vulnerable to suffering resulting from past traumas. But basing the decision of which being gets to have the descendants of its current mind-state realized in the future on the awesomeness of those descendants, rather than solely on the current mind-state, seems wrong. And the logical conclusion of that wouldn't be opposition to abortion, it would be opposition to anything that isn't efficient procreation.
1Vaniver13y
Why throw away that information? Because it's about the future?
0[anonymous]13y
I don't know how to derive my impression from first principles. So the answer has to be: because my moral intuitions tell me to do so. But they only tell me so in this particular case -- I don't have a general rule of disregarding future information.
1Vaniver13y
Ok. I will try to articulate my reasoning, and see if that helps clarify your moral intuitions: a "life" is an ill-defined concept, compared to a "lifespan." So when we have to choose one of two individuals, the way our choice changes the future depends on the lifespans involved. If the choice is between saving someone 10 years old with 70 years left or someone 70 years old with 10 years left, then one choice results in 60 more years of aliveness than the other! (Obviously, aliveness is not the only thing we care about, but this is good enough for a first approximation to illustrate the idea.)

And so the state between now and the next second (i.e. the current mind-state) is just a rounding error when you look at the change to the whole future; in the future of the human toddler it is mostly not a human toddler, whereas in the future of the adult pig it is mostly an adult pig. If we prefer adult humans to adult pigs, and we know that adult pigs have a 0% chance of becoming adult humans and human toddlers have a ~98% chance, then combining those pieces of knowledge gives us a clear choice.

If this is not a general principle, it may be worthwhile to try and tease out what's special about this case, and why that seems special. It may be that this is a meme that's detached from its justification, and that you should excise it, or that there is a worthwhile principle here you should apply in other cases.
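A minimal sketch of the expected-value comparison Vaniver is describing, combining the two examples above; the probability and lifespan figures are the rough illustrative ones from the comment, not real data:

```python
# Weight each rescue by the expected years of valued future life it preserves,
# rather than by the current mind-state alone. Numbers are illustrative only.

p_toddler_reaches_adulthood = 0.98  # "~98% chance" from the comment above
toddler_remaining_years = 70        # rough remaining human lifespan
pig_remaining_years = 10            # hypothetical remaining pig lifespan

expected_adult_human_years = p_toddler_reaches_adulthood * toddler_remaining_years
expected_adult_pig_years = 1.0 * pig_remaining_years  # a pig never becomes an adult human

print(f"Save toddler: ~{expected_adult_human_years:.0f} expected adult-human years")
print(f"Save pig:     ~{expected_adult_pig_years:.0f} expected adult-pig years")

# If adult-human years are valued above adult-pig years, counting future
# states makes saving the toddler the clear choice, as the comment argues.
```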
0NihilCredo13y
I meant the latter. Your assessment is correct, although the mind-killing ability of a real-life debate (prenatal abortion y/n) is significantly higher than that of a largely hypothetical debate (equalising the rights of toddlers and smart animals).
3wedrifid13y
What would leap off the page is the 'white people' phrase. Making that explicit would be redundant and jarring. Perhaps even insulting. It should have been clear what 'people' meant without specifying color.
1Tripitaka13y
In fact, adult pigs are of more concern than <2 year old toddlers; they pass a modified version of the mirror test and thus seem to be self-conscious. http://en.wikipedia.org/wiki/Pigs#cite_note-AnimalBehaviour-10
0Scott Alexander13y
I can't even use nonstandard pronouns without it impeding readability, so I think I'm going to sacrifice precision and correctness for the sake of ease-of-understanding here.
0Sniffnoy13y
"People" need not mean "humans", it can mean "people". Also, people should really stop using the word "sentient". It's a useless word that seems to serve no purpose beyond causing people to get intelligence and consciousness confused. (OK, Pearce does seem pretty clear on what he means here; he doesn't seem to have been confused by it himself. Nonetheless, it's still aiding in the confusion of others.)
0Emile13y
Now we know Clippy's true identity! (I kid, I kid. Thinking correctly about morality applied to non-human sentient beings is a Tough Problem)

As far as I can tell, she means "Better that everyone involved dies as long as you follow some arbitrary condition I just made up, than that most people live but the arbitrary condition is not satisfied." Do you really want to make your moral decisions like this?

The whole problem is that everything is framed in moral terminology.

The trolley problem really isn't any different from the prisoner's dilemma with regard to human nature.

On one side there is the game-theoretic, Pareto-suboptimal solution that a rational agent would choose and on the other side ... (read more)

Here's something that, if it's not a frequently asked question, ought to be one. What do you mean by "state of the world" - more specifically, evaluated at what time, and to what degree of precision?

Some utilitarians argue that we should give money to poorer people because it will help them much more than the same amount will help us. An obvious question is "how do you know"? You don't typically know the consequences that a given action will have over time, and there may be different consequences depending on what "state of the wor... (read more)

6.4) In the United States, at least, there are so many laws that it's not possible to live a normal life without breaking many of them.

See: http://www.harveysilverglate.com/Books/tabid/287/Default.aspx

7.1) You could come up with better horribly "seeming" outcomes that consequentialism would result in. For example, consequentialists who believe in heaven and hell and think they have insight into how to get people into heaven would be willing to do lots of nasty things to increase the number of people who go to heaven. Also, dangerous sweatsho... (read more)

7NihilCredo13y
This is actually a serious, mainstream policy argument that I've heard several times. It goes like "If you ban sweatshops, sweatshop workers won't have better jobs; they'll just revert to subsistence farming or starve to death as urban homeless". I'm not getting into whether it's a correct analysis (and it probably depends on where and how exactly 'sweatshops' are 'banned'), but my point is that it wouldn't work very well as an "outrageous" example.
1James_Miller13y
That's why I wrote "horribly 'seeming'" and not just horribly.
0AlexanderRM9y
Interesting observation: You talked about that in terms of the effects of banning sweatshops, rather than talking about it in terms of the effects of opening them. It's of course the exact same action and the same result in every way - deontological as well as consequentialist - but it changes from "causing people to work in horrible sweatshop conditions" to "leaving people to starve to death as urban homeless", so it switches around the "killing vs. allowing to die" burden. (I'm not complaining, FYI, I think it's actually an excellent technique. Although maybe it would be better if we came up with language to list two alternatives neutrally with no burden of action.)
2AlexanderRM9y
"consequentialists who believe in heaven and hell and think they have insight into how to get people into heaven would be willing to do lots of nasty things to increase the number of people who go to heaven." I fully agree with this (as someone who doesn't believe in heaven and hell, but is a consequentialist), and also would point out that it's not that different from the way many people who believe in heaven and hell already act (especially if you look at people who strongly believe in them; ignore anyone who doesn't take their own chances of heaven/hell into account in their own decisions). In fact I suspect that even from an atheistic, humanist viewpoint, consequentialism on this one would have been better in many historical situations than the way people acted in real life; if a heathen will go to hell but can be saved by converting them to the True Faith, then killing heathens becomes an utterly horrific act. Of course, it's still justified if it allows you to conquer the heathens and forcibly convert them all, as is killing a few as examples if it gets the rest to convert, but that's still better than the way many European colonizers treated native peoples in many cases.

As a philosophy student, complicated though the FAQ is, I think I could knock down the arguments therein fairly easily. That's kind of irrelevant, however.

More importantly, I'm looking for the serious argument for Consequentialism on LessWrong. Could somebody help me out here?

0shminux11y
If not consequentialism, what's your preferred ethics?
0Carinthium11y
I haven't fully made up my mind yet- it would be inaccurate to place me in any camp for that reason.

Some observations... There is no discussion in your FAQ of the distinctions between utilitarianism and other forms of consequentialism, or between act consequentialism and rule consequentialism, or between "partial" and "full" rule consequentialism. See http://plato.stanford.edu/entries/consequentialism-rule/ for more on this.

Where you discuss using "heuristics" for moral decision making (rather than trying to calculate the consequences of each action), you are heading into the "partial rule consequentialism" camp. ... (read more)

Part One:

Methodology: Why think that intuitions are reliable? What is reflective equilibrium, other than reflecting on our intuitions? If it is some process by which we balance first-order intuitions against general principles, why think this process is reliable?

Metaethics: Realism vs. Error vs. Expressivism?

Part Two: 2.6 I don't see the collapse - an axiology may be paired with different moralities - e.g. a satisficing morality, or a maximizing morality. Maybe all that is meant by the collapse is that the right is a function of the good? Then 'col... (read more)

0Scott Alexander13y
P1: Intuitions being "reliable" requires that the point of intuitions be to correspond to something outside themselves. I'm not sure moral intuitions have this point. P2: Point taken. P4.2: I agree with taking actions that make the world better instead of best and will rephrase. I don't understand the point of your second sentence. 4.4: Concern about not using others as means, or doing/allowing distinctions, seem to me common-sensically not to be about states of the world. I'm not sure what further argument is possible let alone necessary. The discussion of guilt only says that's the only state-of-the-world-relevant difference. 5.4: Would you agree that most of the philosophically popular consequentialisms (act, rule, preference, etc.) usually converge? 7.3 and below: I don't think slavery and gladiators are necessarily wrong. I can imagine situations in which they would be okay (I've mentioned some for gladiators above) and I remain open to moral argument from people who want to convince me they're okay in our own world (although I don't expect this argument to succeed any more than I expect to be convinced that the sky is green). If the belief that slavery is wrong is not an axiom, but instead derives from deeper moral principles that when formalized under reflective equilibrium give you consequentialism, then I think it's fair to say that consequentialism proves they are wrong, but that in a counterfactual world where consequentialism proved they were right, I would either have intuitions that they were right, or be willing to discard my intuition that they were wrong after considering the consequentialist arguments against it.

I like the idea of a non-technical explanation of consequentialism, but I worry that many important distinctions will be conflated or lost in the process of generating something that reads well and doesn't require the reader to spend a lot of time thinking about the subject by themselves before it makes sense.

The issue that stands out the most to me is what you write about axiology. The point you seem to want to get across, which is what I would consider to be the core of consequentilalism in one sentence, is that "[...]our idea of “the good” sho... (read more)

[-][anonymous]13y00

The basic thesis is that consequentialism is the only system which both satisfies our moral intuition that morality should make a difference to the real world, and that we should care about other people.

Why am I supposed to adopt beliefs about ethics and decision theory based on how closely they match my intuitions, but I am not supposed to adopt beliefs about quantum physics or cognitive psychology based on how closely they match my intuitions? What warrants this asymmetry?

Also, your FAQ lacks an answer to the most important question regarding utilitarianism: By what method could we aggregate the utilities of different persons?

Fairly good summary. I don't mind the FAQ structure. The writing style is good, and the subject matter suggests obvious potential to contribute to the upcoming Wiki Felicifia in some way. Now as good as the essay is, I have some specific feedback:

In section 2.2, I wonder if you could put your point more strongly...

you wrote: if morality is just some kind of metaphysical rule, the magic powers of the Heartstone should be sufficient to cancel that rule and make morality irrelevant. But the Heartstone, for all its legendary powers, is utterly worthless and... (read more)

[-][anonymous]13y00

On this site it's probably just me, but I just can't (or won't) bring myself to assign a non-zero value to people I do not even know. Since the FAQ rests on the assumption that this is not the case, it's practically worthless to me. This could be as it should be, if people like me just aren't your target audience, in which case it would be helpful to have such a statement in the 3.1 answer.

Edit: Thinking about it, this is probably not the case. If all of the people except those I know were to die, I might feel sad about it. So my valuation of them might actually be non-zero.

5wedrifid13y
In that case let me introduce myself. I'm Cameron. Come from Melbourne. I like walks on the beach. Do I get epsilon value?

OT: The format seemed familiar and then I looked back and found out it was because I had read your libertarian FAQ. Best anti-libertarian FAQ I've seen!

3timtyler13y
http://www.raikoth.net/libertarian.html The home page references on these pages should be links.
0NihilCredo13y
I've seen the link before, but I hadn't read it. The best part was probably the one about formaldehyde; I'll be stealing that in the future. The most ironic part has got to be this sentence: in a freaking critique of libertarianism.
-1[anonymous]13y
http://www.raikoth.net/libertarian.html

Or, put more formally, when asked to select between several possible actions A, B and C, the most moral choice is the one that leads to the best state of the world by whatever standards you judge states of the world by.

This is the key definition, yet it doesn't seem to actually say anything. Moral choice = the choice that makes you happy. This is a rejection of ethics, not an ethical system. If it were, it would be called individual consequentialism, that is, "Forget all this ethics tripe."

Yet after that pretense of doing away with all ethics... (read more)

Isn't the difference between good and right where decision theory lives?