OK, I've read the whole FAQ. Clearly, a really detailed critique would have to be given at similar length. Therefore, here is just a sketch of the problems I see with your exposition.
For a start, you use several invalid examples, or at least controversial examples that you incorrectly present as clear-cut. For example, the phlogiston theory was nothing like the silly strawman you present. It was a falsifiable scientific theory that was abandoned because it was eventually falsified (when it was discovered that burning stuff adds mass due to oxidation, rather than losing mass due to escaped phlogiston). It was certainly a reductionist theory -- it attempted to reduce fire (which itself has different manifestations) and human and animal metabolism to the same underlying physical process. (Google "Becher-Stahl theory".) Elsewhere, you present the issue of "opposing condoms" as a clear-cut case of "a horrendous decision" from a consequentialist perspective -- although in reality the question is far less clear.
Otherwise, up to Section 4, your argumentation is passable. But then it goes completely off the rails. I'll list just a few main issues...
Each of these issues could be the subject of a separate lengthy discussion, but I'll try to address them as succinctly as possible:
Re: phlogiston. Yes, Eliezer's account is inaccurate, though it seems you have inadvertently made the account out to be even worse than it is. Generally, one recurring problem in the writings of EY (and various other LW contributors) is that they are often too quick to proclaim various beliefs and actions silly and irrational, without adequate fact-checking and analysis.
Re: interpersonal utility aggregation/comparison. I don't think you can handwave this away -- it's a fundamental issue on which everything hinges. For comparison, imagine someone saying that your consequentialism is wrong because it's contrary to God's commands, and when you ask how we know that God exists and what his commands are, they handwave it by saying that theologians have some ideas on how to answer these questions. In fact, your appeal to authority is worse in an important sense, since people are well aware that theologians are in disagreement on these issues and have nothing like definite unbiased answers backed by evidence, whereas your answer will leave many people thinking falsely that
However, this idea seems to me fundamentally consequentialist - to make this objection, one starts by assuming consequentialist principles, but then argues that they can't be put into action, and so we should retreat from pure consequentialism on consequentialist grounds.
Fair enough. Though I can grant this only for consequentialism in general, not utilitarianism -- unless you have a solution to the fundamental problem of interpersonal utility comparison and aggregation. (In which case I'd be extremely curious to hear it.)
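To make the aggregation problem concrete, here is a minimal sketch (my own illustration, with made-up numbers, not anything from the FAQ): von Neumann-Morgenstern utilities are only defined up to a positive affine transformation for each person, so "summing utilities across people" depends on an arbitrary choice of scale for each individual.

```python
# Two outcomes, with (made-up) utilities for two people.
outcomes = {
    "A": {"alice": 1.0, "bob": 0.0},
    "B": {"alice": 0.0, "bob": 0.6},
}

def best(outcomes, rescale_bob=lambda u: u):
    """Pick the outcome maximizing the sum of Alice's and Bob's utilities,
    after applying some rescaling to Bob's numbers."""
    totals = {o: u["alice"] + rescale_bob(u["bob"]) for o, u in outcomes.items()}
    return max(totals, key=totals.get)

print(best(outcomes))                   # totals: A=1.0, B=0.6 -> "A"
print(best(outcomes, lambda u: 2 * u))  # same Bob preferences, new scale -> "B"
```

Doubling Bob's utilities leaves every choice Bob himself would make unchanged, yet it flips which outcome the aggregate recommends -- which is exactly why the comparison problem can't be handwaved away.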
I still don't understand why you think consequentialism implies or even suggests centralized economic planning when we both agree centralized economic planning would have bad consequences.
I gave it as a historical example of a once wildly popular bad idea that was a product of consequentialist thinking. Of course, as you point out, that was an instance of flawed consequentialist thinking, since the consequences were in fact awful. The problem however is that these same patterns of thinking are by no means dead and gone -- it is only that some of their particular instances have been so decisively discredited in practice that nobody serious supports them any more...
This essay by David Friedman is probably the best treatment of the subject of Schelling points in human relations:
http://www.daviddfriedman.com/Academic/Property/Property.html
Applying these insights to the fat man/trolley problem, we see that the horrible thing about pushing the man is that it transgresses the gravest and most terrible Schelling point of all: the one that defines unprovoked deadly assault, whose violation is understood to give the other party the licence to kill the violator in self-defense. Normally, humans see such crucial Schelling points as sacrosanct. They are considered violable, if at all, only when the consequentialist scales are loaded to a far more extreme degree than in the common trolley-problem formulations -- and even then, the act will likely cause serious psychological damage to the person who performs it. This is probably an artifact of an additional commitment not to violate them, which may also serve as a safeguard against rationalizations.
Now, the utilitarian may reply that this is just human bias, an unfortunate artifact of evolutionary psychology, and we’d all be better off if people instead made decisions according to pure utilitarian calculus. However, even ignoring all th...
The switch example is more difficult to analyze in terms of the intuitions it evokes. I would guess that the principle of double effect captures an important aspect of what's going on, though I'm not sure how exactly. I don't claim to have anything close to a complete theory of human moral intuitions.
In any case, the fact that someone who flipped the switch appears much less bad (if bad at all) compared to someone who pushed the fat man strongly suggests that there is some important game-theoretic issue involved; otherwise we probably wouldn't have evolved such an intuition (either culturally or genetically). In my view, this should be the starting point for studying these problems, with humble recognition that we are still largely ignorant about how humans actually manage to cooperate and coordinate their actions, instead of naive scoffing at how supposedly innumerate and inconsistent our intuitions are.
I reluctantly agree. I like Spivak pronouns, but since most people haven't even heard of them, using them probably makes your FAQ less effective for most readers.
Some "first impressions" feedback: though it has a long tradition in geekdom, and occasionally works well, I for one find the fake-FAQ format extremely offputting. These aren't "frequently" asked questions - they are your questions, and not so much questions as lead-ins to various parts of an essay.
I'd be more interested if you started off with a concise definition of what you mean by "consequentialism". (If you were truly writing a FAQ, that would be the most frequently asked question of all). As it is, the essay starts losing me by the time it gets to "part one" - I skipped ahead to see how long I should expect to spend on preliminaries before getting to the heart of the matter.
(Usually, when you feel obliged to make a self-deprecating excuse such as "Sorry, I fell asleep several pages back" in your own writing, there's a structure issue that you want to correct urgently.)
I think your analysis of the 'gladiatorial objection' misses something:
I hope I'm not caricaturing you too much if I condense your rebuttal as follows: "People wouldn't really enjoy watching gladiators fight to the death. In fact, they'd be sickened and outraged. Therefore, utilitarianism does not endorse gladiatorial games after all."
But there's a problem here: If the negative reaction to gladiatorial games is itself partly due to analyzing those games in utilitarian terms then we have a feedback loop.
Games are outrageous --> decrease utility --> are outrageous --> etc.
But this could just as well be a 'virtuous circle':
Games are noble --> increase utility --> are noble --> etc.
If we started off with a society like that of ancient Rome, couldn't it be that the existence of gladiatorial games is just as 'stable' (with respect to the utilitarian calculus) as their non-existence in our own society?
Couldn't it be that we came to regard such bloodsports as being 'immoral' for independent, non-utilitarian reasons*? And then once this new moral zeitgeist became prevalent, utilitarians could come along and say "Aha! Far from being 'fun', just look at how m...
* Sadly my knowledge of history is too meagre to venture an account of how this actually happened.
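The "feedback loop" described above can be made concrete with a toy fixed-point model (entirely my own construction, with a made-up update rule, not anything the commenter proposed): if disapproval of the games feeds the utility calculation, which in turn feeds disapproval, the dynamics can have two stable equilibria -- a "Rome" one and a modern one -- reached from different starting attitudes.

```python
def step(disapproval):
    # Hypothetical self-reinforcing update on [0, 1]: a smooth S-curve
    # with stable fixed points at 0 and 1 and an unstable one at 0.5.
    return 3 * disapproval**2 - 2 * disapproval**3

def settle(d0, iterations=50):
    """Iterate the update from an initial disapproval level until it settles."""
    d = d0
    for _ in range(iterations):
        d = step(d)
    return round(d, 6)

print(settle(0.2))  # starts tolerant  -> settles at 0.0 ("Rome" equilibrium)
print(settle(0.8))  # starts outraged  -> settles at 1.0 (modern equilibrium)
```

The point is only that such dynamics make both the existence and the non-existence of the games "stable" under the utilitarian calculus, depending on where the society starts.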
Well, we have Christianity to blame for the decline of gladiatorial games.
Incidentally, now that we know Christianity to be false and thus gladiatorial games were banned under false pretenses, does recursive consistency require us to re-examine whether they are a good idea?
I skimmed the FAQ (no time to read it in detail right now, though I have bookmarked it for later). I must say that it doesn't address some of the most crucial problems of consequentialism.
Most notably, as far as I can tell, you don't even address the problem of interpersonal utility comparison, which makes the whole enterprise moot from the beginning. Then, as far as I see, you give the game-theoretic concerns only a cursory passing mention, whereas in reality, the inability to account for those is one reason why attempts to derive useful guidelines for a...
Okay, summary of things I've learned I should change from this feedback:
Fix dead links (I think OpenOffice is muddling open quotes and close quotes again)
Table of contents/navigation.
Stress REALLY REALLY REALLY HARD that this is meant to be an introduction and that there's much more stuff like game theory and decision theory that is necessary for a full understanding of utilitarianism.
Change phlogiston metaphor subsequent to response from Vladimir_M
Remove reference to Eliezer in "warm fuzzies" section.
Rephrase "We have procedures
One small thing: you define consequentialism as choosing the best outcome. I think it makes a big difference, at least to our intuitions, if we instead say something like:
Okay. The moral law is that you should take actions that make the world better. Or, put more formally, when asked to select between possible actions A and B, the more moral choice is the one that leads to the better state of the world by whatever standards you judge states of the world by.
In other words, it's not all about the one point at the pinnacle of all the choices you could ma...
Some criticism that I hope you will find useful:
First of all, you should mention timeless decision theory, or at least superrationality. These concepts are useful for explaining why people's intuition that they should not steal is not horribly misguided even if the thief cares about himself more and/or would need the stolen goods more than the previous owner. You touched on this by pointing out that the economy would collapse if everyone stole all the time, but I would suggest being more explicit.
...(3.8) I think the best course of action would be to assign equal value
One problem with the FAQ: The standard metaethics around here, at least EY's metaethics, is not utilitarianism. Utilitarianism says maximize aggregate utility, with "aggregate" defined in some suitable way. EY's metaethics says maximize your own utility (with the caveat that you only have partial information of your utility function), and that all humans have sufficiently similar utility functions.
You get something pretty similar to utilitarianism from that last condition (if everyone has the same utility function and you're maximizing your own ...
It is literally impossible to maximise anything other than your own utility function,
No. People can be stupid. They can even be wrong about what their utility function is.
because your utility function is defined as 'that which you maximise'.
It is "that which you would maximise if you weren't a dumbass and knew what you wanted".
Some thoughts I had while reading (part of) the FAQ:
...our moral intuition...that we should care about other people.
Is it an established fact that this is a natural human intuition, or is it a culturally induced disposition?
If it is a natural human tendency, can we draw the line at caring about other people or do we also care about cute kittens?
Other moral systems are more concerned with looking good than being good...
Signaling is a natural human tendency. Just as humans care about other people, they also care how they appear to other people.
Why should a m...
On 3.4: The term "warm fuzzies" was invented before EY was born - I remember the story from high school.
With the power of Google, I found the story on the web; it is by Claude Steiner, copyrighted in 1969.
Not really philosophical feedback, but all the links except the one to the LW metaethics sequence seem to be broken for me.
If God made His rules arbitrarily, then there is no reason to follow them except for self-interest (which is hardly a moral motive)
A perfectly moral motive, in the sense you are using the term.
suppose some mathematician were to prove, using logic, that it was moral to wear green clothing on Saturday. There are no benefits to anyone for wearing green clothing on Saturday, and it won't hurt anyone if you don't. But the math apparently checks out. Do you shrug and start wearing green clothing? Or do you say “It looks like you have done some very strange mathematical trick, but it doesn't seem to have any relevance to real life and I feel no need to comply with it”?
Supposing a consequentialist were to prove using maths that you should be ...
This isn't so much a critique of consequentialism as of the attempt at creating objective moral systems in general. I would love for the world to follow a particular moral order (namely mine). But there are people who, for what I would see as being completely sane reasons, disagree with me. On the edges, I have no problem writing mass murderers off as being insane. Beyond that, though, in the murky middle, there are a number of moral issues (and how is that dividing line drawn? Is it moral to have anything above a subsistence-level meal if others are star...
Here is what David Pearce has to say about the FAQ (via Facebook):
...Lucid. But the FAQ (and lesswrong) would be much stronger if it weren't shot through with anthropocentric bias...
Suggestion: replace "people" with "sentient beings".
For the rational consequentialist, the interests of an adult pig are (other things being equal) as important as those of a human toddler. Sperm whales are probably more sentient than human toddlers; chickens probably less.
Ethnocentric bias now seems obvious. If the FAQ said "white people" throughout rather
As far as I can tell, she means “Better that everyone involved dies as long as you follow some arbitrary condition I just made up, than that most people live but the arbitrary condition is not satisfied.” Do you really want to make your moral decisions like this?
The whole problem is that everything is framed in moral terminology.
The trolley problem really isn't any different from the prisoner's dilemma with regard to human nature.
On one side there is the game theoretic pareto-suboptimal solution that a rational agent would choose and on the other side ...
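The structural parallel being drawn here can be spelled out with the standard prisoner's dilemma payoff table (textbook numbers, my choice of values): each player's individually rational choice leads to an outcome both would reject.

```python
payoff = {  # (row action, column action) -> (row payoff, column payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Defection strictly dominates cooperation for the row player:
assert payoff[("D", "C")][0] > payoff[("C", "C")][0]  # 5 > 3
assert payoff[("D", "D")][0] > payoff[("C", "D")][0]  # 1 > 0
# ...yet the dominant-strategy outcome (D, D) is Pareto-inferior to (C, C):
assert all(c > d for c, d in zip(payoff[("C", "C")], payoff[("D", "D")]))
print("defection dominates, but (C, C) is better for both than (D, D)")
```

On this reading, the intuition against pushing the fat man plays the same role as the intuition toward cooperation: it steers agents away from the individually "rational" but collectively worse equilibrium.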
Here's something that, if it's not a frequently asked question, ought to be one. What do you mean by "state of the world" - more specifically, evaluated at what time, and to what degree of precision?
Some utilitarians argue that we should give money to poorer people because it will help them much more than the same amount will help us. An obvious question is "how do you know"? You don't typically know the consequences that a given action will have over time, and there may be different consequences depending on what "state of the wor...
6.4) In the United States at least there are so many laws that it's not possible to live a normal life without breaking many of them.
See: http://www.harveysilverglate.com/Books/tabid/287/Default.aspx
7.1) you could come up with better horribly "seeming" outcomes that consequentialism would result in. For example, consequentialists who believe in heaven and hell and think they have insight into how to get people into heaven would be willing to do lots of nasty things to increase the number of people who go to heaven. Also, dangerous sweatsho...
As a philosophy student, complicated though the FAQ is, I think I could knock down the arguments therein fairly easily. That's kind of irrelevant, however.
More importantly, I'm looking for the serious argument for Consequentialism on Lesswrong. Could somebody help me out here?
Some observations... There is no discussion in your FAQ of the distinctions between utilitarianism and other forms of consequentialism, or between act consequentialism and rule consequentialism, or between "partial" and "full" rule consequentialism. See http://plato.stanford.edu/entries/consequentialism-rule/ for more on this.
Where you discuss using "heuristics" for moral decision making (rather than trying to calculate the consequences of each action), you are heading into the "partial rule consequentialism" camp. ...
Part One: Methodology: Why think that intuitions are reliable? What is reflective equilibrium, other than reflecting on our intuitions? If it is some process by which we balance first-order intuitions against general principles, why think this process is reliable? Metaethics: Realism vs. Error vs. Expressivism?
Part Two: 2.6 I don't see the collapse - an axiology may be paired with different moralities - e.g. a satisficing morality, or a maximizing morality. Maybe all that is meant by the collapse is that the right is a function of the good? Then 'col...
I like the idea of a non-technical explanation of consequentialism, but I worry that many important distinctions will be conflated or lost in the process of generating something that reads well and doesn't require the reader to spend a lot of time thinking about the subject by themselves before it makes sense.
The issue that stands out the most to me is what you write about axiology. The point you seem to want to get across, which is what I would consider to be the core of consequentialism in one sentence, is that "[...]our idea of “the good” sho...
The basic thesis is that consequentialism is the only system which satisfies both our moral intuition that morality should make a difference to the real world and our intuition that we should care about other people.
Why am I supposed to adopt beliefs about ethics and decision theory based on how closely they match my intuitions, but I am not supposed to adopt beliefs about quantum physics or cognitive psychology based on how closely they match my intuitions? What warrants this asymmetry?
Also, your FAQ lacks an answer to the most important question regarding utilitarianism: By what method could we aggregate the utilities of different persons?
Fairly good summary. I don't mind the FAQ structure. The writing style is good, and the subject matter suggests obvious potential to contribute to the upcoming Wiki Felicifia in some way. Now, as good as the essay is, I have some specific feedback:
In section 2.2, I wonder if you could put your point more strongly...
you wrote: if morality is just some kind of metaphysical rule, the magic powers of the Heartstone should be sufficient to cancel that rule and make morality irrelevant. But the Heartstone, for all its legendary powers, is utterly worthless and...
On this site it's probably just me, but I just can't (or won't) bring myself to assign a non-zero value to people I do not even know. Since the FAQ rests on the assumption that this is not the case, it's practically worthless to me. This could be as it should be, if people like me just aren't your target audience, in which case it would be helpful to have such a statement in the 3.1 answer.
Edit: Thinking about it, this is probably not the case. If all of the people except those I know were to die, I might feel sad about it. So my valuation of them might actually be non-zero.
OT: The format seemed familiar and then I looked back and found out it was because I had read your libertarian FAQ. Best anti-libertarian FAQ I've seen!
Or, put more formally, when asked to select between several possible actions A, B and C, the most moral choice is the one that leads to the best state of the world by whatever standards you judge states of the world by.
This is the key definition, yet it doesn't seem to actually say anything. Moral choice = the choice that makes you happy. This is a rejection of ethics, not an ethical system. If it were an ethical system, it would be called individual consequentialism, that is, "Forget all this ethics tripe."
Yet after that pretense of doing away with all ethics...
I'm looking forward to live discussion of this topic at the Paris meetup. :)
Meanwhile, I've read through it, more closely. Much of it seems, not necessarily right, but at least unobjectionable - it raises few red flags. On the other hand, I don't think it makes me much the wiser about the advantages of consequentialism.
Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The "test" you propose in both cases is more or less the same as Rawls' Veil of Ignorance. So at that point I was wondering, if you apply Rawls' procedure to determine what is a preferable social contract, perhaps you're a Rawlsian more than you're a consequentialist. :) BTW, are you familiar with Rawls' objections to (classical) utilitarianism?
Para 8.2 comes across as terribly naive, and "politics has been reduced to math" in particular seems almost designed to cause people to dismiss you. (A nitpick: the links at the end of 8.2 are broken.)
One thing that makes the essay confusing for me is the absence of a clear distinction between the questions "how do I decide what to do next" and "what makes for a desirable set of agreements among a large number of people" - between evaluating the morality of individual actions and choosing a social contract.
Another thing that's left out is the issue of comparing or aggregating happiness, or "utility", across different people. The one place where you touch on it, your response to the "utility monster" argument, does not match my own understanding of how a "utility monster" might be a problem. As I understood it, a "utility monster" isn't someone who is to you as you are to an ant, but someone just like you. They just happen to insist that an ice cream makes them a thousand times happier than it makes you, so in all cases where it must be decided which of you should get an ice cream, they should always get it.
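A toy model of this version of the monster (my own sketch, with invented numbers): the monster is an ordinary person who merely *claims* a huge happiness multiplier, and naive aggregate-utility maximization then awards them every contested good.

```python
def allocate(claimed_multipliers):
    """Naive utilitarian allocator: give the ice cream to whoever reports
    the largest utility gain, taking self-reports at face value."""
    return max(claimed_multipliers, key=claimed_multipliers.get)

# The "monster" is just like you, but reports a 1000x multiplier.
claims = {"you": 1.0, "monster": 1000.0}
print(allocate(claims))  # the monster wins every single allocation
```

Since self-reports are unverifiable, nothing in the naive calculus stops the monster from winning forever -- which is why the aggregation question matters here.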
Your analogy with optical illusions is apt, and it gives a good guideline for evaluating a proposed system of morality: in what cases does the proposed system lead me to change my mind on something that I previously did or avoided doing because of a moral judgement?
Interestingly, though, you give more examples that have to do with the social contract (gun control, birth control, organ donation policy, public funding of art, discrimination, slavery, etc.) than you give examples that have to do with personal decisions (giving to charities, trolley problems).
My own positions are contractarian, much more than they are deontological or consequentialist. I'm generally truthful, not because it is "wrong" to lie or because I have a rule against it (for instance I'm OK with lying in the context of a game, say Diplomacy, where the usual social conventions are known to be suspended - though I'd be careful about hurting others' feelings through my play even in such a context). I don't trust myself to compute the complete consequences of lying vs. not lying in each case, and so a literal consequentialism isn't an option for me.
However, I would prefer to live in a world where people can be relied upon to tell the truth, and for that I am willing to sacrifice the dubious advantage of being able to pull a fast one on other people from time to time. It is "wrong" to lie in the sense that if you didn't know ahead of time what particular position you'd end up occupying in the world (e.g. a politician with power) but only knew some general facts about the world, you would find a contract that banned lying acceptable, and would be willing to let this contract sanction lying with penalties. (At the same time, and for the same reason, I also put some value on privacy: being able to lie by omission about some things.)
I find XiXiDu's remarks interesting. It seems to me that at present something like "might makes right" is descriptively true of us humans: we could describe a morality only in terms of agreements and generally reliable penalties for violating these agreements. "If you injure others, you can expect to be put in prison, because that's the way society is currently set up; so if you're rational, you'll curb your desires to hurt others because your expected utility for doing so is negative".
However this sort of description doesn't help in finding out what the social contract "should" be - it doesn't help us find what agreements we currently have that are wrong because they result from the moral equivalent of optical illusions "fooling us" into believing something that isn't the case.
It also doesn't help us in imagining what the social contract could be if we weren't the sort of beings we are: if the agreements we enter were binding for reasons other than fear of penalties. This is a current limitation of our cognitive architectures but not a necessary one.
(I find this a very exciting question, and at present the only place I've seen where it can even be discussed is LW: what kind of moral philosophy would apply to beings who can "change their own source code".)
EDIT: having read Vladimir_M's reply below, his comments capture much of what I wanted to say, only better.
"Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The "test" you propose in both cases is more or less the same as Rawls' Veil of Ignorance. So at that point I was wondering, if you apply Rawls' procedure to determine what is a preferable social contract, perhaps you're a Rawlsian more than you're a consequentialist. :) BTW, are you familiar with Rawls' objections to (classical) utilitarianism?"
I can't speak for Yvain but as someone who fully agreed with his use of that test, I would describe mysel...
There are a lot of explanations of consequentialism and utilitarianism out there, but not a lot of persuasive essays trying to convert people. I would like to fill that gap with a pro-consequentialist FAQ. The target audience is people who are intelligent but may not have a strong philosophy background or have thought about this matter too much before (ie it's not intended to solve every single problem or be up to the usual standards of discussion on LW).
I have a draft up at http://www.raikoth.net/consequentialism.html (yes, I have since realized the background is horrible, and changing it is on my list of things to do). Feedback would be appreciated, especially from non-consequentialists and non-philosophers since they're the target audience.