sark comments on Consequentialism FAQ - Less Wrong

Post author: Yvain 26 April 2011 01:45AM


Comment author: Vladimir_M 27 April 2011 01:03:19AM *  40 points

OK, I've read the whole FAQ. Clearly, a really detailed critique would have to be given at similar length. Therefore, here is just a sketch of the problems I see with your exposition.

For a start, you use several invalid examples, or at least controversial examples that you incorrectly present as clear-cut. For example, the phlogiston theory was nothing like the silly strawman you present. It was a falsifiable scientific theory that was abandoned because it was eventually falsified (when it was discovered that burning stuff adds mass due to oxidation, rather than losing mass due to escaped phlogiston). It was certainly a reductionist theory -- it attempted to reduce fire (which itself has different manifestations) and the human and animal metabolism to the same underlying physical process. (Google "Becher-Stahl theory".) Or, at another place, you present the issue of "opposing condoms" as a clear-cut case of "a horrendous decision" from a consequentialist perspective -- although in reality the question is far less clear.

Otherwise, up to Section 4, your argumentation is passable. But then it goes completely off the rails. I'll list just a few main issues:

  • In the discussion of the trolley problem, you present a miserable caricature of the "don't push" arguments. The real reason why pushing the fat man is problematic requires delving into a broader game-theoretic analysis that establishes the Schelling points that hold in interactions between people, including those gravest ones that define unprovoked deadly assault. The reason why any sort of organized society is possible is that you can trust that other people will always respect these Schelling points without regard to any cost-benefit calculations, except perhaps when the alternative to violating them is by orders of magnitude more awful than in the trolley examples. (I have compressed an essay's worth of arguments into a few sentences, but I hope the main point is clear.)

  • In Section 5, you don't even mention the key problem of how utilities are supposed to be compared and aggregated interpersonally. If you cannot address this issue convincingly, the whole edifice crumbles.

  • In Section 6, at first it seems like you get the important point that even if we agree on some aggregate welfare maximization, we have no hope of getting any practical guidelines for action beyond quasi-deontologist heuristics. But then you boldly declare that "we do have procedures in place for breaking the heuristic when we need to." No, we don't. You may think we have them, but what we actually have are either somewhat more finely tuned heuristics that aren't captured by simple first-order formulations (which is good), or rationalizations and other nonsensical arguments couched in terms of a plausible-sounding consequentialist analysis (which is often a recipe for disaster). The law of unintended consequences often bites even in seemingly clear-cut "what could possibly go wrong?" situations.

  • Along similar lines, you note that in any conflict all parties are quick to point out that their natural rights are at stake. Well, guess what. If they just have smart enough advocates, they can also all come up with different consequentialist analyses whose implications favor their interests. Different ways of interpersonal utility comparison are often themselves enough to tilt the scales as you like. Further, these analyses will all by necessity be based on spherical-cow models of the real world, which you can usually engineer to get pretty much any implication you like.

  • Section 7 is rather incoherent. You jump from one case study to another arguing that even when it seems like consequentialism might imply something revolting, that's not really so. Well, if you're ready to bite awful consequentialist bullets like Robin Hanson does, then be explicit about it. Otherwise, clarify where exactly you draw the lines.

  • Since we're already at biting bullets, your FAQ fails to address another crucial issue: it is normal for humans to value the welfare of some people more than others. You clearly value your own welfare and the welfare of your family and friends more than strangers (and even for strangers there are normally multiple circles of diminishing caring). How to reconcile this with global maximization of aggregate utility? Or do you bite the bullet that it's immoral to care about one's own family and friends more than strangers?

  • Question 7.6 is the only one where you give even a passing nod to game-theoretical issues. Considering their fundamental importance in the human social order and all human interactions, and their complex and often counter-intuitive nature, this fact by itself means that most of your discussion is likely to be remote from reality. This is another aspect of the law of unintended consequences that you nonchalantly ignore.

  • Finally, your idea that it is possible to employ economists and statisticians and get accurate and objective consequentialist analysis to guide public policy is altogether utopian. If such things were possible, economic central planning would be a path to prosperity, not the disaster that it is. (That particular consequentialist folly was finally abandoned in the mainstream after it had produced utter disaster in a sizable part of the world, but many currently fashionable ideas about "scientific" management of government and society suffer from similar delusions.)
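The point about interpersonal utility comparison tilting the scales can be made concrete with a toy calculation (entirely my own illustration, not anything from the FAQ; the agents, numbers, and normalization schemes are hypothetical). Two agents report utilities for two policies on their own private scales, and which policy "maximizes aggregate utility" flips depending on an arbitrary choice of how to make those scales commensurable:

```python
# Two agents report utilities for policies A and B on their own scales.
# Which policy wins the "aggregate" depends entirely on how the scales
# are made comparable -- a choice the theory itself does not fix.

raw = {
    "agent1": {"A": 10.0, "B": 30.0},   # agent1 prefers B
    "agent2": {"A": 0.9,  "B": 0.1},    # agent2 prefers A
}

def aggregate(utilities):
    return {p: sum(u[p] for u in utilities.values()) for p in ("A", "B")}

# Scheme 1: take the reported numbers at face value.
face_value = aggregate(raw)  # B wins decisively: 30.1 vs 10.9

# Scheme 2: zero-one normalization -- rescale each agent's utilities
# so their worst outcome is 0 and their best is 1.
def zero_one(u):
    lo, hi = min(u.values()), max(u.values())
    return {p: (v - lo) / (hi - lo) for p, v in u.items()}

normalized = aggregate({a: zero_one(u) for a, u in raw.items()})
# Now each agent contributes exactly 1.0 to their preferred policy: a dead
# tie, and any perturbation of the weights flips the result either way.
```

Neither scheme is "the" correct one; a sufficiently motivated advocate simply picks whichever comparison favors their side.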

Comment author: sark 29 April 2011 05:05:47PM 0 points

I have compressed an essay's worth of arguments into a few sentences, but I hope the main point is clear.

I unfortunately don't get the main point :(

Could you elaborate on or at least provide a reference for how a consideration of Schelling points would suggest that we shouldn't push the fat man?

Comment author: Vladimir_M 29 April 2011 09:06:40PM *  16 points

This essay by David Friedman is probably the best treatment of the subject of Schelling points in human relations:
http://www.daviddfriedman.com/Academic/Property/Property.html

Applying these insights to the fat man/trolley problem, we see that the horrible thing about pushing the man is that it transgresses the gravest and most terrible Schelling point of all: the one that defines unprovoked deadly assault, whose violation is understood to give the other party the licence to kill the violator in self-defense. Normally, humans see such crucial Schelling points as sacrosanct. They are considered violable, if at all, only if the consequentialist scales are loaded to a far more extreme degree than in the common trolley problem formulations. Even then, the act will likely cause the actor serious psychological damage -- probably an artifact of an additional layer of commitment not to violate these points, which may also serve as a safeguard against rationalizations.

Now, the utilitarian may reply that this is just human bias, an unfortunate artifact of evolutionary psychology, and we’d all be better off if people instead made decisions according to pure utilitarian calculus. However, even ignoring all the other fatal problems of utilitarianism, this view is utterly myopic. Humans are able to coordinate and cooperate because we pay respect to the Schelling points (almost) no matter what, and we can trust that others will also do so. If this were not so, you would have to be constantly alert that anyone might rob, kill, cheat, or injure you at any moment because their cost-benefit calculations have implied doing so, even if these calculations were in terms of the most idealistic altruistic utilitarianism. No organized society could exist in that case: even if, with unlimited computational power and perfect strategic insight, you could compute that cooperation is viable, acting on such calculations would be wholly impractical.

It is however possible in practice for humans to evaluate each other’s personalities and figure out whether others’ decision algorithms, so to speak, follow these constraints. Think of how people react when they realize that someone has a criminal history or sociopathic tendencies. This person is immediately perceived as creepy and dangerous, and with good reason: people realize that his decision algorithm lacks respect for the conventional Schelling points, so that normal trust and relaxed cooperation with him is impossible, and one must be on the lookout for nasty surprises. Similarly, imagine meeting someone who was in the fat man/trolley situation and who mechanically made the utilitarian decision and pushed the man without a twitch of guilt. Even the most zealous utilitarian will in practice be creeped out by such a person, even though he should theoretically perceive him as an admirable hero. (As always when it comes to ideology, people may be big on words but usually know better when their own welfare is at stake.)

(This comment is also cursory and simplified, and an alert reader will likely catch multiple imprecisions and oversimplifications. This is unfortunately unavoidable because of the complexity of the topic. However, the main point stands regardless. In particular, I haven’t addressed the all too common cases where cooperation between people breaks down and all sorts of conflict ensue. But this analysis would just reinforce the main point that cooperation critically depends on mutual recognition of near-unconditional respect for Schelling points.)

Comment author: sark 29 April 2011 11:42:39PM 1 point

Thanks! That makes sense.

Comment author: utilitymonster 30 April 2011 08:51:42AM 0 points

Can you explain why this analysis renders directing away from the five and toward the one permissible?

Comment author: Vladimir_M 01 May 2011 12:47:55AM *  10 points

The switch example is more difficult to analyze in terms of the intuitions it evokes. I would guess that the principle of double effect captures an important aspect of what's going on, though I'm not sure how exactly. I don't claim to have anything close to a complete theory of human moral intuitions.

In any case, the fact that someone who flipped the switch appears much less (if at all) bad compared to someone who pushed the fat man does suggest strongly that there is some important game-theoretic issue involved, or otherwise we probably wouldn't have evolved such an intuition (either culturally or genetically). In my view, this should be the starting point for studying these problems, with humble recognition that we are still largely ignorant about how humans actually manage to cooperate and coordinate their actions, instead of naive scoffing at how supposedly innumerate and inconsistent our intuitions are.