mattnewport comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong

Post author: Wei_Dai 15 January 2010 12:26AM

Comments (104)

Comment author: mattnewport 15 January 2010 01:59:52AM 7 points

Given that most attempts at thinking through the consequences of utilitarian ethics resemble a proof by contradiction that utilitarianism cannot be a good basis for ethics, it surprises me how many people continue to embrace it and try to fix it.

Comment author: magfrump 15 January 2010 05:59:00PM 1 point

Can you provide a link to an academic paper or blog post that discusses this in more depth?

Comment author: Jack 16 January 2010 06:40:00AM * 1 point

The kind of thought experiments (I think) Matt is referring to are so basic I don't know of any papers that go into them in depth. They get discussed in intro-level ethics courses. For example: a white woman is raped and murdered in the segregation-era Deep South. Witnesses say the culprit was black. Tensions are high and there is a high likelihood that race riots will break out and whites will just start killing blacks. Hundreds will die unless the culprit is found and convicted quickly. There are no leads, but as police chief/attorney/governor you can frame an innocent man to charge and convict quickly. Both sum and average utilitarianism suggest you should.

Same goes for pushing fat people in front of runaway trolleys and carving up homeless people for their organs.

Utilitarianism means biting all these bullets or else accepting these as proofs by reductio.

Edit: Or structuring/defining utilitarianism in a way that avoids these issues. But it is harder than it looks.

Comment author: ciphergoth 16 January 2010 11:23:38AM 3 points

Or seeing the larger consequences of any of these courses of action.

(Well, except for pushing the fat man in front of the trolley, which I largely favour.)

Comment author: Jack 16 January 2010 06:09:08PM 1 point

I'm comfortable positing things about these scenarios such that there are no larger consequences of these courses of action: no one finds out, no norms are set, etc.

I do suspect an unusually high number of people here will want to bite the bullet. (Interesting side effect of making philosophical thought experiments hilarious: it can be hard to tell if someone is kidding about them.) But it seems well worth keeping in mind that the vast majority would find a world governed by the typical forms of utilitarianism to be highly immoral.

Comment author: ciphergoth 16 January 2010 07:24:31PM 5 points

These are not realistic scenarios as painted. In order to be able to actually imagine what really might be the right thing to do if a scenario fitting these very alien conditions arose, you'll have to paint a lot more of the picture, and it might leave our intuitions about what was right in that scenario looking very different.

Comment author: Jack 18 January 2010 02:53:54AM 1 point

They're not realistic because they're designed to isolate the relevant intuitions from the noise. Being suspicious of our intuitions about fictional scenarios is fine, but I don't think that lets you get away without updating. These scenarios are easy to generate and have several features in common. I don't expect anyone to give up their utilitarianism on the basis of the above comment, but a little more skepticism would be good.

Comment author: ciphergoth 18 January 2010 08:30:20AM * 4 points

I'm happy to accept whatever trolley problem you care to suggest. Those are artificial, but there's no conceptual problem with setting them up in today's world - you just put the actors and rails and levers in the right places and you're set. But to set up a situation where hundreds will die in this possible riot, and yet it is certain that no one will find out and no norms will be set if you frame the guy - that's just no longer a problem set in a world anything like our world, and I'd need to know a lot more about this weird proposed world before I was prepared to say what the right thing to do in it might be.

Comment author: magfrump 17 January 2010 05:00:28AM 0 points

To the extent that I have been exposed to these types of situations, it seems that the contradictions stem from contrived circumstances. I've also never had a simple and consistent deontological system laid out for me that didn't suffer the same flaws.

So I guess what I'm really getting at is that I see utilitarianism as a good heuristic for matching up circumstances with judgments that "feel right" and I'm curious if/why OP thinks the heuristic is bad.

Comment author: Jack 17 January 2010 06:11:30PM 0 points

To the extent that I have been exposed to these types of situations, it seems that the contradictions stem from contrived circumstances.

Not sure what this means.

I've also never had a simple and consistent deontological system lined out for me that didn't suffer the same flaws.

Nor have I. My guess is that simple and consistent is too much to ask of any moral theory.

So I guess what I'm really getting at is that I see utilitarianism as a good heuristic for matching up circumstances with judgments that "feel right" and I'm curious if/why OP thinks the heuristic is bad.

It is definitely a nice heuristic. I don't know what OP thinks but a lot of people here take it to be the answer, instead of just a heuristic. That may be the target of the objection.

Comment author: magfrump 18 January 2010 07:15:00PM 0 points

"Exposed to these situations" means that when someone asks about utilitarianism they say, "if there was a fat man in front of a train filled with single parents and you could push him out of the way or let the train run off a cliff, what would you do?" To which my reply is, "When does that ever happen, and how does answering that question help me be more ethical?"

Digression: if a decision-theoretic model were translated into a set of axiomatic behaviors, could you potentially apply Gödel's Incompleteness Theorem to prove that simple and consistent is in fact too much to ask?

Comment author: orthonormal 18 January 2010 08:18:19PM * 7 points

Please don't throw around Gödel's Theorem before you've really understood it— that's one thing that makes people look like cranks!

"When does that ever happen and how does answering that question help me be more ethical?"

Very rarely; but pondering such hypotheticals has helped me to see what some of my actual moral intuitions are, once they are stripped of rationalizations (and chances to dodge the question). From that point on, I can reflect on them more effectively.

Comment author: magfrump 19 January 2010 04:33:58PM 1 point

Sorry to sound crankish. Rather than "simple and consistent" I might have said that there were contrived and thus unanswerable questions. Regardless, it distracted and I shouldn't have digressed at all.

Anyway thank you for the good answer concerning hypotheticals.

Comment author: Jack 19 January 2010 02:38:56AM 7 points

"Exposed to these situations" means that when someone asks about utilitarianism they say, "if there was a fat man in front of a train filled with single parents and you could push him out of the way or let the train run off a cliff, what would you do?" To which my reply is, "When does that ever happen, and how does answering that question help me be more ethical?"

These thought experiments aren't supposed to make you more ethical; they're supposed to help us understand our morality. If you think there are regularities in ethics (general rules that apply to multiple situations), then it helps to concoct scenarios to see how those rules function. Often they're contrived because they are experiments, set up to see how the introduction of a moral principle affects our intuitions. In natural science, experimental conditions usually have to be concocted as well. You don't usually find two population groups for whom everything is the same except for one variable, for example.

Digression: if a decision-theoretic model were translated into a set of axiomatic behaviors, could you potentially apply Gödel's Incompleteness Theorem to prove that simple and consistent is in fact too much to ask?

Agree with orthonormal. Not sure what this would mean. I don't think Gödel even does that for arithmetic: arithmetic is simple (though not trivial) and consistent, it just isn't complete. I have no idea if ethics could be a complete axiomatic system; I haven't done much on completeness beyond predicate calculus, and Gödel is still a little over my head.

I just mean that any simple set of principles will have to be applied inconsistently to match our intuitions. This entry on moral particularism is relevant.

Comment author: magfrump 19 January 2010 04:37:34PM 0 points

I didn't use "consistency" very rigorously here; I more meant that even if a principle matched our intuitions there would be unanswerable questions.

Regardless, good answer. The link seems to be broken for me, though.

Comment author: Jack 19 January 2010 06:19:04PM 0 points

Link is working fine for me. It is also the first Google result for "moral particularism", so you can get there that way.

Comment author: magfrump 20 January 2010 01:25:45AM 0 points

Tried that and it gave me the same broken site. It works now.

Comment author: Nick_Tarleton 19 January 2010 03:01:32AM 0 points

Why on Earth was this downvoted?

Comment author: Nick_Tarleton 15 January 2010 03:33:59AM * 0 points

By "utilitarianism" do you mean any system maximizing expected utility over outcomes, or the subset of such systems that sum/average across persons?

Comment author: mattnewport 15 January 2010 05:48:49AM 2 points

The latter; I don't think it makes much sense to call the former an ethical system - it's just a description of how to make optimal decisions.

Comment author: timtyler 15 January 2010 06:30:32PM 0 points

This post does have "preference utilitarianism" in its title.

http://en.wikipedia.org/wiki/Preference_utilitarianism

Comment author: mattnewport 15 January 2010 07:01:04PM 2 points

As far as I can tell from the minimal information in that link, preference utilitarianism still involves summing/averaging/weighting utility across all persons. The 'preference' part of 'preference utilitarianism' refers to the fact that it is people's preferences that determine their individual utility, but the 'utilitarianism' part still implies summing/averaging/weighting across persons. The link mentions Peter Singer as the leading contemporary advocate of preference utilitarianism, and as I understand it he is still a utilitarian in that sense.

'Maximizing expected utility over outcomes' is just a description of how to make optimal decisions given a utility function. It is agnostic about what that utility function should be. Utilitarianism as a moral/ethical philosophy generally seems to advocate a choice of utility function that uses a unique weighting across all individuals as the definition of what is morally/ethically 'right'.
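The distinction drawn here can be sketched in code. A minimal, hypothetical illustration (the outcome names and utility numbers below are invented for the sake of the example, not taken from the thread): `best_outcome` is the generic "maximize expected utility" decision rule, which is agnostic about where its utility function comes from, while sum and average utilitarianism are two particular choices of that function, each built by aggregating welfare across all persons.

```python
def total_utility(per_person):
    # Sum utilitarianism: aggregate welfare is the sum over persons.
    return sum(per_person)

def average_utility(per_person):
    # Average utilitarianism: aggregate welfare is the mean over persons.
    return sum(per_person) / len(per_person)

def best_outcome(outcomes, utility_fn):
    # Generic optimal choice given *some* utility function over outcomes;
    # nothing here says the function must aggregate across persons.
    return max(outcomes, key=lambda name: utility_fn(outcomes[name]))

# Hypothetical per-person utilities for two outcomes in Jack's scenario:
outcomes = {
    "frame_innocent_man": [-100, 10, 10],  # one person badly harmed
    "let_riot_happen":    [-5, -60, -60],  # many people harmed
}

print(best_outcome(outcomes, total_utility))    # frame_innocent_man
print(best_outcome(outcomes, average_utility))  # frame_innocent_man
```

Note that an egoist's utility function (one that reads off only a single person's entry) would plug into `best_outcome` equally well, which is the sense in which expected-utility maximization alone doesn't single out any particular ethics.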

Comment author: timtyler 15 January 2010 10:41:10PM 1 point

You could be right. I can't see mention of "averaging" or "summing" in the definitions (and it matters!) - and if any sum is to be performed, it is vague about what class of entities is being summed over. However, as you say, Singer is a "sum" enthusiast. How you can measure "satisfaction" in a way that can be added up over multiple people is left as a mystery for readers.

I wouldn't assert the second paragraph, though. Satisfying preferences is still a moral philosophy - regardless of whether those preferences belong to an individual agent, or whether preference satisfaction is summed over a group.

Both concepts equally allow for agents with arbitrary preferences.

Comment author: mattnewport 15 January 2010 11:08:32PM 0 points

The main Wikipedia entry for Utilitarianism says:

Utilitarianism is the idea that the moral worth of an action is determined solely by its utility in providing happiness or pleasure as summed among all people. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its outcome.

Utilitarianism is often described by the phrase "the greatest good for the greatest number of people", and is also known as "the greatest happiness principle". Utility, the good to be maximized, has been defined by various thinkers as happiness or pleasure (versus suffering or pain), although preference utilitarians define it as the satisfaction of preferences.

Where 'preference utilitarians' links back to the short page on preference utilitarianism you referenced. That combined with the description of Peter Singer as the most prominent advocate for preference utilitarianism suggests weighted summing or averaging, though I'm not clear whether there is some specific procedure associated with 'preference utilitarianism'.

Merely satisfying your own preferences is a moral philosophy, but it's not utilitarianism - ethical egoism, maybe, or just hedonism. What appears to distinguish utilitarian ethics is that they propose a unique utility function that globally defines what is moral/ethical for all agents.

Comment author: timtyler 15 January 2010 06:27:52PM 0 points

It seems like a historical tragedy that a perfectly sensible word was ever given the second esoteric meaning.