Yvain comments on My main problem with utilitarianism - Less Wrong

Post author: taw 17 April 2009 08:26PM, -2 points


Comment author: Yvain 17 April 2009 09:24:47PM, 3 points
  1. It seems that it is possible to compare the happiness of two different people; e.g., I can say that giving the cake to Mary would give her twice as much happiness as it would give Fred. I think that's all you need to counter your first objection. You'd need something much more formal if you were actually trying to calculate it out rather than use it as a principle, but as far as I know no one does this.

  2. This is a big problem. I personally solve it by not using utilitarianism in situations that create or remove people. This is an inelegant hack, but it works.

  3. This is why I said I am a descriptive emotivist but a normative utilitarian. The fact that people don't act in accordance with a system doesn't mean the system isn't moral. I'd be pretty dubious of any moral system that said people were actually doing everything right.

  4. Yeah, tell me about it. Right now I'm thinking that a perfectly rational person has no intrinsic discount rate, but ends up with a very hefty effective discount because she can't make future plans with much reliability. For example, investing all my money now and donating the sum plus interest to charity in a thousand years only works if I'm sure both the banking system and human suffering will last a millennium.
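The thousand-year-investment worry in item 4 can be sketched numerically. This is a toy model with entirely invented numbers: it assumes the plan fails outright with some fixed probability each year, which is not a claim from the comment, just an illustration of how uncertainty acts like a discount rate.

```python
# Toy sketch: even with no pure time preference, uncertainty about whether
# the banking system (and the cause) survives acts like an effective
# discount on "invest now, donate in a millennium" plans.
# All rates and probabilities below are made up for illustration.

def expected_future_value(principal, annual_rate, years, annual_survival_prob):
    """Expected value of an invested donation delivered after `years`,
    where the whole plan survives each year with some probability."""
    growth = (1 + annual_rate) ** years
    survival = annual_survival_prob ** years
    return principal * growth * survival

donate_now = 1000.0
# 5% interest and certain survival: waiting dominates enormously.
certain = expected_future_value(1000.0, 0.05, 1000, 1.0)
# 5% interest but a 6%-per-year chance the plan fails: the expected
# payout collapses to nearly nothing over a millennium.
risky = expected_future_value(1000.0, 0.05, 1000, 0.94)
```

Once the per-year failure rate exceeds the interest rate's edge, the expected value of waiting shrinks rather than grows, which is the "hefty discount" a rational agent ends up with anyway.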

"Utilons don't make people happier" is a weird way of putting things. It sounds to me a lot like "meters don't make something longer." If you're adding meters to something, and it's not getting longer, you're using the word "meter" wrong.

I don't know much about academic consequentialism, but I'd be surprised if someone hadn't come up with the idea of the utilon-second, i.e., adding a time dimension and trying to maximize utilon-seconds. If giving someone a new car only makes them happier for the first few weeks, then that only provides so many utilon-seconds. If getting married makes you happier for the rest of your life, well, that provides many more utilon-seconds. If you want to know whether you should invest your effort in getting people more cars or getting them into relationships, you'll want to take that into account.
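The car-versus-marriage comparison amounts to integrating a happiness boost over time rather than taking its peak value. A minimal sketch, with all units, decay rates, and magnitudes invented for illustration (the comment gives no numbers):

```python
import math

def integrated_utilons(peak_boost, half_life_days, horizon_days):
    """Total time-integrated utilons (in utilon-days) from a boost that
    decays exponentially with the given half-life."""
    decay = math.log(2) / half_life_days  # per-day decay constant
    # Closed-form integral of peak_boost * exp(-decay * t) from 0 to horizon.
    return peak_boost / decay * (1 - math.exp(-decay * horizon_days))

# A new car: a big spike that fades within weeks.
car = integrated_utilons(peak_boost=10.0, half_life_days=21,
                         horizon_days=365 * 40)
# A good relationship: a smaller boost that persists for decades.
marriage = integrated_utilons(peak_boost=2.0, half_life_days=365 * 20,
                              horizon_days=365 * 40)
```

Over a forty-year horizon the smaller-but-durable boost accumulates far more integrated utilons than the larger transient one, which is the point of adding the time dimension.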

Probably an intelligent theory of utilon-seconds would end up looking completely different from modern consumer culture, and probably anyone who applied it would be much, much happier than a modern consumer. If people can't calculate what does and doesn't provide them with utilon-seconds, they either need to learn to do so, ask someone who has learned to help manage their life, or resign themselves to being less than maximally happy.

I have a feeling this is very different from the way economists think about utility, but that's not necessarily a bad thing.

Comment author: steven0461 17 April 2009 09:38:02PM, 2 points

This is confusing the issue. Utility, which is an abstract thing measuring preference satisfaction, is not the same thing as happiness, which is a psychological state.

Comment author: mattnewport 17 April 2009 09:46:03PM, 3 points

It's a pretty universal confusion. Many people, when asked what they want out of life, will say something like 'to be happy'. I suspect that they do not exactly mean 'to be permanently in the psychological state we call happiness', though, but something more like 'to satisfy my preferences, which includes, but is not identical with, being in the psychological state of happiness more often than not'. I actually think a lot of ethics gets itself tied up in knots because we don't really understand what we mean when we say we want to be happy.

Comment author: Eliezer_Yudkowsky 17 April 2009 10:01:05PM, 0 points

True, but even so, thinking about utilon-seconds probably does steer your thoughts in a different direction from thinking about utility.

Comment author: steven0461 17 April 2009 10:04:36PM, 2 points

So let's call them hedon-seconds instead.

Comment author: Yvain 17 April 2009 10:29:34PM, 1 point

The terminology here is kind of catching me between a rock and a hard place.

My entire point is that the "utility" of "utilitarianism" might need more complexity than the "utility" of economics, because if someone thinks they prefer a new toaster but they actually wouldn't be any happier with it, I don't place any importance on getting them a new toaster. I'm not an economist, but as far as I know, economists' utility either would get them the new toaster or doesn't really consider this problem.

...but I also am afraid of straight out saying "Happiness!", because if you do that you're vulnerable to wireheading. Especially with a word like "hedon" which sounds like "hedonism", which is very different from the "happiness" I want to talk about.

CEV might help here, but I do need to think about it more.

Comment author: Matt_Simpson 18 April 2009 05:03:35AM, 1 point

My entire point is that the "utility" of "utilitarianism" might need more complexity than the "utility" of economics, because if someone thinks they prefer a new toaster but they actually wouldn't be any happier with it, I don't place any importance on getting them a new toaster. I'm not an economist, but as far as I know, economists' utility either would get them the new toaster or doesn't really consider this problem.

Agreed. For clarity: the economist's utility is just preference sets, but these aren't stable. Morality's utility is what those preference sets would look like if they reflected what we would actually value, taking everything into account; i.e., Eliezer's big computation. Utilitarianism's utility, in the sense in which Eliezer is a utilitarian, is the set of terms in our implied utility function (i.e., the big computation) that refer to the utility functions of other agents.

Using "utility" to refer to all of these things is confusing. I choose to call economist's utility functions preference sets, for clarity. And, thus, economic actors maximize preferences, but not necessarily utility. Perhaps utilitarianism's utility - the terms in our utility function for the values of other people - can be called altruistic utility, again, for clarity.

ETA: and I use "happiness" to refer to a psychological state, a feeling. Happiness, then, is nice, but I don't want to be happy unless it's appropriate to be happy. Your mileage may vary with this terminology, but it helps me keep things straight.

Comment author: Nick_Tarleton 17 April 2009 11:21:33PM, 1 point

the "utility" of "utilitarianism" might need more complexity than the "utility" of economics

My rough impression is that "utilitarianism" is generally taken to mean either hedonistic or preference utilitarianism, but nothing else, and that we should be saying "consequentialism".

CEV might help here, but I do need to think about it more.

I think the "big computation" perspective in The Meaning of Right is sufficient.

Or if you're just looking for a term to use instead of "utility" or "happiness", how about "goodness" or "the good"? (Edit: "value", as steven suggests, is better.)

Comment author: steven0461 17 April 2009 11:25:42PM, 0 points

My rough impression is that "utilitarianism" is generally taken to mean either hedonistic or preference utilitarianism, but nothing else, and that we should be saying "consequentialism".

My impression is that it doesn't need to be pleasure or preference satisfaction; it can be anything that could be seen as "quality of life" or having one's true "interests" satisfied.

Or if you're just looking for a term to replace "utility", how about "goodness" or "the good"?

Or "value".

Comment author: steven0461 17 April 2009 10:35:32PM, 0 points

I agree we should care about more than people's economic utility and more than people's pleasure.

"eudaimon-seconds", maybe?

Comment author: Nick_Tarleton 17 April 2009 11:10:05PM, 0 points

I don't know much about academic consequentialism, but I'd be surprised if someone hadn't come up with the idea of the utilon-second, i.e., adding a time dimension and trying to maximize utilon-seconds. If giving someone a new car only makes them happier for the first few weeks, then that only provides so many utilon-seconds. If getting married makes you happier for the rest of your life, well, that provides many more utilon-seconds. If you want to know whether you should invest your effort in getting people more cars or getting them into relationships, you'll want to take that into account.

This is one reason I say my notional utility function is defined over 4D histories of the entire universe, not any smaller structures like people.