SaidAchmiz comments on Humans are utility monsters - Less Wrong

67 Post author: PhilGoetz 16 August 2013 09:05PM


Comment author: SaidAchmiz 16 August 2013 11:30:19PM 12 points [-]

So here's a question for anyone who thinks the concept of a utility monster is coherent and/or plausible:

The utility monster allegedly derives more utility from whatever than whoever else, or doesn't experience any diminishing returns, etc. etc.

Those are all facts about the utility monster's utility function.

But why should that affect the value of the utility monster's term in my utility function?

In other words: granting that the utility monster experiences arbitrarily large amounts of utility (and granting the even more problematic thesis that experienced utility is intersubjectively comparable)... why should I care?

Comment author: TsviBT 17 August 2013 01:27:23AM 31 points [-]

I always automatically interpret the utility monster as an entity that somehow can be in a state that is more highly valued under my utility function than, say, a billion other humans put together.

But then the monster isn't a problem, because if there were in fact such an entity, I would indeed actually want to sacrifice a billion other humans to make the monster happy. This is true by definition.

Comment author: SaidAchmiz 17 August 2013 02:23:26AM 15 points [-]

I always automatically interpret the utility monster as an entity that somehow can be in a state that is more highly valued under my utility function than, say, a billion other humans put together.

That's easy. For most people (in general; I don't mean here on lesswrong), this just describes one's family (and/or close friends)... not to mention themselves!

I mean, I don't know exactly how many random people's lives, in e.g. Indonesia, would have to be at stake for me to sacrifice my mother's life to save them, but it'd be more than one. Maybe a lot more.

A billion? I don't know that I'd go that far. But some people might.

Comment author: TsviBT 17 August 2013 10:10:22AM 6 points [-]

Well, whether you really want (in the extrapolated volition sense) to sacrifice 10^{whatever} lives to save your family is a whole big calculation involving interpersonal morality, bounded rationality/virtue ethics, TDT/game theory, etc. The point that I was echoing is that if you really would want to make that trade, there's nothing monstery about your family - you just {love them that much}/{love others that little}. The utility monster is an objection to the social morality theory called "utilitarianism"; the utility monster becomes gibberish when phrased as an objection to "any set of preferences can in principle be completely specified by a utility function, to be handed to a generic decision process, resulting in optimal decision making". Like, "Oh no, oh no, I found this monster, and it is soooo soooo good to feed it humans! It is even more better every time I feed it another human! Woe is me! Goooood!!".

Now, the utility monster makes perfect sense as an objection to humans actually making decisions purely using explicit quantitative expected utility calculations. But that doesn't say anything about utility as a formalized version of "good". Rather, that's some sort of comment about the capricious quality of bounded reasoning under uncertainty - you always worry about strong conclusions that make you do particularly effective things, because a mistake in your calculations means you are doing particularly effective bad things. One particular sort of dangerously strong conclusion would be concluding that, e.g., the marginal utility of {UMonster eating an additional human} is larger than and grows faster than the marginal utility of {another human gets eaten alive}.
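That last point can be made concrete with a toy calculation. Everything in the sketch below is invented for illustration (the growth rate, the per-victim cost, the `should_feed` rule); it just shows how a marginal-utility term that grows without bound eventually swamps a fixed per-victim cost in a naive aggregate, and only ever more so:

```python
# Toy model (all numbers invented): a "utility monster" whose marginal
# utility from eating its n-th human grows with n, versus a fixed
# disutility per human eaten. A naive aggregator ends up always feeding it.

def monster_marginal_utility(n):
    """Invented: the monster's marginal utility from its n-th meal grows linearly."""
    return 10 * n

HUMAN_DISUTILITY = -100  # invented: fixed cost of one human being eaten

def should_feed(n):
    """Naive expected-utility rule: feed iff the aggregate change is positive."""
    return monster_marginal_utility(n) + HUMAN_DISUTILITY > 0

# Early on the trade looks bad, but past some n it looks good forever after -
# exactly the kind of "dangerously strong conclusion" a bounded reasoner
# should be suspicious of.
print([n for n in range(1, 15) if should_feed(n)])  # → [11, 12, 13, 14]
```

Nothing here depends on the specific numbers; any superlinear gain against a linear cost produces the same crossover.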

Comment author: Eliezer_Yudkowsky 17 August 2013 08:12:56AM 9 points [-]

To continue the argument: It could be a problem if you'd want to protect the utility monster once it exists, but would prefer that the utility monster not exist. For example it could be an innocent being who experiences unimaginable suffering when not given five dollars.

Comment author: byrnema 18 August 2013 03:45:18AM 12 points [-]

Our oldest utility monster is eight years old. (Did you have this example specifically in mind? Seems to fit the description very well.)

Comment author: [deleted] 19 August 2013 05:53:51PM *  2 points [-]

If you prefer a happy monster to no monster and no monster to a sad monster, then you prefer a happy monster to a sad monster, and TsviBT's point applies.

Whereas if you prefer no monster to a happy monster to a sad monster, why don't you kill the monster?

Comment author: Eliezer_Yudkowsky 19 August 2013 08:45:00PM 17 points [-]

...sometimes I wonder about the people who find it unintuitive to consider that "Killing X, once X is alive and asking not to be killed" and "Preferring that X not be born, if we have that option in advance" could have widely different utility to me. The converse perspective implies that we should either (1) be spawning as many babies as possible, as fast as possible, or (2) anyone who disagrees with 1 should go on a murder spree, or at best consider such murder sprees ethically unimportant. After all, not spawning babies as fast as possible is as bad as murdering that many existent adults, apparently.

Comment author: Lukas_Gloor 20 August 2013 12:40:10AM *  8 points [-]

The crucial question is how we want to value the creation of new sentience (aka population ethics). It has been proven impossible to come up with intuitive solutions to it, i.e. solutions that fit some seemingly very conservative adequacy conditions.

The view you outline as an alternative to total hedonistic utilitarianism is often left underdetermined, which hides some underlying difficulties.

In Practical Ethics, Peter Singer advocated a position he called "prior-existence preference utilitarianism". He considered it wrong to kill existing people, but not wrong to not create new people as long as their lives would be worth living. This position is awkward because it leaves you no way of saying that a very happy life (one where almost all preferences are going to be fulfilled) is better than a merely decent life that is worth living. If it were better, and if the latter is equal to non-creation, then denying that the creation of the former life is preferable to non-existence would lead to intransitivity.

If I prefer, but only to a very tiny degree, having a child with a decent life over having one with an awesome life, would it be better if I had the child with the decent life?

In addition, nearly everyone would consider it bad to create lives that are miserable. But if the good parts of a decent life can make up for the bad parts in it, why doesn't a life consisting solely of good parts constitute something that is important to create? (This point applies most forcefully for those who adhere to a reductionist/dissolved view on personal identity.)

One way out of the dilemma is what Singer called the "moral ledger model of preferences". He proposed an analogy between preferences and debts. It is good if existing debts are paid, but there is nothing good about creating new debts just so they can be paid later. In fact, debts are potentially bad because they may remain unfulfilled, so all things being equal, we should try to avoid making debts. The creation of new sentience (in the form of "preference-bundles" or newly created utility functions) would, according to this view, be at most neutral (if all the preferences will be perfectly fulfilled), and otherwise negative to the extent that preferences get frustrated.

Singer himself rejected this view because it would imply voluntary human extinction being a good outcome. However, something about the "prior-existence" alternative he offered seems obviously flawed, which is arguably a much bigger problem than something being counterintuitive.

Comment author: ESRogs 23 August 2013 03:07:28AM 0 points [-]

not wrong to not create new people as long as their lives would be worth living

Did you mean to write, "not wrong to create new people..." ?

Comment author: somervta 23 August 2013 07:35:15AM 0 points [-]

No, that's Singer's position. He's saying there is no obligation to create new people.

Comment author: ESRogs 23 August 2013 12:50:39PM 0 points [-]

Then what's the qualifier about their lives being worth living there for? Presumably he believes it's also not wrong to not create people whose lives would not be worth living, right?

Comment author: somervta 23 August 2013 01:27:45PM 1 point [-]

Huh. Rereading it, your interpretation might make more sense. I was thinking about that as 'even if their lives would be worth living, you don't have an obligation to create new people', which is a position that Peter Singer holds, but so is the position expressed after your correction.

Comment author: Ghatanathoah 23 August 2013 09:18:05PM *  0 points [-]

In my view population ethics failed at the start by making a false assumption, namely "Personal identity does not matter; all that matters is the total amount of whatever makes life worth living (i.e., utility)." I believe this assumption is wrong.

Derek Parfit first made this assumption when discussing the Nonidentity Problem. He believed it was the most plausible solution, but was disturbed by its other implications, like the Repugnant Conclusion. His work is what spawned most of the further debate on population ethics and its disturbing conclusions.

After meditating on the Nonidentity Problem for a while I realized Parfit's proposed solution had a major problem. In the traditional form of the NIP you are given a choice between two individuals who have different capabilities for utility generation (one is injured in utero, the other is not). However, there is another way to change the amount of utility someone gets out of life besides increasing or reducing their capabilities. You could also change the content of their preferences, so that a person has more ambitious preferences that are harder to achieve.

I reframed the NIP as giving a choice between having two children with equal capabilities (intelligence, able-bodiedness, etc.) but with different ambitions, one wanted to be a great scientist or artist, while the other just wanted to do heroin all day. It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.

In my view the primary thing that determines whether someone's creation is good or not is their identity (i.e., what sort of preferences they have, their personality, etc.). What constitutes someone having a "morally right" identity is really complicated and fragile, but generally it means that they have the sort of rich, complex values that humans have, and that they are (in certain ways) unique and different from the people who have come before. In addition to their internal desires, their relationship to other people is also important. (Of course, this only applies if their total lifetime utility is positive; if it's negative, it's bad to create them no matter what their identity is.)

We can now use this to patch Singer's "Moral Ledger" in a way that fits Eliezer's views. Creating someone with the "wrong" identity is a debt, but creating a person with a "right" identity is not. So we shouldn't create a utility monster (if "utility monster" is a "wrong" identity), because that would create a debt, but killing the monster wouldn't solve anything, it would just make it impossible to pay the debt.

My "Identity Matters" model also helps explain our intuitions about our duties to have children. In the total and average views, the identity of the child is unimportant. In my model it is. If someone doesn't want to have children, having an unwanted child is a "debt" regardless of the child's personal utility. A child born to parents who want to have one, by contrast, may be "right" to have, even if its utility is lower than that of the aforementioned unwanted child. (Of course, this model needs to be flexible about what makes someone "your child" in order to regard things like sterile parents adopting unwanted children as positive, but I don't see this as a major problem.)

In addition to identity mattering, we also seem to have ideals about how utility should be concentrated. Most people intuitively reject things like Replaceability and the Repugnant Conclusion, and I think they're right to. We seem to have an ideal that a small population with high per-person utility is better than a large one with low per-person utility, even if its total utility is higher. I'm not suggesting Average Utilitarianism; as I said in another comment, I think that AU is a disastrously bad attempt to mathematize that ideal. But I do think that ideal is worthwhile; we just need a less awful way to fit it into our ethical system.

A third reason for our belief that having children is optional is that most people seem to believe in some sort of Critical Level Utilitarianism with the critical level changing depending on what our capabilities for increasing people's utility are. Most people in the modern world would consider it unthinkable to have a child whose level of utility would have been considered normal in Medieval Europe. And I think this belief isn't just the status quo bias, I would also consider it unconscionable to have a child with normal Modern World levels of utility in a transhuman future.

Comment author: ygert 23 August 2013 09:50:12PM 0 points [-]

It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.

Oh? Yes, it is true that it is better to have the ambitious child. I agree, and I think most others will too. But I don't think that's because of some fundamental preference, but rather because the ambitious child has a far greater chance of causing good in the world. (Say, becoming an artist and painting masterpieces that will be admired for centuries to come, or becoming a scientist and developing our understanding of the fundamental nature of the universe.) The druggie will not provide these positive externalities, and may even provide negative ones. (Say, turning to crime in order to feed his addiction, as some druggies do.)

I think this adequately explains this reaction, and I do not see a need to posit a fundamental term in our utility functions to explain it.

Comment author: Ghatanathoah 26 August 2013 03:30:47AM -1 points [-]

I think this adequately explains this reaction, and I do not see a need to posit a fundamental term in our utility functions to explain it.

I disagree. I have come to realize that morality isn't just about maximizing utility; it's also about protecting fragile human* values. Creating creatures that have values fundamentally opposed to those values, such as paperclip maximizers, orgasmium, or sociopaths, seems a morally wrong thing to do to me.

This was driven home to me by a common criticism of utilitarianism, namely that it advocates that, if possible, we should kill everyone and replace them with creatures whose preferences are easier to satisfy, or who are easier to make happy. I believe this is a bug, not a feature, and that valuing the identity of created creatures is the solution. Eliezer's essays on the fragility and complexity of human values also helped me realize this.

*When I say "human" I mean any creature with a sufficiently humanlike mind, regardless of whether it is biologically human or not.

Comment author: ygert 26 August 2013 09:33:20PM *  1 point [-]

Perhaps I was unclear. I used utilitarian terminology, but utilitarianism is not necessary for my point. To restate: If I could choose between an ambitious child being born, or a druggie child being born, I (and you, according to your above comment) would choose the ambitious child, all else being equal. Why would we choose that? Well, there are several possible explanations, including the one which you gave. However, yours was complicated and far from trivially true, and so I point out that such massive suppositions are unnecessary, as we already have a certain well known human desire to explain that choice. (Call that desire what you will, perhaps "altruism", or "bettering the world". It's the desire that on the margin, more art, knowledge, and other things-considered-valuable-to-us are created.)

Comment author: Lukas_Gloor 20 August 2013 12:40:30AM *  0 points [-]

Average utilitarianism (which can be either hedonistic or about preferences / utility functions) is another way to avoid the repugnant conclusion. However, average utilitarianism comes with its own conclusions that most consider to be unacceptable. If the average life in the universe turns out to be absolutely miserable, is it a good thing if I bring a child into existence that will have a slightly less miserable life? Or similarly, if the average life is free of suffering and full of the most intense happiness possible, would I be acting catastrophically wrong if I brought into existence a lot of beings that constantly experience the peak of current human happiness (without ever having preferences unfulfilled too), simply because it would lower the overall average?

Another point to bring up against average utilitarianism is that it seems odd that the value of creating a new life should depend on what the rest of the universe looks like. All the conscious experiences remain the same, after all, so where does this "let's just take the average!" come from?

Comment author: [deleted] 20 August 2013 10:19:04PM 2 points [-]

If the average life in the universe turns out to be absolutely miserable, is it a good thing if I bring a child into existence that will have a slightly less miserable life? Or similarly, if the average life is free of suffering and full of the most intense happiness possible, would I be acting catastrophically wrong if I brought into existence a lot of beings that constantly experience the peak of current human happiness (without ever having preferences unfulfilled too), simply because it would lower the overall average?

More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.
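The arithmetic behind that objection is simple enough to check directly; the happiness numbers below are made up purely for illustration:

```python
# Made-up happiness numbers: under naive average utilitarianism,
# removing the least happy person raises the average, no matter how
# happy that person actually is.

def average(utilities):
    return sum(utilities) / len(utilities)

population = [90, 95, 99]   # invented: everyone is extremely happy
survivors = [95, 99]        # the least happy person (90) is gone

print(average(population))  # average with the least happy person included
print(average(survivors))   # a higher average, so "better" on the naive view
```

The same move can be repeated on the survivors, so the naive view keeps endorsing removals all the way down to a population of one.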

Comment author: Ghatanathoah 23 August 2013 07:40:52PM *  2 points [-]

More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.

This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die. This neatly resolves the question you asked Eliezer earlier in the thread, "If you prefer no monster to a happy monster why don't you kill the monster." The answer is that once the monster is created it always exists in a timeless sense. The only way for there to be "no monster" is for it to never exist in the first place.

That still leaves the most repugnant conclusion of naive average utilitarianism, namely that it states that, if the average utility is ultranegative (i.e., everyone is tortured 24/7), creating someone with slightly less negative utility (ie they are tortured 23/7) is better than creating nobody.

In my view average utilitarianism is a failed attempt to capture a basic intuition, namely that a small population of high utility people is sometimes better than a large one of low utility people, even if the large population's total utility is higher. "Take the average utility of the population" sounds like an easy and mathematically rigorous way to express that intuition at first, but runs into problems once you figure out "munchkin" ways to manipulate the average, like adding moderately miserable people to a super-miserable world.

In my view we should keep the basic intuition (especially the timeless interpretation of it), but figure out a way to express it that isn't as horrible as AU.
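Both halves of this point can be checked with toy numbers (all invented): the "munchkin" move exploits the snapshot average, while a timeless average keeps the dead in the denominator and so is unmoved by killing:

```python
# Toy numbers (invented) contrasting the snapshot average with the
# timeless average, which counts everyone who has ever existed.

def average(utilities):
    return sum(utilities) / len(utilities)

# Munchkin move: in a super-miserable world (tortured "24/7"), adding a
# moderately miserable person (tortured "23/7") raises the snapshot
# average even though nothing good was added.
miserable_world = [-24, -24, -24]
print(average(miserable_world + [-23]) > average(miserable_world))  # → True

# Timeless view: killing someone removes them from the snapshot but not
# from the timeless population, so killing cannot improve the average.
living = [50, 90, 90]
timeless = list(living)   # everyone who ever existed
living.remove(50)         # the least happy person is killed
print(average(living))    # snapshot average goes up
print(average(timeless))  # timeless average is unchanged
```

On the timeless reading, the only way to affect the average is through who is created and how their whole lives go, which is exactly the asymmetry between "never create the monster" and "kill the monster".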

Comment author: [deleted] 23 August 2013 10:05:49PM 1 point [-]

This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die.

In that view, does someone already count as part of the average even before they are born?

Comment author: teageegeepea 25 August 2013 03:34:39AM 0 points [-]

If I kill someone in their sleep so they don't experience death, and nobody else is affected by it (maybe it's a hobo or something), is that okay under the timeless view because their prior utility still "counts"?

Comment author: CronoDAS 22 August 2013 03:24:40PM 1 point [-]

More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.

In real life, this would tend to make the remaining people less happy.

Comment author: [deleted] 19 August 2013 09:47:39PM *  2 points [-]

In the case of actual human children in an actual society, there are considerations that don't necessarily apply to hypothetical alien five-dollar-bill-satisficers in a vacuum.

Comment author: selylindi 28 August 2013 05:22:33PM *  1 point [-]

Perhaps you and they are just focusing on different stages of reasoning. The difference in utility that you've described is a temporal asymmetry that sure looks at first glance like a flaw. But that's because it's an unnecessary complexity to add it as a root principle when explaining morality up to now. Each of us desires not to be a victim of murder sprees (when there are too many people) or to have to care for dozens of babies (when there are too few people), and the simplest way for a group of people to organize to enforce satisfaction of that desire is for them to guarantee the state does not victimize any member of the group. So on desirist grounds I'd expect the temporal asymmetry to tend to emerge strategically as the conventional morality applying only among the ruling social class of a society: only humans and not animals in a modern democracy, only men when women lack suffrage, only whites when blacks are subjugated, only nobles in aristocratic society, and so on. (I can readily think of supporting examples, but I'm not confident in my inability to think of contrary examples, so I do not yet claim that history bears out desirism's prediction on this matter.)

Of course, if you plan to build an AI capable of acquiring power over all current life, you may have strong reason to incorporate the temporal asymmetry as a root principle. It wouldn't likely emerge out of unbalanced power relations. And similarly, if you plan on bootstrapping yourself as an em into a powerful optimizer, you have strong reason to precommit to the temporal asymmetry so the rest of us don't fear you. :D

Comment author: [deleted] 25 August 2013 03:24:18AM 0 points [-]

If the utility monster is so monstrously sad, why would it be asking not to be killed? Usually, a decent rule of thumb is that if someone doesn't want to die, there's a good chance their life is somewhat worth living.

The converse perspective implies that we should either (1) be spawning as many babies as possible, as fast as possible, or (2) anyone who disagrees with 1 should go on a murder spree, or at best consider such murder sprees ethically unimportant.

This conclusion is technically incorrect. For new babies, you don't know in advance whether their lives will be worth living. Even if you go with positive expected value (and no negative externalities), you can still have better alternatives, e.g. do science now that makes many more and much better lives much later; "as fast as possible" is logically unnecessary.

Also, killing sprees have side-effects on society that omissions of reproduction don't have, e.g. already-born people will take costly measures not to be killed (etc...)

Comment author: MugaSofer 21 August 2013 03:51:27PM -1 points [-]

It worries me how many people have come to exactly those conclusions. I mean, it's not very many, but still ...

Comment author: SaidAchmiz 19 August 2013 07:21:29PM 0 points [-]

If you prefer a happy monster to no monster and no monster to a sad monster, then you prefer a happy monster to a sad monster

Only if your preferences are transitive.

Comment author: linkhyrule5 19 August 2013 07:48:19PM *  1 point [-]

If you have any sort of coherent utility system at all, they will be.

A better point is that "no monster" just means you're shunting the problem to poor Alternate You in another many-worlds branch, whereas killing a happy monster means actually decreasing the number of universes with the monster in it by one.

Comment author: TsviBT 17 August 2013 10:11:37AM 1 point [-]

I don't get it, how is that different from any old bad thing you want to avoid?

Comment author: Leon 17 August 2013 12:32:54AM 12 points [-]

This is just the (intended) critique of utilitarianism itself, which says that the utility functions of others are (in aggregate) exactly what you should care about.

Comment author: DanArmak 17 August 2013 01:03:18PM *  0 points [-]

Utilitarianism doesn't say that. Maybe some variant says that, but general utilitarianism merely says that I should have a single self-consistent utility function of my own, which is free to assign whatever weights it likes to others.

ETA: PhilGoetz says otherwise. I believe that he is right; he's an expert in the subject matter. I am surprised and confused.

Comment author: Kaj_Sotala 17 August 2013 11:23:59PM *  12 points [-]

If you're unsure of a question of philosophy, the Stanford Encyclopedia of Philosophy is usually the best place to consult first. Its history of utilitarianism article says that

Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good. There are many ways to spell out this general claim. One thing to note is that the theory is a form of consequentialism: the right action is understood entirely in terms of consequences produced. What distinguishes utilitarianism from egoism has to do with the scope of the relevant consequences. On the utilitarian view one ought to maximize the overall good — that is, consider the good of others as well as one's own good.

The Classical Utilitarians, Jeremy Bentham and John Stuart Mill, identified the good with pleasure, so, like Epicurus, were hedonists about value. They also held that we ought to maximize the good, that is, bring about ‘the greatest amount of good for the greatest number’.

Utilitarianism is also distinguished by impartiality and agent-neutrality. Everyone's happiness counts the same. When one maximizes the good, it is the good impartially considered. My good counts for no more than anyone else's good. Further, the reason I have to promote the overall good is the same reason anyone else has to so promote the good. It is not peculiar to me.

Note the last paragraph in particular. Utilitarianism is agent-neutral: while it does take your utility function into account, it gives it no more weight than anybody else's.

The "general utilitarianism" that you mention is mostly just "having a utility function", not "utilitarianism" - utility functions might in principle be used to implement ethical theories quite different from utilitarianism. This is a somewhat common confusion on LW (one which I've been guilty of myself, at times). I think it has to do with the Sequences sometimes conflating the two.

EDIT: Also, in SEP's Consequentialism article:

Since classic utilitarianism reduces all morally relevant factors (Kagan 1998, 17–22) to consequences, it might appear simple. However, classic utilitarianism is actually a complex combination of many distinct claims, including the following claims about the moral rightness of acts:

Consequentialism = whether an act is morally right depends only on consequences (as opposed to the circumstances or the intrinsic nature of the act or anything that happens before the act).

Actual Consequentialism = whether an act is morally right depends only on the actual consequences (as opposed to foreseen, foreseeable, intended, or likely consequences).

Direct Consequentialism = whether an act is morally right depends only on the consequences of that act itself (as opposed to the consequences of the agent's motive, of a rule or practice that covers other acts of the same kind, and so on).

Evaluative Consequentialism = moral rightness depends only on the value of the consequences (as opposed to non-evaluative features of the consequences).

Hedonism = the value of the consequences depends only on the pleasures and pains in the consequences (as opposed to other goods, such as freedom, knowledge, life, and so on).

Maximizing Consequentialism = moral rightness depends only on which consequences are best (as opposed to merely satisfactory or an improvement over the status quo).

Aggregative Consequentialism = which consequences are best is some function of the values of parts of those consequences (as opposed to rankings of whole worlds or sets of consequences).

Total Consequentialism = moral rightness depends only on the total net good in the consequences (as opposed to the average net good per person).

Universal Consequentialism = moral rightness depends on the consequences for all people or sentient beings (as opposed to only the individual agent, members of the individual's society, present people, or any other limited group).

Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).

Agent-neutrality = whether some consequences are better than others does not depend on whether the consequences are evaluated from the perspective of the agent (as opposed to an observer).

Comment author: AlexMennen 17 August 2013 05:14:02PM 6 points [-]

PhilGoetz says otherwise. I believe that he is right, he's an expert in the subject matter. I am surprised and confused.

PhilGoetz is correct, but your confusion is justified; it's bad terminology. Consequentialism is the word for what you thought utilitarianism meant.

Comment author: DanArmak 17 August 2013 06:38:34PM *  2 points [-]

I thought a consequentialist is not necessarily a utilitarian. Utilitarianism should mean that all values are comparable and tradeable via utilons (measured in real numbers), with (ideally) a single utility function for measuring the utility of a thing (to someone). The Wikipedia page you link lists "utilitarianism" as only one of many philosophies compatible with consequentialism.

Comment author: AlexMennen 17 August 2013 07:17:52PM 6 points [-]

You are correct that utilitarianism is a type of consequentialism, and that you can be a consequentialist without being a utilitarian. Consequentialism says that you should choose actions based on their consequences, which pretty much forces you into the VNM axioms, so consequentialism is roughly what you described as utilitarianism. As I said, it would make sense if that is what utilitarianism meant, but despite my opinions, utilitarianism does not mean that. Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.

Comment author: DanArmak 17 August 2013 08:57:29PM 3 points [-]

I see. Thank you for clearing up the terminology.

Then what would the term be for a VNM-rational, moral anti-realist who explicitly considers others' welfare only because they figure in his utility function, and doesn't intrinsically care about their own utility functions?

Comment author: AlexMennen 17 August 2013 10:10:13PM 2 points [-]

I don't know of a commonly agreed-upon term for that, unfortunately. "Utility maximizer", "VNM-rational agent", and "homo economicus" are similar to what you're looking for, but none of these terms imply that the agent's utility function is necessarily dependent on the welfare of others.

Comment author: Juno_Watt 19 August 2013 03:00:55PM 1 point [-]

Rational self-interest?

Comment author: Jack 19 August 2013 02:02:03PM 1 point [-]

Then what would the term be for a VNM-rational, moral anti-realist who explicitly considers others' welfare only because they figure in his utility function, and doesn't intrinsically care about their own utility functions?

"Utilitarian" and all the other labels in normative ethics are labels for what ought to be in an agent's utility function. So I would call this person someone who rightly stopped caring about normative philosophy.

Comment author: blacktrance 23 August 2013 05:49:34AM 0 points [-]

To use an Objectivist term, it's a person who's acting in his "properly understood self-interest".

Comment author: Lukas_Gloor 19 August 2013 01:36:58AM 0 points [-]

Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.

Not just people but all the beings that serve as "vessels" for whatever it is that matters (to you). According to most common forms of utilitarianism, "utility" consists of happiness and/or (the absence of) suffering or preference satisfaction/frustration.

Comment author: PhilGoetz 17 August 2013 03:20:32PM *  3 points [-]

Thanks, but I tend to define and use my own terminology, because the standard terms are too muddled to use. I am an expert in my own terminology. Leon is talking about utilitarianism as the word is usually, or at least historically, used outside LessWrong, as a computation that everyone can perform and get the same answer, so society can agree on an action.

Comment author: DanArmak 17 August 2013 04:03:09PM *  1 point [-]

a computation that everyone can perform and get the same answer, so society can agree on an action.

But that computation is still a two-place function; it depends on the actual utility function used. Surely "classical" utilitarianism doesn't just assume moral-utility realism. But without "utility realism" there is no necessary relation between the monster's utility according to its own utility function, and the monster's utility according to my utility function.

Humans are similar, so they have similar utility functions, so they can trade without too many repugnant outcomes. And because of this we sometimes talk of utility functions colloquially without mentioning whose functions they are. But a utility monster is by definition unlike regular humans, so the usual heuristics don't apply; this is not surprising.

When I thought of a "utility monster" previously, I thought of a problem with the fact that my (and other humans') utility functions are really composed of many shards of value and are bad at trading between them. So a utility monster would be something that forced me to sacrifice a small amount of one value (murder a billion small children) to achieve a huge increase in another value (make all adults transcendently happy). But this would still be a utility monster according to my own utility function.

On the other hand, saying "a utility monster is anything that assigns huge utility to itself - which forces you to assign huge utility to it too, just because it says so" - that's just a misunderstanding of how utility works. I don't know if it's a strawman, but it's definitely wrong.
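The two-place point above can be made concrete with a minimal sketch (all names and numbers are hypothetical): the monster's enormous self-assigned utility lives in its own function, and my function can assign the same outcomes whatever values it likes.

```python
# utility is a two-place function: utility(evaluator, outcome).
# Represented here as one dict per evaluator. The monster's self-assessment
# imposes nothing on my function. Numbers are made up for illustration.

monster_utility = {"feed_monster": 10**9, "feed_billion_humans": 0}
my_utility      = {"feed_monster": 1,     "feed_billion_humans": 10**6}

def best_action(utility):
    """Return the outcome this evaluator's function ranks highest."""
    return max(utility, key=utility.get)

print(best_action(monster_utility))  # feed_monster
print(best_action(my_utility))       # feed_billion_humans
```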

I notice that I am still confused about what different people actually believe.

Comment author: PhilGoetz 19 August 2013 11:34:44PM *  1 point [-]

If by "moral-utility realism" you mean the notion that there is one true moral utility function that everyone should use, I think that's what you'll find in the writings of Bentham, and of Nozick. Not explicitly asserted; just assumed, out of lack of awareness that there's any alternative. I haven't read Nozick, just summaries of him.

Historically, utilitarianism was seen as radical for proposing that happiness could by itself be the sole criterion for an ethical system, and for being strictly consequentialist. I don't know when the first person proposed that it makes sense to talk about different people having different utility functions. You could argue it was Nietzsche, but he meant that people could have dramatically opposite value systems that are necessarily at war with each other, which is different from saying that people in a single society can use different utility functions.

(What counts as a "different" belief, BTW, depends on the representational system you use, particularly WRT quasi-indexicals.)

Anyway, that's no longer a useful way to define utilitarianism, because we can use "consequentialism" for consequentialism, and happiness turns out to just be a magical word, like "God", that you pretend the answers are hidden inside of.

Comment author: MugaSofer 18 August 2013 10:57:32PM *  -2 points [-]

"Utilitarianism" is sometimes used for both that "variant" (valuing utility) and the meaning you ascribe to it (defining "value" in terms of utility.) The Utility Monster is designed to interfere with the former meaning. Which is the correct meaning ...

Comment author: Jack 19 August 2013 02:03:39PM 4 points [-]

So this comment seems straightforwardly confused about what utilitarianism is. Why is it up this high?

Comment author: SaidAchmiz 19 August 2013 03:02:40PM 6 points [-]

I don't know. Patterns of upvotes and downvotes on LessWrong still mystify me.

You are right; I was, when I wrote the grandparent, confused about what utilitarianism is. Having read the other comment threads on this post, I think the reason is that popular usage of the term "utilitarianism" on this site does not match its usage elsewhere. What I thought utilitarianism was before I started commenting on LessWrong, and what I think utilitarianism is now that I've gotten unconfused, are the same thing (the same silly thing, imo); my interim confusion is more or less described in this thread.

My primary objections to utilitarianism remain the same: intersubjective comparability of utility (I am highly dubious about whether it's possible), disagreement about what sorts of things experience utility in a relevant way (animals? nematodes? thermostats?) and thus ought to be considered in the calculation, divergence of utilitarian conclusions from foundational moral intuitions in non-edge cases, various repugnant conclusions.

As far as the utility monster goes, I think the main issue is that I am really not inclined to grant intersubjective comparability of experienced utility. It just does not seem coherent or meaningful to me to say that some creature, clearly very different from humans, experiences, say, "twice as much" utility at some given moment as a human does. How on earth did we come up with this number? How do we come up with any number in such a case? Forget numbers — how do we even create an ordering of experienced utility between different sorts of creatures?
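One concrete way to see the comparability problem (an illustrative sketch with invented numbers): VNM utility functions are only defined up to positive affine transformation, so the same preferences admit rescalings that flip any naive interpersonal sum.

```python
# u'(x) = a*u(x) + b with a > 0 represents the SAME preferences under the
# VNM theorem, so the monster's "scale" is not pinned down by its choices.
# All numbers below are made up for illustration.

human   = {"A": 1.0, "B": 0.0}  # human prefers outcome A
monster = {"A": 0.0, "B": 1.0}  # monster prefers outcome B

def total(scale_monster):
    """Naive interpersonal sum, with the monster's function rescaled."""
    return {o: human[o] + scale_monster * monster[o] for o in human}

# Rescaling the monster's function (same preferences!) flips the social choice:
t_small, t_big = total(0.5), total(2.0)
print(max(t_small, key=t_small.get))  # A
print(max(t_big, key=t_big.get))      # B
```

Since nothing in the monster's behavior fixes the scale factor, the "twice as much utility" claim needs some extra, non-preference-based assumption.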

Comment author: novalis 17 August 2013 12:49:34AM 14 points [-]

why should I care?

Isn't this an objection to any theory of ethics?

Comment author: metastable 17 August 2013 01:04:17AM 3 points [-]

As a lone question, it could be, but the point of his post is that even stipulating utilitarianism it does not follow that you or I should maximize the utils of Mr. Utility Monster.

Comment author: SaidAchmiz 17 August 2013 01:10:44AM 1 point [-]

No, only theories of ethics that say that I should care about things that I do not already care about.

And it is, in any case, not an objection but a question. :)

Comment author: Juno_Watt 18 August 2013 11:53:15PM 0 points [-]

Not necessarily a fatal one.

Comment author: MugaSofer 18 August 2013 10:55:53PM -1 points [-]

I believe some famous philosopher already has this point named after him.

Comment author: PrometheanFaun 18 August 2013 06:15:10AM *  3 points [-]

In more personal terms, if you fit your utility function to your friends and decide what is best for them based on that, rather than leaving them to their own alien utility functions and helping them get what they really want rather than what you think they should want, you are not a good friend. I say this because if the function you're pushing prohibits me from fulfilling my goals, I will avoid the fuck out of you. I will lie about my intentions. I will not trust you. It doesn't matter if your heart's in the right place.

Comment author: metastable 18 August 2013 07:35:11AM *  1 point [-]

fit your utility function to your friends and decide what is best for them based on that, rather than leaving them to their own alien utility functions and helping them get what they really want rather than what you think they should want.

The definition of want here is ambiguous, and that makes this a little hard to parse. How are you defining "want" with respect to "utility function"? Do you mean to make them equivalent?

If by "want" you mean desire in accord with their appropriately calibrated utility functions, then, well, sure. A friend is selfish by any common understanding if he doesn't care about his buddies' needs.

But it seems like you might be saying that he's a bad friend for not helping his friends get what they want regardless of what he thinks they need. While this is one view of friendship, it is not nearly as common, and I can make a strong case against it. Such a view would require that you help addicts continue to use, that you help self-destructive people harm themselves, that you never argue with a friend over a toxic relationship you can see, and that you never really try to convince a friend to try anything he or she doesn't think he or she will like.

I will lie about my intentions. I will not trust you. It doesn't matter if your heart's in the right place.

Sadly, this happens. If you're saying you think it should happen more, okay. But I would consider a friend pretty poor if he or she weren't willing to risk a little alienation because of genuine concern.

Comment author: PrometheanFaun 18 August 2013 09:33:52AM *  0 points [-]

I meant the former case; what use are people whose wants don't perfectly align with their utility function? xJ I guess whenever the latter case occurs in my life, that's not really what's happening. The dog thinks it's driving away a threat I don't recognise, when really it's driving away an opportunity it's incapable of recognising. Sometimes it might even be the right thing for them to do, even by my standards, given a lack of information. I still have to manage them like a burdensome dog.

Comment author: MugaSofer 18 August 2013 10:53:22PM -2 points [-]

The definition of want here is ambiguous, and that makes this a little hard to parse. How are you defining "want" with respect to "utility function"? Do you mean to make them equivalent?

If by "want" you mean desire in accord with their appropriately calibrated utility functions, then, well, sure. A friend is selfish by any common understanding if he doesn't care about his buddies' needs.

Assuming that the utility monster is not, somehow, mistaken regarding its wants...

Comment author: PhilGoetz 17 August 2013 12:32:18AM *  7 points [-]

In this post, I wrote: "The standard view ... obliterates distinctions between the ethics of that person, the ethics of society, and "true" ethics (whatever they may be). I will call these "personal ethics", "social ethics", and "normative ethics" ."

Using that terminology, you're objecting to the more general point that social utility functions shouldn't be confused with personal utility functions. All mainstream discussion of utilitarianism has failed to make this distinction, including the literature on the utility monster.

However, it's still perfectly valid to talk about using utilitarianism to construct social utility functions (e.g., those to encode into a set of community laws), and in that context the utility monster makes sense.

Utilitarianism, and all ethical systems, are usually discussed with the flawed assumption that there is one single proper ethical algorithm, which, once discovered, should be chosen by society and implemented by every individual. (CEV is based on the converse of this assumption: that you can use a personal utility function, or the average of many personal utility functions, as a social utility function.)

Comment author: Jack 19 August 2013 01:40:29PM 3 points [-]

Using that terminology, you're objecting to the more general point that social utility functions shouldn't be confused with personal utility functions. All mainstream discussion of utilitarianism has failed to make this distinction, including the literature on the utility monster.

That's because the mainstream discussion of utilitarianism, the normative ethical theory, has almost nothing at all to do with the concept of utility in economics.

Comment author: DanArmak 17 August 2013 01:05:50PM 0 points [-]

Using that terminology, you're objecting to the more general point that social utility functions shouldn't be confused with personal utility functions. All mainstream discussion of utilitarianism has failed to make this distinction, including the literature on the utility monster.

I don't doubt that you're right, but I find that stunning. How can this distinction not be made?

In the trivial example Selfish World, everyone assigns greater utility to themselves than to anyone else. That surely doesn't mean utilitarianism is useless - people can still make decisions and trade utilons!

Comment author: Jack 19 August 2013 01:38:09PM *  2 points [-]

"Utility" refers to a representation of preference over goods and services in economics and decision theory. This usage dates to the late 1940s. It has almost nothing at all to do with the normative theory of utilitarianism, which dates to the late 1780s.

As a normative theory is supposed to tell you how you ought to act, saying "oh, everyone ought to follow their own utility function" is completely without content. The entire content of the theory is that my utils and your utils are actually the same kind of thing, such that we can combine them one-to-one in a calculation to determine how to act (we want to maximize total utils).

That surely doesn't mean utilitarianism is useless - people can still make decisions and trade utilons!

This isn't utilitarianism. It is ethical egoism as described by economists.

Comment author: Juno_Watt 18 August 2013 11:47:08PM *  0 points [-]

Utilitarianism, and all ethical systems, are usually discussed with the flawed assumption that there is one single proper ethical algorithm, which, once discovered, should be chosen by society and implemented by every individual

That flaw is not obvious to me. But the flaw in anything-goes ethics is.

Comment author: Randaly 17 August 2013 08:47:51AM *  5 points [-]

The utility monster is a concept created to critique utilitarianism. If you are not a utilitarian, then it is not a criticism of your beliefs. If you need to ask why you should care about another being's utility, and it's a serious rather than a rhetorical question, then you aren't a utilitarian.

Comment author: MugaSofer 18 August 2013 10:51:52PM *  2 points [-]

Because you care about other agents' utility. Right? That's what the Utility Monster is meant to be an issue with.

Comment author: DanielLC 23 August 2013 02:42:45AM 0 points [-]

The utility monster is generally given as opposition to hedonistic or preference utilitarianism in particular. It's not an objection to arbitrary utility functions. There's no monster that can be an increasing number of paperclips.