DanielLC comments on Hedonic vs Preference Utilitarianism in the Context of Wireheading - Less Wrong Discussion

Post author: jkaufman, 29 June 2012 01:50PM


Comment author: Ghatanathoah 21 February 2013 01:01:32AM -1 points

Also, it seems like desire fulfillment just alters the kind of wireheading you do. Rather than modifying people to make them happy, you modify them to desire what currently is true.

Most people would strongly desire to not be modified in such a fashion. It's really no different from wireheading them to be happy: you're destroying their terminal values, essentially killing a part of them.

Of course, you could take this further by agreeing to leave existing people's preferences alone, but from now on only creating people who desire what is currently true. This seems rather horrible as well; what it suggests to me is that there are some preference sets that it is morally better to create than others. It is probably morally better to create human beings with complex desires than to create wireheaded creatures that desire only what is true.

This in turn suggests to me that, in the field of population ethics, it is ideal utilitarianism that is the correct theory. That is, there are certain ideals it is morally good to promote (love, friendship, beauty, etc.) and that therefore it is morally good to create people with preferences for those things (i.e. creatures with human-like preferences).

Comment author: DanielLC 21 February 2013 04:52:46AM -1 points

Most people would strongly desire to not be modified in such a fashion.

Yes, but only until they're modified. The desire fulfillment of their future selves will outweigh the desire unfulfillment of their present selves, resulting in a net increase in desire fulfillment.

Comment author: Ghatanathoah 22 July 2013 10:00:03PM 0 points

Okay, I've been reading a bit more on this and I think I have found an answer in Derek Parfit's classic "Reasons and Persons." Parfit considers this idea in the section "What Makes Someone's Life Go Best," which can be found online here, with the relevant stuff starting on page 3.

In Parfit's example, he considers a person who argues that he is going to make your life better by getting you addicted to a drug which creates an overwhelming desire to take it. The drug has no other effects; it does not cause you to feel high or low or anything like that; all it does is make you desire to take it. After getting you addicted, this person will give you a lifetime supply of the drug, so you can always satisfy your desire for it.

Parfit argues that this does not make your life better, even though you have more satisfied desires than you used to. He defends this claim by arguing that, in addition to our basic desires, we also have what he calls "Global Preferences." These are second-order meta-preferences: "desires about desires." Adding to or changing someone's desires is only good if it is in accordance with their Global Preferences. Otherwise, adding to or changing their desires is bad, not good, even if the new desires are more satisfied than their old ones.

I find this account very plausible. It reminds me of Yvain's posts on Wanting, Liking, and Approving.

Parfit doesn't seem to realize it, but this theory also provides a way to reject his Mere Addition Paradox. In the same way that we have Global Preferences about what our values are, we can also have global moral rules about what number and kinds of people it is good to create. This allows us to avoid both the traditional Repugnant Conclusion and the far more repugnant conclusion that we ought to kill off the human race and replace it with creatures whose preferences are easier to satisfy.

Now, you might ask: what if, when we wirehead someone, we change their Global Preferences as well, so that they now globally prefer to be wireheaded? Well, for that we can invoke our global moral rules about population ethics. Creating a creature with such global preferences, under such circumstances, is always bad, even if it lives a very satisfied life.

Comment author: DanielLC 23 July 2013 12:11:08AM 1 point

Are you saying that preferences only matter if they're in line with Global Preferences?

Before there was life, there were no Global Preferences, which means that no new life had preferences in accordance with these Global Preferences; therefore, no preferences matter.

Comment author: Ghatanathoah 23 July 2013 06:22:22AM 1 point

I'm saying creating new preferences can be bad if they violate Global Preferences. Since there were no Global Preferences before life began, the emergence of life did not violate any Global Preferences. For this reason the first reasoning creatures to develop essentially got a "free pass."

Furthermore, even if a preference is bad to create in the first place because it violates a Global Preference, that does not mean satisfying that newly created preference is bad. Parfit uses the following example to illustrate this: if I am tortured, this will create a preference in me for the torture to stop. I have a strong Global Preference to never have this preference for the torture to stop come to exist in the first place. But once that desire is created, it would obviously be a good thing if someone satisfied it by ceasing to torture me.

Similarly, it would be a bad thing if the guy in Parfit's other example got you addicted to the drug, and then gave you the drugs to satisfy your addiction. But it would be an even worse thing if he got you addicted, and then didn't give you any drugs at all.

The idea that there are some preferences that it is bad to create, but also bad to thwart once they are created, fits neatly with our intuitions about population ethics. Most people believe that it is bad for unwanted children to be born, but also bad to kill them if we fail to prevent them from being born (provided, of course, that their lifetime utility will be a net positive).

Comment author: DanielLC 23 July 2013 06:47:11AM 1 point

Furthermore, even if a preference is bad to create in the first place because it violates a Global Preference, that does not mean satisfying that newly created preference is bad.

Doesn't that mean that if you satisfy it enough it's a net good?

If you give someone an addictive drug, this gives them a Global Preference-violating preference, causing x units of disutility. Once they're addicted, each dose of the drug creates y units of utility. If you give them more than x/y doses, it will be a net good.
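
Spelling out that arithmetic (x and y as above; n is just a placeholder I'm introducing for the number of doses, and ΔU for the net change in desire fulfillment):

\[
\Delta U = n\,y - x > 0 \quad\Longleftrightarrow\quad n > \frac{x}{y}.
\]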

I have a strong Global Preference to never have this preference for the torture to stop come to exist in the first place.

What's so bad about being against torture? I can see why you'd dislike the events leading up to this preference, but the preference itself seems like an odd thing to dislike.

Comment author: Ghatanathoah 23 July 2013 06:51:30PM -1 points

Doesn't that mean that if you satisfy it enough it's a net good?

No, in Parfit's initial example with the highly addictive drug your preference is 100% satisfied. You have a lifetime supply of the drug. But it still hasn't made your life any better.

This is like Peter Singer's "debit" model of preferences, where all preferences are "debts" incurred in a "moral ledger." Singer rejected this view because, if it is applied to all preferences, it leads to antinatalism. Parfit, however, has essentially "patched" the idea by introducing Global Preferences. In his theory we use the "debit" model when a preference is not in line with a Global Preference, but do not use it if the preference is in line with a Global Preference.

What's so bad about being against torture? I can see why you'd dislike the events leading up to this preference, but the preference itself seems like an odd thing to dislike.

It's not that I dislike the preference; it's that I would prefer to never have it in the first place (since I have to be tortured in order to develop it). I have a Global Preference that the sorts of events that would bring this preference into being never occur, but if they occur in spite of this, I would want this preference to be satisfied.

If you dislike that example, however, would you still agree that if someone forcibly addicted you to Parfit's hypothetical drug, it would be better if they gave you a lifetime supply of the drug than if they did not? (Assuming, of course, that taking the drug has no bad side effects, and getting rid of the addiction is not possible).

Comment author: DanielLC 23 July 2013 07:58:32PM 0 points

No, in Parfit's initial example with the highly addictive drug your preference is 100% satisfied.

What if it's a preference that doesn't have a maximum amount of satisfaction? For example, if you get a drug that makes you into a paperclip maximizer, you can always add more paperclips. Does that mean that your preference is always 0% satisfied?

If you dislike that example, however, would you still agree that if someone forcibly addicted you to Parfit's hypothetical drug, it would be better if they gave you a lifetime supply of the drug than if they did not?

Only if it makes me happy. I'm not a preference utilitarian.

My being addicted to a drug and getting it is no higher on my current preference ranking than being addicted to a drug and not getting it.

Comment author: Ghatanathoah 24 July 2013 04:43:24AM 0 points

What if it's a preference that doesn't have a maximum amount of satisfaction? For example, if you get a drug that makes you into a paperclip maximizer, you can always add more paperclips. Does that mean that your preference is always 0% satisfied?

That opens up the question of infinities in ethics, which is a whole other can of worms. There's still considerable debate about how to deal with infinities, and they create lots of problems for both preference utilitarianism and hedonic utilitarianism.

For instance, let's imagine an immortal who will live an infinite number of days. We have a choice of letting him have one happy experience per day or twenty happy experiences per day (and he would prefer to have these happy experiences, so both hedonic and preference utilitarians can address this question).

Intuitively, we believe it is much better for him to have twenty happy experiences per day than one. But since he lives an infinite number of days, the total number of happy experiences he has is the same either way: infinity.

I'm not quite sure how to factor in infinite preferences or infinite happiness. We may have to treat them as finite in order to avoid such problems. But it seems like there should be some intuitive way to do so, in the same way we know that twenty happy experiences per day is better for the immortal than one.
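
One tentative way to make that intuition precise (just a sketch on my part, not a formalism I'm committed to) is to compare the two options over every finite horizon of T days rather than over the completed infinity:

\[
\sum_{t=1}^{T} 20 = 20T > T = \sum_{t=1}^{T} 1 \quad \text{for every finite } T,
\]

even though both sums diverge to infinity as T grows.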

Only if it makes me happy. I'm not a preference utilitarian.

It won't, according to Parfit's stipulations. Of course, if we get out of weird hypotheticals where this guy is the only person on Earth possessing the drug, being addicted would probably make you unhappy, because you would end up devoting time to pursuing the drug instead of pursuing happiness.

I personally place only moderate value on happiness. There are many preferences I have that I want to have satisfied, even if doing so makes me unhappy. For instance, I usually prefer knowing a somewhat depressing truth to believing a comforting falsehood. And there are times when I deliberately watch a bad, unenjoyable movie because it is part of a series I want to complete, even if I have access to another stand-alone movie that I would be much happier watching (yes, I am one of the reasons crappy sequels exist, but I try to mitigate the problem by waiting until I can rent them).

Comment author: DanielLC 24 July 2013 05:54:44AM 1 point

That opens up the question of infinities in ethics, which is a whole other can of worms. There's still considerable debate about how to deal with infinities, and they create lots of problems for both preference utilitarianism and hedonic utilitarianism.

With hedonic utilitarianism, you can run into problems with infinite utility, or with unbounded utility if you're dealing with a distribution that has infinite expected utility. This is just a case of someone else having an unbounded utility function. It seems pretty pathetic to get a paradox because of that.
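
As a standard illustration of the infinite-expected-utility case (the specific gamble here is mine, not anything from the post): with an unbounded utility function, a lottery that pays utility 2^k with probability 2^{-k} has expected utility

\[
\sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k} = \sum_{k=1}^{\infty} 1 = \infty,
\]

even though every individual payoff is finite.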

Comment author: Ghatanathoah 21 February 2013 07:35:40AM -1 points

Yes, but only until they're modified. The desire fulfillment of their future selves will outweigh the desire unfulfillment of their present selves, resulting in a net increase in desire fulfillment.

One way this is typically resolved is with something called the "prior-existence view." This view considers it good to increase the desire fulfillment of those who already exist, and of those who will definitely exist in the future, but does not necessarily grant extra points for creating tons of new desires and then fulfilling them. The prior-existence view would therefore hold that it is wrong to create or severely modify a person if doing so would inflict an unduly large disutility on those who would exist prior to that person's creation.

The prior-existence view captures many strong moral intuitions, such as that it is morally acceptable to abort a fetus to save the life of the mother, and that it is wrong to wirehead someone against their will. It does raise some questions, like what to do if raising the utility of future people will change their identity, but I do not think these are unresolvable.