Today's post, Not for the Sake of Happiness (Alone), was originally published on 22 November 2007. A summary (taken from the LW wiki):

Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism -- caring strictly about personally experienced pleasure. An ideal Bayesian agent -- implementing strict Bayesian decision theory -- can have a utility function that ranges over anything, not just internal subjective experiences.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Truly Part of You, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


For what it's worth, I value happiness alone (though not my happiness in particular).

The funny thing is, you probably don't even know what happiness is. Do you not value pleasure, contentment, joy, or satisfaction? None of these might even turn out to be a single thing on closer inspection (like jade, which turned out to be two distinct minerals).

I don't understand.

I don't know exactly what happiness is, but I'm pretty sure it's something like the partial derivative of desires with respect to beliefs, i.e. you're happy if you start wanting what's going on more. Or it might be the dot product of desires and beliefs, i.e. you believe your desires are fulfilled.
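Here is a minimal sketch of that toy model in Python. Everything in it is a hypothetical illustration of the two readings above -- the world-states, the numbers, and the vector encoding are assumptions, not anything from the thread:

```python
import numpy as np

# Hypothetical toy model: three possible world-states.
# desires[i] = how much you want world-state i to obtain
# beliefs[i] = your subjective probability that world-state i obtains
desires = np.array([0.9, 0.1, -0.5])
beliefs = np.array([0.6, 0.3, 0.1])

# "Dot product" reading: happiness as believed desire-fulfillment,
# i.e. the expected desirability of the world under your beliefs.
happiness_dot = float(desires @ beliefs)  # 0.52

# "Derivative" reading: happiness as the change in believed
# desire-fulfillment as beliefs update on new evidence.
updated_beliefs = np.array([0.8, 0.15, 0.05])
happiness_change = float(desires @ (updated_beliefs - beliefs))  # 0.19

print(f"dot-product happiness: {happiness_dot:.2f}")
print(f"happiness from belief update: {happiness_change:.2f}")
```

On this sketch, the "derivative" reading is just the dot-product reading applied to a change in beliefs rather than to the beliefs themselves, which suggests the two proposals are closely related.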

[anonymous]

Are you sure about that? You could be, but let's say I told you that in 5 years you would become demented. This dementia would not make you unhappy; in fact, it would make you slightly happier, and your condition would not make any other person unhappier. A very artificial situation, but still: would you consider it a good thing that you would become demented?

The idea of being demented makes me somewhat unhappy, which could certainly cause me to choose unhappiness over dementia, but that's a statement of my desires, not my moral beliefs. Morally, dementia would be better.

[anonymous]

The idea of being demented makes me somewhat unhappy, which could certainly cause me to choose unhappiness over dementia

If we changed the condition to 10 seconds (instead of 5 years), would that make you choose dementia for sure?

but that's a statement of my desires, not my moral beliefs. Morally, dementia would be better.

By "morally" I assume you mean something you should do? But how did you come to the conclusion that it is moral to choose dementia (happiness), and why do you deem it moral to care about others' happiness? (I sincerely hope my questions are not perceived as utter nonsense.)

If we changed the condition to 10 seconds (instead of 5 years), would that make you choose dementia for sure?

I think so. I'm not certain why.

But how did you come to the conclusion that it is moral to choose dementia (happiness), and why do you deem it moral to care about others' happiness?

It's better because I'm happier. It might be somewhat bad for Present!me (who feels bad making that decision), but I assume Future!me's happiness will make up for that.

and why do you deem it moral to care about others' happiness?

There's nothing moral about caring about others' happiness. It's their happiness itself that is moral. Happiness is good.

So what's your response to the pill question?

I'd take a pill to make me happy. The exact kind of joy is irrelevant.

I prefer to place value on liberty and excellence over happiness. That is: the breadth of options available for individuals to self-determine.

I would rather be twice as capable while half as happy than I would be twice as happy while half as capable.

Excellence?

I suspect a lot of people consider liberty important because they like it. I don't. I very much prefer my choices being made for me. If someone gave me more freedom, I wouldn't like that. Could they really be said to be doing me a favor? Or is it better that way, and the fact that I'm against it doesn't matter?

Excellence?

Excellence as in, "prowess", "capability", "competence", "skillfulness", "strength".

I very much prefer my choices being made for me. [...] If someone gave me more freedom, I wouldn't like that. Could they really be said to be doing me a favor?

Would you agree that while you would prefer to have your choices made for you, you would strongly prefer to have some say in who makes those choices?

I ask this question to reveal that we're focusing on two different things with the notion of "freedom". You associate "freedom" with "range of choices"; I associate "freedom" with "range of outcomes". Normally these are indistinguishable from one another, but there are practical cases where they aren't. For example: a voluntary slave need only make one choice -- who is his master?

[anonymous]

I ask this question to reveal that we're focusing on two different things with the notion of "freedom". You associate "freedom" with "range of choices"; I associate "freedom" with "range of outcomes".

Wow, I don't know if it was your intention, but you just made the most concise and elegant distinction between libertarian free will (outcome) and compatibilist free will (choice). Bravo!

But then I have to ask: by "range of outcomes", do you mean the expected range of outcomes or the genuine range of outcomes (real in the sense that not even Laplace's demon could know the outcome for sure)?

Wow, I don't know if it was your intention, but you just made the most concise and elegant distinction between libertarian free will (outcome) and compatibilist free will (choice). Bravo!

That's rather interesting, since I myself am a compatibilist and a physicalist. My phrasing was not meant to be an argument for libertarian free will over compatibilism/determinism, and in fact defining freedom in terms of a greater range of available outcomes is entirely compatible with, well, compatibilism.

(real in the sense that not even Laplace's demon could know the outcome for sure)?

I do not subscribe to the notion that the universe is wholly deterministic anyhow, so Laplace's demon would simply be too confused... although maybe he knows something we don't.

To answer you more directly, I don't know that there's a material difference between "expected range of outcomes" and "genuine range of outcomes", as I was speaking in the abstract anyhow.

[anonymous]

But then what is the difference between "range of choices" and "expected range of outcomes"?

I'd want it to be someone who makes good choices, since that will make me happier. Other than that, choosing who is just another choice I'd wish to avoid.

I don't want a range of outcomes. I want a good outcome.

Are you trying to figure out what makes me happy, or whether or not I care about freedom on moral grounds? If freedom did make me happy, I'd just talk about a hypothetical person who preferred slavery. I already told you I only find happiness morally relevant.

I don't want a range of outcomes. I want a good outcome.

These are synonymous when we must remain agnostic as to what each individual would select as a "good outcome" for himself or herself.

Are you trying to figure out what makes me happy, or whether or not I care about freedom on moral grounds?

No. My argument is one of practical utility, not of moral virtue. If we universally expand the range of available outcomes, then the number of "good outcomes" increases for each individual, because each individual is more likely to have access to the things he or she actually wants.

If we universally expand the range of available outcomes, then the number of "good outcomes" increases for each individual, because each individual is more likely to have access to the things he or she actually wants.

Are you saying that freedom is an instrumental value, and that we actually agree on terminal values?

Are you saying that freedom is an instrumental value, and that we actually agree on terminal values?

I would be more inclined to say that if you prefer to be happy then you should have the freedom -- the option -- to be happy.

So I don't know that we agree on that -- as I would not prefer to be "happy" (in fact, I worry very much about becoming content and as a result sliding into complacency; I believe dissatisfaction with the now is an integral element of what makes me personally a "worthwhile" human being) -- but I do know that my belief in freedom as currently expressed means that just because I want to be one way does not mean that I am asserting that all people should wind up like me.

Diversity of individual outcomes in order to allow individuals to seek out and obtain their individual preferences (in a manner that does not directly impede the ability of others to do the same) is (or is close to) an intrinsic good.

So, freedom is an instrumental value, but happiness is not the terminal value?

It sounds like your terminal value is preference fulfillment or something to that extent.

So, freedom is an instrumental value, but happiness is not the terminal value?

I'm not sure that the mere fact that something is a terminal value prevents it from also being an instrumental value. Perhaps I might agree with the notion that "maintaining high instrumental value is a terminal value" -- though I haven't really put deep thought into that one. I'll have to consider it.

It sounds like your terminal value is preference fulfillment or something to that extent.

Passively, yes.

Is that a yes?

Edit: Whoops. I didn't notice that you weren't the person I was originally talking to.

The link is irrelevant; it's about instrumental values, and I was talking about terminal values. I'm not sure what Logos01 was talking about, but if it is instrumental values, this isn't so much a debate as a mutual misunderstanding, and not much is relevant.

How are happiness and unhappiness weighed against each other, to become a single value?

Is there a strict boundary between emotions, or a sliding scale among them all?

How are happiness and unhappiness weighed against each other, to become a single value?

I consider unhappiness negative happiness. If you want to do what you're currently doing more, you're happy. The more it makes you want to do it, the happier you are. If it makes you want to do it more by a negative amount, it's negative happiness.
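A one-function formalization of this sign convention, continuing in the same hypothetical spirit as the sketch above (the function name and the numbers are illustrative assumptions):

```python
def happiness(wanting_before: float, wanting_after: float) -> float:
    """Signed happiness: positive if what you're doing makes you want
    to keep doing it more, negative (i.e. unhappiness) if less."""
    return wanting_after - wanting_before

assert happiness(0.4, 0.7) > 0  # happy: wanting increased
assert happiness(0.7, 0.4) < 0  # unhappy: negative happiness
```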

[anonymous]

I'm not sure what I value, and I'm not sure I could reduce all my values to pleasure and pain. Maybe I could, but:

My biggest beef with (psychological) hedonism is that it seems somewhat incoherent if you think that personal identity can be explained in terms of a narrative center of gravity: why should I care about future mes? I'm assuming that you have to act to maximize pleasure minus pain at any given time; you are not allowed to say "well, this gives me more pleasure minus pain in the long run", because if you do, you have sneaked the value "future me is also important"/"maintaining personal identity is important" in through the back door.

While I basically agree with you about hedonism, I think you're being a bit too glib about your reason. Even someone who takes a pill that will give them a minute of ecstasy followed by death is caring about their "future self"; after all, the experience of putting a pill in their mouths is hardly worth pursuing for its own sake. This notion that future-me and present-me are sharply distinguishable requires clearer delineation to be useful.

[anonymous]

But then I don't think going from future-me to, for example, my children is so far a step. (Edit: crossed out "not" before "so".)

Though I have encountered people who hold to "strong psychological hedonism".

-- "But wouldn't complete headwireing, the way you describe it kill the person? No memories no coherent thoughts just bliss-stasis."

-- "Well you only want to maintain the person because in this very moment it gives you pleasure/less pain."