Grognor comments on Not for the Sake of Happiness (Alone) - Less Wrong

Post author: Eliezer_Yudkowsky 22 November 2007 03:19AM

Comment author: Grognor 29 September 2011 03:42:22AM 5 points

In the agonizing process of reading all of Yudkowsky's Less Wrong articles, this is the first one I have had any disagreement with whatsoever.

This is coming from a person who was actually convinced by the biased and obsolete 1997 singularity essay by Yudkowsky.

Only, it's not so much a disagreement as it is a value differential. I don't care about the processes by which one achieves happiness. The end results are what matter, and I'll be damned if I accept having one less hedon or one less utilon out there because of a perceived value in working toward them rather than automatically gaining them. It sounds to me like expecting victims of depression to work through it and experience the joy of overcoming depression, instead of, say, our hypothetical pill that just cures their depression. It is a sadness that nothing like that exists.

At the risk of (further) lowering my own status, I'll also say that I really really really do wish the "do anything" Star Trek Holodecks were here. Now, it might matter to me that simulated oral sex is not from a real person who made that decision on her evolution-based human terms, but that is another matter of utilons.

Edited to add: perhaps worth noting is that I would have accepted the deal given by the Superhappies in Three Worlds Collide, though I might have tried to argue that the "having humans eat babies as well" thing is not necessary, even knowing I probably would not succeed.

Comment author: DSimon 25 October 2011 02:01:21AM 3 points

Since you're differentiating utilons from hedons, doesn't that kind of follow the thrust of the article? That is, the point that the OP is arguing against is that utilons are ultimately the same thing as hedons; that all people really want is to be happy and that everything else is an instrumental value towards that end.

Your example of the perfect anti-depressant is, I think, somewhat misleading; the worry when it comes to wireheading is that you'll maximize hedons to the exclusion of all other types of utilon. Curing depression is awesome not only because it increases net hedons, but also because depression makes it hard to accomplish anything at all, even stuff that's about whole other types of utilons.

Comment author: Grognor 25 October 2011 04:46:05AM 1 point

The subject is too complicated to go into in this comment thread, and it is discussed in much greater detail elsewhere, so I'll just bring up two things.

1) In the last month I've been thinking pretty darned carefully, and I am now really, really unsure whether I'd accept the Superhappies' deal; frankly, I'm glad I'll never have to make that choice.

2) Some of my own desires are bad, and if I could take a pill that completely eliminated those desires, I would. The idea that what humanity wants right now is what it really wants is definitely not certain; it is about as uncertain as uncertainties get. So the real question is: why does our utility function act the way it does? There was no purpose behind it, and if we can agree on a way to change it, we should change it, even if that means "other types of utilon" go extinct.

Comment author: DSimon 25 October 2011 01:59:03PM 0 points

"The idea that what humanity wants right now is what it really wants is definitely not certain"

Strongly agreed! But that's why the gloss for CEV talks about stuff like what we would ideally want if we were smarter and knew more.

Comment author: momothefiddler 04 May 2012 09:32:22PM 1 point

The basic point of the article seems to be "Not all utilons are (reducible to) hedons", which confuses me from the start. If happiness is not a generic term for "perception of a utilon-positive outcome", what is it? I don't think all utilons can be reduced to hedons, but that's only because I see no difference between the two. I honestly don't comprehend the difference between "State A makes me happier than state B" and "I value state A more than state B". If hedons aren't exactly equivalent to utilons, what are they?

An example might help: I was arguing with a classmate of mine recently. My claim was that every choice he made boiled down to picking the option that made him happiest. Looking back on it, I meant to say it was the option whose anticipation gave him the most happiness, since making choices based on the actual result of those choices would break causality. Anyway, he argued that his choices were not based on happiness. He put forth the example that, while he didn't enjoy his job, he still went because he needed to support his son. My response was that while his reaction to his job as an isolated experience was negative, his happiness from {job + son eating} was more than his happiness from {no job + son starving}.

I thought at the time that we were disagreeing about basic motivations, but this article and its responses have caused me to wonder if, perhaps, I don't use the word 'happiness' in the standard sense.

To give a hyperbolic thought exercise: if I could choose between all existing minds (except mine, to make the point about relative values) experiencing intense agony for a year and my own death, I think I'd be likely to choose my death. This is not because I expect to experience happiness after death, but because considering the state of the universe in the second scenario brings me more happiness than considering the state of the universe in the first. As far as I can tell, this is exactly what it means to place a higher value on the relative pleasure and continuing functionality of all-but-one mind than on my own continued existence.

To anyone who argues that utilons aren't exactly equivalent to hedons (either that utilons aren't hedons at all, or that utilons are merely reducible to hedons): please explain to me what you think happiness is. (My sudden realisation that you exist allows me to realise that you seem amazingly common.)

Comment author: DSimon 06 May 2012 12:31:28AM 0 points

Consider the following two world states:

  1. A person important to you dies.
  2. They don't die, but you are given a brain modification that makes it seem to you as though they had.

The hedonic scores for 1 and 2 are identical, but 2 has more utilons if you value your friend's life.
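
To make that asymmetry concrete, here is a minimal sketch (in Python; the function names and numeric scores are illustrative assumptions, not anything from this thread):

    # Toy model of the two world states above. Names and numbers are
    # purely illustrative.

    def hedons(believes_friend_alive):
        # Happiness depends only on what the agent perceives (the map).
        return 10 if believes_friend_alive else 0

    def utilons(friend_alive):
        # Value depends on what is actually true (the territory),
        # assuming the agent values the friend's life.
        return 10 if friend_alive else 0

    # State 1: the friend dies, and you believe it.
    h1, u1 = hedons(False), utilons(False)

    # State 2: the friend lives, but a brain modification makes you
    # believe they died.
    h2, u2 = hedons(False), utilons(True)

    assert h1 == h2  # the hedonic scores are identical...
    assert u2 > u1   # ...but state 2 still has more utilons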

Comment author: momothefiddler 06 May 2012 01:19:02AM -1 points

The hedonic scores are identical and, as far as I can tell, the outcomes are identical. The only difference is if I know about the difference - if, for instance, I'm given a choice between the two. At that point, my consideration of 2 has more hedons than my consideration of 1. Is that different from saying 2 has more utilons than 1?

Is the distinction perhaps that hedons are about now while utilons are overall?

Comment author: TheOtherDave 06 May 2012 02:05:13AM 1 point

Talking about "utilons" and "hedons" implies that there exists some X such that, by my standards, the world is better with more X in it, whether I am aware of X or not.

Given that assumption, it follows that if you add X to the world in such a way that I don't interact with it at all, it makes the world better by my standards, but it doesn't make me happier. One way of expressing that is that X produces utilons but not hedons.

Comment author: momothefiddler 06 May 2012 02:21:15AM 1 point

I would not have considered utilons to have meaning without my ability to compare them in my utility function.

You're saying utilons can be generated without your knowledge, but hedons cannot? Does that mean utilons are a measure of reality's conformance to your utility function, while hedons are your reaction to your perception of reality's conformance to your utility function?

Comment author: TheOtherDave 06 May 2012 03:20:32AM 0 points

I'm saying that something can make the world better without affecting me, but nothing can make me happier without affecting me. That suggests to me that the set of things that can make the world better is different from the set of things that can make me happy, even if they overlap significantly.

Comment author: momothefiddler 06 May 2012 03:26:29AM 0 points

That makes sense. I had only looked at the difference within "things that affect my choices", which is not a full representation of things. Could I reasonably say, then, that hedons are the intersection of "utilons" and "things of which I'm aware", or is there more to it?

Another way of phrasing what I think you're saying: "Utilons are where the utility function intersects with the territory, hedons are where the utility function intersects with the map."
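
One hypothetical way to cash out that phrasing, as a sketch (U, perceive, and the state dictionary are illustrative labels of my own, not anything from the thread): apply the same utility function to the territory to get utilons, and to the agent's map of it to get hedons.

    # Sketch of the map/territory phrasing above; purely illustrative.

    def utilons(U, territory):
        # The utility function applied to the actual world state.
        return U(territory)

    def hedons(U, perceive, territory):
        # The same utility function applied to the agent's map of it.
        return U(perceive(territory))

    # Example: an agent who values a friend being alive, but whose
    # perception has been modified to believe otherwise.
    U = lambda state: 10 if state["friend_alive"] else 0
    perceive = lambda state: {**state, "friend_alive": False}

    territory = {"friend_alive": True}
    print(utilons(U, territory))           # 10: the world is fine
    print(hedons(U, perceive, territory))  # 0: the agent isn't happy

On this reading, the two coincide exactly when the map matches the territory in every respect the utility function cares about.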

Comment author: TheOtherDave 06 May 2012 03:30:34AM 1 point

I'm not sure how "hedons" interact with "utilons".
I'm not saying anything at all about how they interact.
I'm merely saying that they aren't the same thing.

Comment author: notsonewuser 02 October 2013 07:59:04PM 0 points

I don't have any objection to you wireheading yourself. I do object to someone forcibly wireheading me.