pjeby comments on Raising the Sanity Waterline - Less Wrong

112 Post author: Eliezer_Yudkowsky 12 March 2009 04:28AM


Comment author: pjeby 12 March 2009 07:10:03PM -1 points [-]

I don't understand your question. Are you trying to make a case for unhappiness being useful, or supporting the idea that unhappiness is not useful?

Comment author: Eliezer_Yudkowsky 12 March 2009 07:32:57PM 5 points [-]

I'm saying that if you're going to be unhappy about anything - a position I do currently lean toward, albeit with strong reservations - then you should be unhappy about facts.

Comment author: Vladimir_Nesov 12 March 2009 11:51:03PM 2 points [-]

Sometimes the important facts you worry about are counterfactual. That, after all, is what happens when you decide: you determine the real decision by comparing it to your model of its unreal alternative.

Comment author: pjeby 12 March 2009 08:07:49PM 2 points [-]

In order to be unhappy "about" a fact, the fact has to have some meaning... a meaning which can exist only in your map, not the territory, since the fact or its converse has to have some utility -- and the territory doesn't come with utility labels attached.

However, there's another source of possible misunderstanding here: my mental model of the brain includes distinct systems for utility and disutility -- what I usually refer to as the pain brain and gain brain. The gain brain governs approach to things you want, while the pain brain governs avoidance of things you don't want.

In theory, you don't need anything this complex - you could just have a single utility function to squeeze your futures with. But in practice, we have these systems for historical reasons: an animal works differently depending on whether it's chasing something or being chased.

What we call "unhappiness" is not merely the absence of happiness, it's the activation of the "pain-avoidance" system -- a system that's largely superfluous (given our now-greater reasoning capacity) unless you're actually being chased by something.
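The contrast pjeby draws -- a single signed utility scale versus separate approach and avoidance channels, where the avoidance channel is only needed for genuine emergencies -- can be made concrete with a toy sketch. All names and numbers here are invented for illustration; this is not a claim about actual neural architecture.

```python
def single_utility(outcome_value: float) -> float:
    """The 'in theory' version: one scale, where negative
    outcomes are simply 'less good' than positive ones."""
    return outcome_value


def two_system_response(outcome_value: float, urgent: bool) -> dict:
    """The 'in practice' version: separate 'gain brain' (approach)
    and 'pain brain' (avoidance) channels. In this sketch, the pain
    channel fires only for urgent threats; otherwise an unmet
    preference registers as absent gain, not as active pain."""
    gain = max(outcome_value, 0.0)
    pain = max(-outcome_value, 0.0) if urgent else 0.0
    return {"gain": gain, "pain": pain}


# An unmet preference (value -5) with no emergency: no pain response.
print(two_system_response(-5.0, urgent=False))  # {'gain': 0.0, 'pain': 0.0}
# The same outcome during a genuine emergency activates avoidance.
print(two_system_response(-5.0, urgent=True))   # {'gain': 0.0, 'pain': 5.0}
```

The asymmetry in the second function is the point of the paragraphs above: under this model you can be happy when a preference is met without being obligated to suffer when it is not.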

So, from my perspective, it's irrational to maintain any belief that has the effect of activating the pain brain in situations that don't require an urgent, "this is a real emergency" type of response. In all other kinds of situations, pain-brain responses are less useful because they are:

  • more emotional
  • more urgent and stressful
  • less given to deep thinking
  • less creative and less willing to explore options
  • less inclined to take risks

And while these characteristics could potentially be life-saving in a truly urgent emergency... they are pretty much life-destroying in all other contexts.

So, while you might have a preference that people not be religious (for example), there is no need for that preference's not being met to cause you any actual unhappiness.

In other words, you can be happy about a condition X being met in reality, without also requiring that you be unhappy when condition X is not met.

Comment author: Eliezer_Yudkowsky 12 March 2009 08:18:47PM 11 points [-]

Should I not be unhappy when people die? I know that I could, by altering my thought processes, make myself less unhappy; I know that this unhappiness is not cognitively unavoidable. I choose not to avoid it. The person I aspire to be has conditions for unhappiness and will be unhappy when those conditions are met.

Our society thinks that being unhappy is terribly, terribly sinful. I disagree morally, pragmatically, and furthermore think that this belief leads to a great deal of unhappiness.

(My detailed responses being given in Feeling Rational, Not For the Sake of Happiness Alone, and Serious Stories, and furthermore illustrated in Three Worlds Collide.)

Comment author: pjeby 12 March 2009 08:51:56PM 0 points [-]

Argh. You keep editing your comments after I've already started my replies. I guess I'll need to wait longer before replying, in future.

Your detailed responses are off-point, though, except for "Serious Stories", in which you suggest that it would be useful to get rid of unnecessary and soul-crushing pain and/or sorrow. My position is that a considerable portion of that unnecessary and soul-crushing stuff can be done away with, merely by rational examination of the emotional source of your beliefs in the relevant context.

Specifically, how do you know what "person you aspire to be"? My guess: you aspire to be that person not because of an actual aspiration, but because you are repulsed by the alternative -- an alternative you're either afraid you are, or afraid you might easily become. (In other words, a 100% standard form of irrationality known as an "ideal-belief-reality conflict".)

What's more, when you examine how you came to believe that, you will find one or more specific emotional experiences... which, upon further consideration, you will find you gave too much weight to, due to their emotional content at the time.

Now, you might not be as eager to examine this set of beliefs as you were to squirt ice water in your ear, but I have a much higher confidence that the result will be more useful to you. ;-)

Comment author: Eliezer_Yudkowsky 12 March 2009 09:02:51PM 7 points [-]

By "person I aspire to be" I mean that my present self has this property and my present self wants my future self to have this property. I originally wrote "person I define as me" but that seemed like too much of a copout.

Yes, I'm repulsed by imagining the alternative Eliezer who feels no pain when his friends, family, or a stranger in another country dies. It is not clear to me why you feel this is irrational. Nor is it based on any particular emotional experience of mine of having ever been a sociopath.

It seems to me that you are verging here on the failure mode of having psychoanalysis the way that some people have bad breath. If you don't like my arguments, argue otherwise. Just casting strange hints of childhood trauma is... well, it's having psychoanalysis the way some people have bad breath.

So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.

Comment author: pjeby 12 March 2009 09:16:32PM 0 points [-]

Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?

Comment author: thomblake 12 March 2009 09:22:16PM 1 point [-]

While EY might not put it this way, this line:

So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.

answered your question

Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?

since Eliezer was making a moral observation. The answer: It is obviously so. Do you have conflicting observational data?

Comment author: pjeby 12 March 2009 09:27:40PM -1 points [-]

How is it rational to treat a "moral observation" as "obviously so"? That's how religion works, isn't it?

Comment author: Eliezer_Yudkowsky 12 March 2009 09:30:10PM 2 points [-]

This discussion is now about

 NATURALISTIC METAETHICS

my view on which is summarized in Joy in the Merely Good.

Comment author: thomblake 12 March 2009 09:30:12PM 1 point [-]

I'm not aware of religions that work that way.

However, that's how observation works.

How is it rational to treat an observation as not obviously so? I'm pretty sure that's inconsistent, if not contradictory.

Comment author: pjeby 12 March 2009 08:36:26PM 0 points [-]

I don't know. Is it useful for you to be unhappy when people die? For how long? How will you know when you've been sufficiently unhappy? What bad thing will happen if you're not unhappy when people die? What good thing happens if you are unhappy?

And I mean these questions specifically: not "what's good about being unhappy in general?" or "what's good about being unhappy when people die, from an evolutionary perspective?", but why do YOU, specifically, think it's a good thing for YOU to be unhappy when one specific person dies?

My hypothesis: your examination will find that the idea of not being unhappy in this situation is itself provoking unhappiness. That is, you think you should be unhappy when someone dies, because the idea of not being unhappy will make you unhappy also.

The next question to ask will then be what, specifically, you expect to happen in response to that lack of unhappiness, that will cause you to be unhappy.

And at that point, you will discover something interesting: an assumption that you weren't aware of before.

So, if you believe that your unhappiness should match the facts, it would be a good idea to find out what facts your map is based on, because "death => unhappiness" is not labeled on the territory.

Comment author: Eliezer_Yudkowsky 12 March 2009 08:45:32PM 4 points [-]

Pjeby, I'm unhappy on certain conditions as a terminal value, not because I expect any particular future consequences from it. To say that it is encoded directly into my utility function (not just that certain things are bad, but that I should be a person who feels bad about them) might be oversimplifying in this case, since we are dealing with a structurally complicated aspect of morality. But just as I don't think music is valuable without someone to listen to it, I don't think I'm as valuable if I don't feel bad about people dying.

If I knew a few other things, I think, I could build an AI that would simply act to prevent the death of sentient beings, without feeling the tiniest bit bad about it; but that AI wouldn't be what I think a sentient citizen should be, and so I would try not to make that AI sentient.

It is not my future self who would be unhappy if all his unhappiness were eliminated; it is my current self who would be unhappy on learning that my nature and goals would thus be altered.

Did you read the Fun Theory sequence and the other posts I referred you to? I'm not sure if I'm repeating myself here.

Comment author: NancyLebovitz 24 March 2010 05:50:15AM 0 points [-]

Possibly relevant: A General Theory of Love suggests that love (imprinting?) includes needing the loved one to help regulate basic body systems. It starts with the observation that humans are the only species whose babies die from isolation.

I've read a moderate number of books by Buddhists, and as far as I can tell, while a practice of meditation makes ordinary problems less distressing, it doesn't take the edge off of grief at all. It may even make grief sharper.

Comment author: pjeby 12 March 2009 09:14:45PM 0 points [-]

I'm unhappy on certain conditions as a terminal value, not because I expect any particular future consequences from it.

Really? How do you know that? What evidence would convince you that your brain is expecting particular future consequences, in order to generate the unhappiness?

I ask because my experience tells me that there are only a handful of "terminal" negative values, and they are human universals; as far as I can tell, it isn't possible for a human being to create their own terminal (negative) values. Instead, they derive intermediate negative values, and then forget how they did the derivation... following which they invent rationalizations that sound a lot like the ones they use to explain why death is a good thing.

Don't you find it interesting that you should defend this "terminal" value so strongly, without actually asking yourself the question, "What really would happen if I were not unhappy in situation X?" (Where situation X is actually specified to a level allowing sensory detail -- not some generic abstraction.)

It's clear from what you've written throughout this thread that the answer to that question is something like, "I would be a bad person." And in my experience, when you then ask something like, "And how did I learn that that would make me bad?", you'll discover specific, emotional memories that provide the only real justification you had for thinking this thought in the first place... and that it has little or no connection to the rationalizations you've attached to it.

Comment author: Eliezer_Yudkowsky 12 March 2009 09:25:59PM 3 points [-]

Really? How do you know that? What evidence would convince you that your brain is expecting particular future consequences, in order to generate the unhappiness?

You could actually tell me what I fear, and I'd recognize it when I heard it?

What would it take for me to convince you that I'm repulsed by the thing-as-it-is and not its future consequence?

I ask because my experience tells me that there are only a handful of "terminal" negative values

I strongly suspect, then, that you are too good at finding psychological explanations! Conditioned dislike is not the same as conditional dislike. We can train our terminal values, and we can be moved by arguments about them. Now, there may be a humanly universal collection of negative reinforcers, although there is not any reason to expect the collection to be small; but that is not the same thing as a humanly universal collection of terminal values.

I can tell you just exactly what would happen if I weren't unhappy: I would live happily ever afterward. I just don't find that to be the most appealing prospect I can imagine, though one could certainly do worse.

Comment author: pjeby 12 March 2009 10:08:24PM 0 points [-]

What would it take for me to convince you that I'm repulsed by the thing-as-it-is and not its future consequence?

A source listing for the relevant code and data structures in your brain. At the moment, the closest thing I know to that is examining formative experiences, because recontextualizing those experiences is the most rapid way to produce testable change in a human being.

We can train our terminal values, and we can be moved by arguments about them.

Then we mean different things by "terminal" in this context, since I'm referring here to what comes built-in to a human, versus what is learned by a human. How did you learn that you should have that particular terminal value?

I can tell you just exactly what would happen if I weren't unhappy: I would live happily ever afterward.

As far as I can tell, that's a "far" answer to a "near" question -- it sounds like the result of processing symbols in response to an abstraction, rather than one that comes from observing the raw output of your brain in response to a concrete question.

In effect, my question is, what reinforcer shapes/shaped you to believe that it would be bad to live happily ever after?

(Btw, I don't claim that happily-ever-after is possible -- I just claim that it's possible and practical to reduce one's unhappiness by pruning one's negative values to those actually required to deal with urgent threats, rather than allowing them to be triggered by chronic conditions. I don't even expect that I won't grieve people important to me... but I also expect to get over it, as quickly as is practical for me to do so.)

Comment author: Annoyance 12 March 2009 08:13:20PM 3 points [-]

I agree with your reasoning, but I think there are plenty of reasons to be unhappy about religion that go beyond the absence of a preferred state.

In other words, I think I should be actively displeased that religion exists and is prevalent, not merely non-happy. Neutrality is included in non-happiness -- and, if the word were used logically, in unhappiness. But as the word is actually used, "unhappy" means active displeasure.

Comment author: pjeby 12 March 2009 08:37:28PM 0 points [-]

How is this active displeasure useful to you? Does it cause you to do something different than if you merely prefer religion to not be present? What, specifically?