pjeby comments on Raising the Sanity Waterline - Less Wrong

Post author: Eliezer_Yudkowsky 12 March 2009 04:28AM (112 points)


Comment author: Eliezer_Yudkowsky 12 March 2009 08:18:47PM 11 points

Should I not be unhappy when people die? I know that I could, by altering my thought processes, make myself less unhappy; I know that this unhappiness is not cognitively unavoidable. I choose not to avoid it. The person I aspire to be has conditions for unhappiness and will be unhappy when those conditions are met.

Our society thinks that being unhappy is terribly, terribly sinful. I disagree both morally and pragmatically, and furthermore think that this belief itself leads to a great deal of unhappiness.

(My detailed responses are given in Feeling Rational, Not For the Sake of Happiness Alone, and Serious Stories, and furthermore illustrated in Three Worlds Collide.)

Comment author: pjeby 12 March 2009 08:51:56PM 0 points

Argh. You keep editing your comments after I've already started my replies. I guess I'll need to wait longer before replying, in the future.

Your detailed responses are off-point, though, except for "Serious Stories", in which you suggest that it would be useful to get rid of unnecessary and soul-crushing pain and/or sorrow. My position is that a considerable portion of that unnecessary and soul-crushing stuff can be done away with, merely by rational examination of the emotional source of your beliefs in the relevant context.

Specifically, how do you know which "person you aspire to be"? My guess: you aspire to be that person not because of an actual aspiration, but because you are repulsed by the alternative, and the alternative is something you're either afraid you are, or might easily become. (In other words, a 100% standard form of irrationality known as an "ideal-belief-reality conflict".)

What's more, when you examine how you came to believe that, you will find one or more specific emotional experiences... which, upon further consideration, you will find you gave too much weight to, due to their emotional content at the time.

Now, you might not be as eager to examine this set of beliefs as you were to squirt ice water in your ear, but I have a much higher confidence that the result will be more useful to you. ;-)

Comment author: Eliezer_Yudkowsky 12 March 2009 09:02:51PM 7 points

By "person I aspire to be" I mean that my present self has this property and my present self wants my future self to have this property. I originally wrote "person I define as me" but that seemed like too much of a copout.

Yes, I'm repulsed by imagining the alternative Eliezer who feels no pain when his friends, family, or a stranger in another country dies. It is not clear to me why you feel this is irrational. Nor is it based on any particular emotional experience of ever having been a sociopath.

It seems to me that you are verging here on the failure mode of having psychoanalysis the way that some people have bad breath. If you don't like my arguments, argue otherwise. Just casting strange hints of childhood trauma is... well, it's having psychoanalysis the way some people have bad breath.

So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.

Comment author: pjeby 12 March 2009 09:16:32PM 0 points

Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?

Comment author: thomblake 12 March 2009 09:22:16PM 1 point

While EY might not put it this way, this line:

    So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.

answered your question

    Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?

since Eliezer was making a moral observation. The answer: It is obviously so. Do you have conflicting observational data?

Comment author: pjeby 12 March 2009 09:27:40PM -1 points

How is it rational to treat a "moral observation" as "obviously so"? That's how religion works, isn't it?

Comment author: Eliezer_Yudkowsky 12 March 2009 09:30:10PM 2 points

This discussion is now about

 NATURALISTIC METAETHICS

my view on which is summarized in Joy in the Merely Good.

Comment author: pjeby 12 March 2009 09:52:21PM 2 points

My question is about the implementation of meta-ethics in the human brain. If I were going to write a program to simulate Eliezer Yudkowsky, what rules (other than "be unhappy when others are unhappy") would I need to program in for you to arrive at this "obvious" conclusion?

In my personal experience, the morality that people arrive at by avoiding negative consequences is substantially different from the morality they arrive at by seeking positive ones.

In other words, a person who does good because they will otherwise be a bad person is not the same as a person who does good because it brings good. Their actions and attitudes differ in substantive ways, beyond the second person's being happier. For example, the second person is far more likely to actually be generous and warm towards other people -- especially living, present, individual people, rather than "people" as an abstraction.

So which of these two is really the "good" person, from your moral perspective?

(On another level, by the way, I fail to see how contagious, persistent unhappiness is a moral good, since it greatly magnifies the total amount of unhappiness in the universe. But that's a separate issue from the implementation question.)

Comment author: thomblake 12 March 2009 10:02:02PM 1 point

It seems to me that when you say 'meta-ethics' you simply mean 'ethics'. I don't know why you'd think meta-ethics would need to be implemented in the human brain. Ethics is in the world; meta-ethics doubly so. There's a fact about what's right, just like there's a fact about what's prime. You could ask why we care about what's right, but that's neither an ethical question nor a meta-ethical one. The ethical question is 'what's right?' and the meta-ethical question is 'what makes something a good answer to an ethical question?'. Both of those questions can be answered without reference to humans, though humans are the only reason why anyone would care.

Comment author: pjeby 12 March 2009 10:28:07PM 0 points

Unless Eliezer has some supernatural entity to do his thinking for him, his ethics and meta-ethics require some physical implementation. Where else are you proposing that he store and process them, besides physical reality?

Comment author: thomblake 12 March 2009 09:30:12PM 1 point

I'm not aware of religions that work that way.

However, that's how observation works.

How is it rational to treat an observation as not obviously so? I'm pretty sure that's inconsistent, if not contradictory.