pjeby comments on Raising the Sanity Waterline - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (207)
In order to be unhappy "about" a fact, the fact has to have some meaning... a meaning which can exist only in your map, not the territory, since the fact or its converse have to have some utility -- and the territory doesn't come with utility labels attached.
However, there's another source of possible misunderstanding here: my mental model of the brain includes distinct systems for utility and disutility -- what I usually refer to as the pain brain and gain brain. The gain brain governs approach to things you want, while the pain brain governs avoidance of things you don't want.
In theory, you don't need anything this complex - you could just have a single utility function to squeeze your futures with. But in practice, we have these systems for historical reasons: an animal works differently depending on whether it's chasing something or being chased.
What we call "unhappiness" is not merely the absence of happiness, it's the activation of the "pain-avoidance" system -- a system that's largely superfluous (given our now-greater reasoning capacity) unless you're actually being chased by something.
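The pain-brain / gain-brain split described above can be sketched as a toy model. This is a minimal illustration of the distinction being drawn, not anything from the original comment; all function names and numbers are assumptions made up for the example:

```python
# Toy sketch: a single signed utility function versus separate
# approach/avoidance channels (the "gain brain" and "pain brain").
# Names and values are illustrative assumptions only.

def single_utility(outcome_value: float) -> float:
    """One signed number drives both pursuit and avoidance."""
    return outcome_value

def two_channel(outcome_value: float) -> tuple[float, float]:
    """Separate non-negative channels: gain (approach) and pain (avoidance)."""
    gain = max(outcome_value, 0.0)   # activates only for wanted outcomes
    pain = max(-outcome_value, 0.0)  # activates only for threats
    return gain, pain

# With one channel, "not good" and "bad" sit on the same scale; with two,
# an outcome can score zero on both channels, so the mere absence of gain
# need not activate the pain channel.
print(two_channel(3.0))   # (3.0, 0.0)
print(two_channel(0.0))   # (0.0, 0.0)
print(two_channel(-2.0))  # (0.0, 2.0)
```

The design point the sketch captures is the one the comment makes: "unhappiness" corresponds to the pain channel firing, which is distinct from the gain channel simply reading zero.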
So, from my perspective, it's irrational to maintain any belief that has the effect of activating the pain brain in situations that don't require an urgent, "this is a real emergency" type of response. In all other kinds of situations, pain-brain responses are less useful.
And while the pain brain's characteristics could potentially be life-saving in a truly urgent emergency... they are pretty much life-destroying in all other contexts.
So, while you might have a preference that people not be religious (for example), there is no need for this unmet preference to cause you any actual unhappiness.
In other words, you can be happy about a condition X being met in reality, without also requiring that you be unhappy when condition X is not met.
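That asymmetry can be made concrete with a short sketch (a toy illustration with made-up names and values, not anything from the thread): a preference can be scored so that satisfaction produces positive affect while non-satisfaction merely scores zero, rather than negative.

```python
# Toy model of an asymmetric preference: condition met -> positive affect,
# condition unmet -> neutral, never negative. Names and values are
# illustrative assumptions only.

def symmetric_affect(condition_met: bool) -> float:
    """Unhappiness required whenever the preference goes unmet."""
    return 1.0 if condition_met else -1.0

def asymmetric_affect(condition_met: bool) -> float:
    """Happiness when met, neutrality (not unhappiness) when unmet."""
    return 1.0 if condition_met else 0.0

print(asymmetric_affect(True))   # 1.0
print(asymmetric_affect(False))  # 0.0 -- neutral, not negative
```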
Should I not be unhappy when people die? I know that I could, by altering my thought processes, make myself less unhappy; I know that this unhappiness is not cognitively unavoidable. I choose not to avoid it. The person I aspire to be has conditions for unhappiness and will be unhappy when those conditions are met.
Our society thinks that being unhappy is terribly, terribly sinful. I disagree morally, pragmatically, and furthermore think that this belief leads to a great deal of unhappiness.
(My detailed responses being given in Feeling Rational, Not For the Sake of Happiness Alone, and Serious Stories, and furthermore illustrated in Three Worlds Collide.)
Argh. You keep editing your comments after I've already started my replies. I guess I'll need to wait longer before replying, in future.
Your detailed responses are off-point, though, except for "Serious Stories", in which you suggest that it would be useful to get rid of unnecessary and soul-crushing pain and/or sorrow. My position is that a considerable portion of that unnecessary and soul-crushing stuff can be done away with, merely by rational examination of the emotional source of your beliefs in the relevant context.
Specifically, how do you know who the "person you aspire to be" is? My guess: you aspire to be that person not because of an actual aspiration, but because you are repulsed by the alternative, and the alternative is something you're either afraid you are, or might easily become. (In other words, a 100% standard form of irrationality known as an "ideal-belief-reality conflict".)
What's more, when you examine how you came to believe that, you will find one or more specific emotional experiences... which, upon further consideration, you will find you gave too much weight to, due to their emotional content at the time.
Now, you might not be as eager to examine this set of beliefs as you were to squirt ice water in your ear, but I have a much higher confidence that the result will be more useful to you. ;-)
By "person I aspire to be" I mean that my present self has this property and my present self wants my future self to have this property. I originally wrote "person I define as me" but that seemed like too much of a copout.
Yes, I'm repulsed by imagining the alternative Eliezer who feels no pain when his friends, family, or a stranger in another country dies. It is not clear to me why you feel this is irrational. Nor is it based on any particular emotional experience of mine of having ever been a sociopath.
It seems to me that you are verging here on the failure mode of having psychoanalysis the way that some people have bad breath. If you don't like my arguments, argue otherwise. Just casting strange hints of childhood trauma is... well, it's having psychoanalysis the way some people have bad breath.
So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.
Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?
While EY might not put it this way, the line quoted above answered your question, since Eliezer was making a moral observation. The answer: it is obviously so. Do you have conflicting observational data?
How is it rational to treat a "moral observation" as "obviously so"? That's how religion works, isn't it?
I'm not aware of religions that work that way.
However, that's how observation works.
How is it rational to treat an observation as not obviously so? I'm pretty sure that's inconsistent, if not contradictory.
This discussion is now about meta-ethics, my view on which is summarized in Joy in the Merely Good.
My question is about the implementation of meta-ethics in the human brain. If I were going to write a program to simulate Eliezer Yudkowsky, what rules (other than "be unhappy when others are unhappy") would I need to program in for you to arrive at this "obvious" conclusion?
In my personal experience, the morality that people arrive at by avoiding negative consequences is substantially different than the morality they arrive at by seeking positive ones.
In other words, a person who does good because they will otherwise be a bad person, is not the same as a person who does good because it brings good. Their actions and attitudes differ in substantive ways, besides the second person being happier. For example, the second person is far more likely to actually be generous and warm towards other people -- especially living, present, individual people, rather than "people" as an abstraction.
So which of these two is really the "good" person, from your moral perspective?
(On another level, by the way, I fail to see how contagious, persistent unhappiness is a moral good, since it greatly magnifies the total amount of unhappiness in the universe. But that's a separate issue from the implementation question.)
It seems to me that when you say 'meta-ethics' you simply mean 'ethics'. I don't know why you'd think meta-ethics would need to be implemented in the human brain. Ethics is in the world; meta-ethics doubly so. There's a fact about what's right, just like there's a fact about what's prime. You could ask why we care about what's right, but that's neither an ethical question nor a meta-ethical one. The ethical question is 'what's right?' and the meta-ethical question is 'what makes something a good answer to an ethical question?'. Both of those questions can be answered without reference to humans, though humans are the only reason why anyone would care.
I don't know. Is it useful for you to be unhappy when people die? For how long? How will you know when you've been sufficiently unhappy? What bad thing will happen if you're not unhappy when people die? What good thing happens if you are unhappy?
And I mean these questions specifically: not "what's good about being unhappy in general?" or "what's good about being unhappy when people die, from an evolutionary perspective?", but why do YOU, specifically, think it's a good thing for YOU to be unhappy when some one specific person dies?
My hypothesis: your examination will find that the idea of not being unhappy in this situation is itself provoking unhappiness. That is, you think you should be unhappy when someone dies, because the idea of not being unhappy will make you unhappy also.
The next question to ask will then be what, specifically, you expect to happen in response to that lack of unhappiness, that will cause you to be unhappy.
And at that point, you will discover something interesting: an assumption that you weren't aware of before.
So, if you believe that your unhappiness should match the facts, it would be a good idea to find out what facts your map is based on, because "death => unhappiness" is not labeled on the territory.
Pjeby, I'm unhappy on certain conditions as a terminal value, not because I expect any particular future consequences from it. To say that it is encoded directly into my utility function (not just that certain things are bad, but that I should be a person who feels bad about them) might be oversimplifying in this case, since we are dealing with a structurally complicated aspect of morality. But just as I don't think music is valuable without someone to listen to it, I don't think I'm as valuable if I don't feel bad about people dying.
If I knew a few other things, I think, I could build an AI that would simply act to prevent the death of sentient beings, without feeling the tiniest bit bad about it; but that AI wouldn't be what I think a sentient citizen should be, and so I would try not to make that AI sentient.
It is not my future self who would be unhappy if all his unhappiness were eliminated; it is my current self who would be unhappy on learning that my nature and goals would thus be altered.
Did you read the Fun Theory sequence and the other posts I referred you to? I'm not sure if I'm repeating myself here.
Possibly relevant: A General Theory of Love suggests that love (imprinting?) includes needing the loved one to help regulate basic body systems. It starts with the observation that humans are the only species whose babies die from isolation.
I've read a moderate number of books by Buddhists, and as far as I can tell, while a practice of meditation makes ordinary problems less distressing, it doesn't take the edge off of grief at all. It may even make grief sharper.
Really? How do you know that? What evidence would convince you that your brain is expecting particular future consequences, in order to generate the unhappiness?
I ask because my experience tells me that there are only a handful of "terminal" negative values, and they are human universals; as far as I can tell, it isn't possible for a human being to create their own terminal (negative) values. Instead, they derive intermediate negative values, and then forget how they did the derivation... following which they invent rationalizations that sound a lot like the ones they use to explain why death is a good thing.
Don't you find it interesting that you should defend this "terminal" value so strongly, without actually asking yourself the question, "What really would happen if I were not unhappy in situation X?" (Where situation X is actually specified to a level allowing sensory detail -- not some generic abstraction.)
It's clear from what you've written throughout this thread that the answer to that question is something like, "I would be a bad person." And in my experience, when you then ask something like, "And how did I learn that that would make me bad?", you'll discover specific, emotional memories that provide the only real justification you had for thinking this thought in the first place... and that it has little or no connection to the rationalizations you've attached to it.
You could actually tell me what I fear, and I'd recognize it when I heard it?
What would it take for me to convince you that I'm repulsed by the thing-as-it-is and not its future consequence?
I strongly suspect, then, that you are too good at finding psychological explanations! Conditioned dislike is not the same as conditional dislike. We can train our terminal values, and we can be moved by arguments about them. Now, there may be a humanly universal collection of negative reinforcers, although there is not any reason to expect the collection to be small; but that is not the same thing as a humanly universal collection of terminal values.
I can tell you just exactly what would happen if I weren't unhappy: I would live happily ever afterward. I just don't find that to be the most appealing prospect I can imagine, though one could certainly do worse.
A source listing for the relevant code and data structures in your brain. At the moment, the closest thing I know to that is examining formative experiences, because recontextualizing those experiences is the most rapid way to produce testable change in a human being.
Then we mean different things by "terminal" in this context, since I'm referring here to what comes built-in to a human, versus what is learned by a human. How did you learn that you should have that particular terminal value?
As far as I can tell, that's a "far" answer to a "near" question -- it sounds like the result of processing symbols in response to an abstraction, rather than one that comes from observing the raw output of your brain in response to a concrete question.
In effect, my question is, what reinforcer shapes/shaped you to believe that it would be bad to live happily ever after?
(Btw, I don't claim that happily-ever-after is possible -- I just claim that it's possible and practical to reduce one's unhappiness by pruning one's negative values to those actually required to deal with urgent threats, rather than allowing them to be triggered by chronic conditions. I don't even expect that I won't grieve people important to me... but I also expect to get over it, as quickly as is practical for me to do so.)
I agree with your reasoning, but I think there are plenty of reasons to be unhappy about religion that go beyond the absence of a preferred state.
In other words, I think I should be actively displeased that religion exists and is prevalent, not merely non-happy. Neutrality is included in non-happiness, and, if the word were used logically, in unhappiness. But the way the word is actually used, "unhappy" means active displeasure.
How is this active displeasure useful to you? Does it cause you to do something different than if you merely prefer religion to not be present? What, specifically?