Alex_Altair comments on Meditation, insight, and rationality. (Part 1 of 3) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I did not communicate what I meant to say very well. I'll try again.
I view my utility function as a mathematical representation of all of my own preferences. My working definition of "preferences" is: conditions that, if satisfied by the universe, will cause me to feel good about the universe's state, and, if unsatisfied by the universe, will cause me to feel bad about the universe's state. When I talk about "feeling good" and "feeling bad" in this context, I'm trying to refer to whatever motivation it is that causes us to try to maximize what we call "utility". I don't know a good way in English to differentiate between the emotion that is displayed, for instance, when a person is self-flagellating, and the emotion that causes someone to try to take down a corrupt ruler.
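To make the "utility function as a representation of preferences" idea concrete, here is a minimal toy sketch. All names and weights are hypothetical illustrations, not a real model of anyone's values: each preference is a predicate over a world-state, and a satisfied preference adds its weight to utility while an unsatisfied one subtracts it.

```python
# Toy sketch: a utility function built from a list of preferences.
# Each preference is a (predicate, weight) pair over a world-state.

def make_utility(preferences):
    """Return a utility function: satisfied preferences add their
    weight, unsatisfied ones subtract it."""
    def utility(state):
        return sum(w if pred(state) else -w for pred, w in preferences)
    return utility

# Hypothetical preferences over a dict-shaped world-state.
prefs = [
    (lambda s: s["country_x_at_peace"], 10.0),   # the "morality" term
    (lambda s: not s["i_am_miserable"], 1.0),    # my own well-being
]
u = make_utility(prefs)

war_state   = {"country_x_at_peace": False, "i_am_miserable": False}
peace_state = {"country_x_at_peace": True,  "i_am_miserable": False}
# u(peace_state) is higher than u(war_state), so an agent maximizing
# u prefers the peaceful world without needing to be miserable itself.
```

Note that the agent's own misery carries no positive weight here, which matches the point made below: the negative "feeling" is the motivation term, not something to be maximized.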
If I learn that some dictator ruling over some other country is torturing and killing that country's people, my internal stream of consciousness may register the statement, "That is not acceptable. What should I do to try to improve the situation of the people in that country?" That is a negative "feeling" produced by the set of preferences that I label "morality". I do not particularly want the parts of my brain that make me moral to vanish. I do not want to self modify in such a way that I will genuinely have no preference between a world where the leader of country X is committing war crimes, and a world where country X is at peace and the population is generally well off.
Should I mope around and feel terrible because the citizens of country X are being tortured? Of course not. That's unhelpful. I do not, in fact, have a positive term in my utility function for my own misery, as my earlier post, now that I've reread it, seems to imply. Rather, I have a positive term in my utility function for whatever it is that doesn't make a person a sociopath, and that was what I was trying to talk about.
I think the terminology I use for what you're talking about is simply "desire". Desire is definitely separate from, but related to, the emotions that motivate. I think failure to separate these concepts is responsible for some stereotypes of rationality (see Dr. Manhattan). So, while controlling emotions is helpful, changing my desires is effectively changing my utility function. This can get a little complicated in some areas such as procrastination, but in general I want to keep them.
This is one of the things that turns me off the most from Buddhism. I'm interested in meditation and deep introspection, but the Four Noble Truths at the heart of Buddhism start out as:
1) All life is suffering.
2) The cause of suffering is craving.
3) Therefore we should stop craving.
If "craving" means desire, then this is horrible. But if it means something else, then I'm interested.
I agree with you there. The problem is, "desire" is not very different from "preference", and I think that those thingies are inextricably bound up in emotions. If you purge emotions, I think your desires would go away too, which would probably make you indistinguishable from a computer in standby mode.
This sounds like a theory that could use testing.
I think it has, in effect, with aboulia: http://en.wikipedia.org/wiki/Aboulia
"Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them..."
There are a couple of bits of that description that I find interesting, in this context:
As opposed to 'lack of emotional responsiveness' or 'lack of emotions' - in other words, I suspect that the people in question are experiencing emotions, but don't feel any drive to communicate that fact.
This is less clear, but again it reads to me as saying that aboulia is not related to a lack of emotions or emotional awareness. I also note that anhedonia isn't mentioned at all in relation to it.
Minds, as we know them, are engines of optimization. They try to twist reality into a shape that we want. Imagine trying to program a computer without having a goal for the program. I think you're going to run into some challenges.
We're not in disagreement about that. But your assumption that emotions are necessary for goals to be formed is still an untested one.
There's a relevant factoid that's come up here on LW a few times before: Apparently, people with significant brain damage to their emotional centers are unable to make choices between functionally near-identical things, such as different kinds of breakfast cereal. But, interestingly, they get stuck when trying to make those choices - implying that they do attempt to e.g. acquire cereal in the first place; they're not just lying in a bed somewhere staring at the ceiling, and they don't immediately give up the quest to acquire food as unimportant when they encounter a problem.
It would be interesting to know the events that lead up to the presented situation; it would be interesting to know whether people with that kind of brain damage initiate grocery-shopping trips, for example. But even if they don't - even if the grocery trip is the result of being presented with a fairly specific list, and they do otherwise basically sit around - it seems to at least partially disprove your 'standby mode' theory, which would seem to predict that they'd just sit around even when presented with a grocery list and a request to get some shopping done.
But isn't being presented with a to-do list, or alternatively feeling hungry and then finding food, different from 'forming goals'?
To be more precise: maybe the 'survival instinct' that leads them to seek food is not located in their emotional centers, so some goals might survive regardless. But yes, the assumption is untested AFAIK.
I don't think so, but that sounds like a question of semantics to me. If you want to use a definition of 'form goals' that doesn't include 'acquire food when hungry', it's up to you to draw a coherent dividing line for it, and then we can figure out if it's relevant here.