Our actions reveal what we actually want, not what we believe we want or believe we should want. No one chooses against their own judgment. What we do is choose against our understanding of our own judgment, and that is a far subtler matter.
More like, we have multiple parts of the brain which can reach different judgments, and rational arguments act on the wrong part. I think this is what you were getting at (and your subsequent examples seem to support that), but it should be explicit.
I think the rational (mostly linguistic) parts of our brain can influence decisions made by other parts, if we're smart about it. The main trap seems to be that when the conscious and subconscious parts of our brain disagree, we may decide that we didn't will hard enough. So we try to "will harder", which to the linguistic part of our brain means sticking the word "very" in front of everything and generating a bunch of negative self views, which has the opposite of the desired effect on our subconscious.
jimrandomh said:
More like, we have multiple parts of the brain which can reach different judgments, and rational arguments act on the wrong part. I think this is what you were getting at (and your subsequent examples seem to support that), but it should be explicit.
Exactly. I think the notion of modularity from evolutionary psychology would help in understanding some types of akrasia. While consciousness is probably not a complete bystander, as Annoyance notes, it is merely one of the mental modules in the brain. This hypothesis explains Annoyance's observation that there may be factors in our judgment that we don't understand.
If we act against what we say we want, it may not mean that we "didn't really want it," but that one part of brain wanted it while another part didn't, and the second part won.
"Want" is not always a unitary phenomenon inside the brain; neither is "judgment."
Please note: I said consciousness is ALMOST without influence. It's not completely so. The problem is that it attributes nearly everything we do to itself, instead of just the few bits it actually contributes.
I think the rational (mostly linguistic) parts of our brain can influence decisions made by other parts, if we're smart about it. The main trap seems to be that when the conscious and subconscious parts of our brain disagree, we may decide that we didn't will hard enough. So we try to "will harder", which to the linguistic part of our brain means sticking the word "very" in front of everything and generating a bunch of negative self views, which has the opposite of the desired effect on our subconscious.
IAWY; it's either the opposite effect, or just no effect. You could also think of this as being a case of "the leader not listening to subordinates", in that the "try harder" mode is ignoring whatever the actual problem is -- i.e., the subconscious goal or prediction that's interfering. In my experience it's much more important to teach people to be able to listen to themselves (i.e., become aware of what they already believe/expect/desire) than to talk to themselves (i.e., push new information in).
Every time I try listening to myself, my subconscious invents some new and "deep" explanation for me that I then actually believe for a day or two. It's an endless quest.
A more fruitful strategy for me was taking some minutes or hours every day to grow something new in my mind, ignoring the old stuff completely. A couple of times the new stuff in me eventually grew strong enough to overthrow the old stuff for control of my life without much struggle.
Every time I try listening to myself, my subconscious invents some new and "deep" explanation for me that I then actually believe for a day or two. It's an endless quest.
That's not listening, and it's not your subconscious. Your other-than-conscious mind doesn't do explanations -- heck, it doesn't even grok abstractions, except for an intuitive (and biased) sense of probabilities for a given context (external+internal state).
The type of listening I'm referring to is paying attention to autonomous responses (e.g. a flash of people laughing at you if you fail), not making up theories or explanations. It's harder to learn, but more worthwhile.
Just as your actions don't follow seamlessly from your theoretical understanding of the right thing to do, and just as that understanding doesn't precisely reflect what really is the right thing to do, what you actually do doesn't always reflect what you really prefer. There are errors and biases at every step. The role of theoretical and intuitive understanding of what you want is to implement a stronger procedure that gets the things you care about done reliably, one that fights the imperfections in the causes of your actions, and the imperfections in itself, instead of using them as an excuse for failure.
Your true preference is not the ultimate source of your actions, much less so where the cause of the discrepancy with your own judgment is a known problem, one you don't accept as a part of yourself.
Not at all. It's simply to recognize the difference between the map and the territory. What we say about ourselves is the map. What we do is the territory.
If the two don't match, it's the map that needs to be updated.
What we say about ourselves is the map. What we do is the territory.
Every map is part of the territory. But what is a particular map a map of? When we reason about what we want, we don't reason about what we actually do, but about what we should do.
"what you actually do doesn't always reflect what you really prefer."
It is possible for the body to be broken so that it doesn't reflect the will. But if the body is whole, its actions reflect the genuine will of the entity - to the degree that it can, of course.
If my neck is broken and my spinal cord severed, I can't say that I don't prefer to eat because I don't pick up the apple sitting on the table. I'm paralyzed. My movements (or lack thereof) are no longer connected to my will.
With me just sitting here, though, the fact that I have not picked up the apple means that I do not prefer to pick up the apple.
Our actions reveal what we actually want, not what we believe we want or believe we should want. No one chooses against their own judgment. What we do is choose against our understanding of our own judgment, and that is a far subtler matter. By our fruits shall we know ourselves.
Bravo. That's what I was trying to get at in the last part of my post.
I generally agree; in fact, when I deconverted, there were plenty of cases where I had to become acquainted with what I actually wanted and valued (as opposed to what I had been trying very hard to believe I "truly" valued), and found to my relief that my wants weren't so inimical to each other after all.
However, not all of my different states of mind partake equally of conscious thought, and I see nothing wrong with the conscious ones using rationality to achieve their ends when they conflict with the unconscious ones. It's just like setting an alarm clock because my waking self prefers getting up at a reasonable time, while my half-awake self prefers sleeping in.
But you do have to recognize that an alarm clock will induce you to wake up and get up, that it will put an end to your tendency to remain in bed.
You (presumably) don't lie awake at night, willing yourself to get up in the morning. You don't try to set your conscious against your non-conscious, your will against your will.
You just set the alarm and go to sleep.
I think we are in violent agreement. Now we must fight with knives.
I agree with the spirit of this, though of course we have a long way to go in cognitive neuroscience before we know ourselves anywhere near as well as we know most of our current human artifacts. However, it does seem that relatively more accurate models will help us more, most of the time. Presumably the fact that human intelligence was able to evolve at all is some evidence in favor of this.
I don't disagree strongly with this point, but current understanding suggests that our intelligence developed in a positive-feedback process of trying to anticipate others. Those who were best at anticipating and manipulating others then set the new baseline of competence. The runaway loop that hypothetically resulted may explain a great deal.
Annoyance, I don't disagree. The runaway loop leading to intelligence seems plausible, and it appears to support the idea that partially accurate modeling confers enough advantage to be incrementally selected for.
Interesting discussion. Does knowledge drive action? Perhaps, but what type of knowledge, and how much certainty, is sufficient to overcome akrasia? I am reminded of Hamlet's "To be, or not to be..." soliloquy (Act 3, Scene 1). When does knowledge itself become the justification for akrasia?
Much has been written here about the issue of akrasia. People often report that they really, sincerely want to do something, that they recognize that certain courses of action are desirable/undesirable and that they should choose them -- but when the time comes to decide, they do otherwise. Their choices don't match what they said their choices would be.
While I'm sure many people are less than honest in reporting their intentions to others, and possibly even more aren't being honest with themselves, there are still plenty of people who are presumably sincere and honest. So how can they make their actions match their understanding of what they want? How can their choices reflect their own best judgment?
Isn't that really the wrong question?
If a model of a phenomenon fails to accurately predict it, we conclude that the model is flawed and try to change it. If what we're trying to understand is ourselves, our own choices, and the motivations, desires, and preferences that direct those choices, why should we do any differently? Our actions reveal what we actually want, not what we believe we want or believe we should want. No one chooses against their own judgment. What we do is choose against our understanding of our own judgment, and that is a far subtler matter. By our fruits shall we know ourselves.
Expecting our behavior to be constrained and controlled by our understanding is like expecting our limbs to move if we yell at them to do so. It doesn't matter how much we believe we want them to move, or how much we say we want them to move. It is irrelevant whether we have a conscious understanding of the nerves and muscles involved. Our conscious awareness is a bystander that reports what happens and attributes its observations to itself, when in actuality it controls very little at all.
There are people whose ability to move has been damaged by nerve trauma or damage to the brain. The established relationships between their intents, their desires, and the signals to their muscles, have been damaged or destroyed. Such people do not improve by talking to others about how much they want to move, or by talking to themselves about it (which is what conscious thought really is). They get better by searching out connections that work and building on them.
Babies have little if any consciousness, and they don't possess theories. Their nervous systems learn to move their bodies by bombarding their muscles with random noise triggered by their interests, and by strengthening the signals that happen to get them closer to what they want. Not what they think they want. It is quite unlikely that babies have models of their minds, much less conscious ones, although they are either born with models of their bodies or with the foundations for building such models.
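To make that "random noise plus reinforcement" picture concrete, here is a minimal toy sketch in Python. Everything in it (the function name babble_towards, the single scalar "limb" position, the update constants) is a hypothetical illustration of the idea, not a model of real neural development.

```python
import random

def babble_towards(goal, steps=2000, noise=0.5, seed=0):
    """Toy sketch: emit random motor signals, keep what happens to move us closer to a goal."""
    rng = random.Random(seed)
    position = 0.0          # where the "limb" currently is
    learned_signal = 0.0    # the motor signal that gets strengthened over time

    for _ in range(steps):
        # random noise riding on top of whatever has been learned so far
        attempt = learned_signal + rng.uniform(-noise, noise)
        new_position = position + attempt

        # strengthen the attempt only when it happens to move us closer to the goal
        if abs(goal - new_position) < abs(goal - position):
            learned_signal = 0.9 * learned_signal + 0.1 * attempt
            position = new_position

    return position

if __name__ == "__main__":
    print(f"final position: {babble_towards(goal=1.0):.3f}")
```

The loop never reasons about why an attempt worked; it only keeps what moved it closer, which is the point being made about learning without a conscious model.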
Those who wish to bring themselves into alignment with what is truly correct, instead of what their impulses and desires seek in themselves, must first understand the nature of their impulses and the nature of their understanding.