
SoullessAutomaton comments on Cached Selves - Less Wrong

Post author: AnnaSalamon 22 March 2009 07:34PM


Comment author: SoullessAutomaton 23 March 2009 02:57:55AM 4 points

This reminds me heavily of some studies I've read about pathological cases involving, e.g., split-brain patients or those with right-hemisphere injuries, wherein the patient will rationalize things they have no conscious control over. For instance, the phenomenon of Anosognosia as mentioned in this Less Wrong post.

The most parsimonious hypothesis, to me at least, is that long-term memory uses extremely lossy compression, recording only rough sketches of experience and action, and that causal relationships and motivations are actually reconstructed on the fly by the rationalization systems of the brain. This also fits well with this post about changing opinions on Overcoming Bias.

I think I have seen a similar argument made in a research paper on cognitive neuroscience or some related field, but I can't seem to find it.

As someone (Heinlein, I think?) said: "Man is not a rational animal, he is a rationalizing animal."

Comment author: pjeby 23 March 2009 03:36:53AM 14 points

I think it's more accurate to say that memory is not for remembering things. Memory is for making predictions of the future, so our brains are not optimized for remembering exact sequences of events, only the historical probability of successive events, at varying levels of abstraction. (E.g. pattern X is followed by pattern Y 80% of the time).

This is pretty easy to see when you add in the fact that emotionally-significant events involving pleasure or pain are also more readily recalled; in a sense, they're given uneven weight in the probability distribution.

This simple probability-driven system is enough to drive most of our actions, while the verbal system is used mainly to rationalize our actions to ourselves and others. The only difference between us and split-brain patients or anosognosics is that we don't rationalize our externalities as much.... but we still rationalize our own thoughts and actions, in order to perpetuate the idea that we DO control them. (When in fact, we mostly don't even control our thoughts, let alone our actions.)

Anyway, the prediction basis is why it's so hard to remember if you locked the door this time -- your brain really only cares if you usually lock the door, not whether you did it this time. (Unless there was also something unusual that happened when you locked the door this time.)
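This "usual pattern, not this instance" behavior can be sketched as a toy model. Everything here is illustrative (the pattern names, the salience weighting) and makes no claim about actual neural implementation:

```python
from collections import defaultdict

class PredictiveMemory:
    """Toy model of 'memory for prediction, not recall'.

    Individual episodes are folded into transition statistics and then
    discarded; all the model can answer later is what *usually* follows
    a given pattern, not what happened on any specific occasion.
    """

    def __init__(self):
        # counts[pattern][next_pattern] -> accumulated (salience-weighted) weight
        self.counts = defaultdict(lambda: defaultdict(float))

    def observe(self, pattern, next_pattern, salience=1.0):
        # Emotionally significant events get extra weight, skewing the
        # stored "probability distribution" unevenly, as described above.
        self.counts[pattern][next_pattern] += salience

    def predict(self, pattern):
        """Return the historical probability of each successor pattern."""
        followers = self.counts[pattern]
        total = sum(followers.values())
        return {nxt: w / total for nxt, w in followers.items()} if total else {}

mem = PredictiveMemory()
for _ in range(8):
    mem.observe("leave_home", "lock_door")
for _ in range(2):
    mem.observe("leave_home", "forget_to_lock")

print(mem.predict("leave_home"))  # → {'lock_door': 0.8, 'forget_to_lock': 0.2}
```

Asked "did I lock the door this time?", such a model can only report the base rate; the specific episode was never stored, which is exactly the experience described above.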

Comment author: SoullessAutomaton 23 March 2009 11:19:07AM 2 points

You make an excellent point here, I think. It seems clear that we actually remember far less, and in less detail, than we think we do.

but we still rationalize our own thoughts and actions, in order to perpetuate the idea that we DO control them. (When in fact, we mostly don't even control our thoughts, let alone our actions.)

However, I'm not sure I agree with the connotations of "in order to perpetuate" here. The evidence seems to me to indicate that the rationalization systems are a subconsciously automatic and necessary part of the human mind, there to fill in the gaps of memory and experience.

The question is, as rationalists, what can we do to counteract the known failure modes of this system? The techniques outlined in the main post here are a good start, at least.

Comment author: Eliezer_Yudkowsky 23 March 2009 11:21:15AM 4 points

Not to mention that evolution is not going to design a system in order to make you feel like you are in control. That's more psychoanalytic than evolutionary-biological, I should think.

Comment author: pjeby 23 March 2009 04:13:42PM 3 points

Sorry, I should've been clearer: it's in order to perpetuate this illusion to OTHER people, not to yourself. Using Robin's terms of "near" and "far", the function of verbal rationalization is to convince other people that you're actually making decisions based on the socially-shared "far" values, rather than your personal "near" values.

The fact that this rationalization also deludes you is only useful to evolution insofar as it makes you more convincing.

If language initially evolved as a command system, where hearing words triggered motor actions directly (and there is some evidence for this), then it's likely you'd end up with an arms race of people exploiting others verbally, and needing defenses against the exploits. Liars and persuaders, creating the need for disbelief, skepticism, and automatic awareness of self-interested (or ally-interested) reasoning.

But this entire "persuasion war" could (and very likely did) operate largely independently of the existing ("near") decision-making facilities. You'd only evolve a connection between the Savant and the Gossip (as I call the two intelligences) to the extent that you need near-mode information for the Gossip to do its job.

But... get this: part of the Gossip's arsenal is its social projection system... the ability to infer attitudes based on behavior. Simply self-applying that system, i.e., observing your own actions and experience, and then "rationalizing" them, gives you an illusion of personal consciousness and free will.

And it even gives you attribution error, simply because you have more data available about what was happening at the time the Savant actually made a decision.... even though the real reasons behind the Savant's choice may be completely opaque to you.

So, psychoanalysis gets repression completely wrong. As you say, evolution doesn't care what you think or feel. It's all about being able to "spin" things to others, and to do that, your Gossip operates on a need-to-know basis: it only bothers to ask the Savant for data that will help its socially-motivated reasoning.

Most of what I teach people to do -- and hell, much of what you teach people to do -- is basically about training the Gossip to ask the Savant better questions... and more importantly, getting it to actually pay attention to the answers, instead of confabulating its own.

Comment author: Emile 23 March 2009 05:22:39PM 0 points

Minor terminology quibble: I'm not very fond of the terms "savant" and "gossip"; I can't really tell "which is which" :P

Savant = near mind, "subconscious mind", "horse brain"/"robot brain" (using your other terminology)

Gossip = far mind, "conscious mind", "monkey brain"

... though I also see the problem with re-using more accepted terms like "subconscious mind" - people already have a lot of ideas of what those mean, so starting with new terminology can work better.

Comment author: pjeby 23 March 2009 06:56:32PM 0 points

I can't really tell "which is which"

Yeah, when I officially write these up, I'll describe them as characters... thereby reusing the Gossip's "character recognition" technology. ;-) That is, I'll tell a story or two illustrating their respective characters.

Or maybe I'll just borrow the story of Rain Man, since Dustin Hoffman played an autistic savant, and Tom Cruise played a very status-oriented (i.e. "gossipy") individual, and the story was about the Gossip learning to appreciate and pay attention to the Savant. ;-)

... though I also see the problem with re-using more accepted terms like "subconscious mind" - people already have a lot of ideas of what those mean, so starting with new terminology can work better.

Right, and the same thing goes for left/right brain, etc. What's more, terms like Savant and Gossip can retain their conceptual and functional meaning even as we improve our anatomical understanding of where these functions are located. Really, for purposes of using the brain, it doesn't ordinarily matter where each function is located, only that you be able to tell which ones you're using, so you can learn to use the ones that work for the kinds of thinking you want to do.

Comment author: GuySrinivasan 23 March 2009 04:53:13AM 2 points

This idea feels very, very true to me, and I am surprised I can't remember seeing it before. Do you have any cites I should read to squash my "what if it's a just-so story" feelings?

Comment author: pjeby 23 March 2009 03:49:32PM 0 points

Um, anything ever written about how human memory works? ;-) (I assume you're referring to the idea that "memory is not for remembering things". The idea is just my own way of making sense of the "flaws" in human memory... and realizing that they aren't flaws at all, from evolution's point of view.)

Comment author: steven0461 23 March 2009 03:52:00PM 2 points

It shouldn't theoretically be the case that false beliefs lead to better predictions than true beliefs, so I guess when memory doesn't optimize for accuracy, there has to be a different bias that it's canceling out?

(edited to add something that needs to be said from time to time: when I say "theoretically" I don't mean "according to the correct theory", but "according to a simple and salient theory that isn't exactly right")

Comment author: pjeby 23 March 2009 04:27:56PM 1 point

False beliefs lead to better predictions if they keep you safe. The probability of being attacked by a crocodile at the riverbank might be low, but this doesn't mean you shouldn't act as if you're going to be attacked.

Perhaps I should have emphasized the part where the predictions are for the purpose of making decisions. Really, you could say that memory IS a decision-making system, or at least a decision-support database. What we store for later recall, and what we recall, are based on what evolutionarily "works", rather than on theoretically-correct probabilities. Evolution is a biased Bayesian, because some probabilities matter more than others.
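The crocodile case is just asymmetric loss: when one kind of mistake is vastly more costly than the other, the loss-minimizing policy can look like a "false belief". A minimal sketch, with made-up numbers:

```python
# Illustrative numbers only: a low-probability but catastrophic event
# versus a small, constant cost of caution.
p_attack = 0.02            # actual probability of a crocodile attack
loss_if_careless = 1000.0  # cost of being attacked unprepared
loss_if_cautious = 5.0     # cost of always acting as if attack is imminent

expected_loss_careless = p_attack * loss_if_careless  # 20.0
expected_loss_cautious = loss_if_cautious             # 5.0

# Acting "as if" the attack were likely minimizes expected loss,
# even though the belief "an attack is likely" would be false.
policy = "cautious" if expected_loss_cautious < expected_loss_careless else "careless"
print(policy)  # → cautious
```

The probability estimate itself never changed; only the weighting of outcomes did. That's the sense in which evolution is a "biased Bayesian": it bakes the utilities into the same mechanism that stores the frequencies.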

Comment author: Vladimir_Nesov 23 March 2009 06:13:12PM 1 point

You can afford to forget the sky's color but not to forget about poisonous snakes; that doesn't mean you should increase your probability estimate of encountering a poisonous snake, or decrease the probability of the sky being blue. Some parts of the map are known to have different importance, but that doesn't make it a good idea to systematically distort the picture.

Comment author: pjeby 23 March 2009 06:50:28PM 2 points

Er, what does "should" mean, here? My comments in this thread are about how brains actually work, not how we might prefer them to work.

Bear in mind that evolution doesn't get to do "should" - it does "what works now". If you have to evolve a working system, it's easier to start by using memory as a direct activation system. To consider probabilities in the way you seem to be describing, you have to have something that then evaluates those probabilities. It's a lot simpler to build a single mechanism that incorporates both the probabilities and the decision-making strategy, all rolled into one.

Comment author: Vladimir_Nesov 23 March 2009 08:10:50PM 1 point

Sure, but in this case you can't easily interpret that strange combined decision-making mechanism in terms of probabilities. Probabilities-utilities is a mathematical model that we understand, unlike the semantics of brain's workings. The model can be used explicitly to correct the intuitively drawn decisions, so it's a good idea to at least intuitively learn to interface between these modes.

In conclusion, the "should" refers to how you should strive to interpret your memory in terms of probabilities. If you know that in certain situations you are overvaluing the probabilities of events, you should try to correct for the bias. If your mind tells "often!", and you know that in situations like this your mind lies, then "often!" means rarely.
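The "known liar" correction can be made concrete: keep a table of how much your intuitive frequency estimates are inflated in each kind of situation, and divide the inflation back out. The contexts and correction factors below are purely hypothetical:

```python
# Hypothetical calibration table: how much the felt frequency overstates
# the real frequency in each kind of situation.
known_inflation = {
    "snake_danger": 10.0,  # the mind says "often!" ten times too often
    "sky_not_blue": 1.0,   # no known bias here
}

def corrected_probability(context, felt_frequency):
    """Deflate a gut frequency estimate by the context's known inflation."""
    return min(1.0, felt_frequency / known_inflation.get(context, 1.0))

# The mind reports "often!" (say, 0.5) about snakes; corrected, it means "rarely":
print(corrected_probability("snake_danger", 0.5))  # → 0.05
```

This is the interface between the two modes: the distorted output of memory on one side, an explicit probability on the other.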

Comment author: pjeby 23 March 2009 09:48:52PM 1 point

Probabilities-utilities is a mathematical model that we understand, unlike the semantics of brain's workings.

I prefer to work harder on understanding the brain's semantics, since we don't really have the option of replacing them at the moment.

In conclusion, the "should" refers to how you should strive to interpret your memory in terms of probabilities.

That makes it sound like I have a choice. In practice, it's only when I have time to reflect that I have the option of "interpreting" my memory.

Under normal circumstances, we act in ways that are directly determined by the contents of our memories, without any intermediary. It's only the verbal rationalizations of the Gossip that make it sound like we could have chosen differently.

Thus, I benefit more from altering the memories that generate my actions, in order to produce the desired behaviors automatically.... instead of trying to run every experience in my life through a "rational" filtering process.

If your mind tells "often!", and you know that in situations like this your mind lies, then "often!" means rarely.

That's only relevant insofar as how it relates to my choice of actions. I don't care what "right" is - I care what the right thing to do is. So in that at least, I agree with my brain. ;-)

But my care is more for what goes in, and changing what's currently stored, than for trying to correct things on the fly as they come out. The way most of our biases manifest, they affect what goes into the cache, more than they affect what comes out. And that means we have the option of implementing "software patches" for the bugs the hardware introduces, instead of needing to do manual workarounds, or wait for a hardware upgrade capability.