GuySrinivasan comments on Cached Selves - Less Wrong
This reminds me heavily of some studies I've read about pathological cases involving, e.g., split-brain patients or those with right-hemisphere injuries, wherein the patient will rationalize things they have no conscious control over. For instance, the phenomenon of Anosognosia as mentioned in this Less Wrong post.
The most parsimonious hypothesis seems, to me at least, to be that long-term memory uses extremely lossy compression, recording only rough sketches of experience and action, and that causal relationships and motivations are actually reconstructed on the fly by the rationalization systems of the brain. This also fits well with this post about changing opinions on Overcoming Bias.
I think I have seen a similar argument made in a research paper on cognitive neuroscience or some related field, but I can't seem to find it.
As someone (Heinlein, I think?) said: "Man is not a rational animal, he is a rationalizing animal."
I think it's more accurate to say that memory is not for remembering things. Memory is for making predictions of the future, so our brains are not optimized for remembering exact sequences of events, only the historical probability of successive events, at varying levels of abstraction. (E.g. pattern X is followed by pattern Y 80% of the time).
This is pretty easy to see when you add in the fact that emotionally-significant events involving pleasure or pain are also more readily recalled; in a sense, they're given uneven weight in the probability distribution.
This simple probability-driven system is enough to drive most of our actions, while the verbal system is used mainly to rationalize our actions to ourselves and others. The only difference between us and split-brain patients or anosognosiacs is that we don't rationalize our externalities as much... but we still rationalize our own thoughts and actions, in order to perpetuate the idea that we DO control them. (When in fact, we mostly don't even control our thoughts, let alone our actions.)
Anyway, the prediction basis is why it's so hard to remember if you locked the door this time -- your brain really only cares if you usually lock the door, not whether you did it this time. (Unless there was also something unusual that happened when you locked the door this time.)
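The "historical probability, not this episode" idea above can be sketched as a toy transition-frequency store (the event names and counts are invented for illustration; this is a cartoon of the claim, not a model of actual neural memory):

```python
from collections import Counter, defaultdict

# Sketch of "memory as transition statistics": store only how often
# one event follows another, never the exact sequence of episodes.
transitions = defaultdict(Counter)

events = ["leave_house", "lock_door", "leave_house", "lock_door",
          "leave_house", "lock_door", "leave_house", "forget_lock"]

# Record successor frequencies, discarding the episodes themselves.
for prev, nxt in zip(events, events[1:]):
    transitions[prev][nxt] += 1

# This "memory" can answer "do I usually lock the door?"...
total = sum(transitions["leave_house"].values())
p_lock = transitions["leave_house"]["lock_door"] / total
print(round(p_lock, 2))  # 0.75

# ...but it cannot answer "did I lock the door *this* time?",
# because individual episodes were never stored.
```

The point of the sketch is that once only the counts are kept, the question "did I do it this time?" has no answer in the data structure at all, which matches the locked-door experience described above.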
This idea feels very, very true to me, and I am surprised I can't remember seeing it before. Do you have any cites I should read to squash my "what if it's a just-so story" feelings?
Um, anything ever written about how human memory works? ;-) (I assume you're referring to the idea that "memory is not for remembering things". The idea is just my own way of making sense of the "flaws" in human memory... and realizing that they aren't flaws at all, from evolution's point of view.)
It shouldn't theoretically be the case that false beliefs lead to better predictions than true beliefs, so I guess when memory doesn't optimize for accuracy, there has to be a different bias that it's canceling out?
(edited to add something that needs to be said from time to time: when I say "theoretically" I don't mean "according to the correct theory", but "according to a simple and salient theory that isn't exactly right")
False beliefs lead to better predictions if they keep you safe. The probability of being attacked by a crocodile at the riverbank might be low, but this doesn't mean you shouldn't act as if you're going to be attacked.
Perhaps I should have emphasized the part where the predictions are for the purpose of making decisions. Really, you could say that memory IS a decision-making system, or at least a decision-support database. What we store for later recall, and what we recall, are based on what evolutionarily "works", rather than on theoretically-correct probabilities. Evolution is a biased Bayesian, because some probabilities matter more than others.
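The crocodile point, that some probabilities matter more than others, can be shown with a toy expected-utility calculation (all numbers invented for illustration): even a low-probability attack dominates the decision when its cost is catastrophic.

```python
# Toy expected-utility comparison: cautious vs. careless behavior at the
# riverbank. All numbers are invented for illustration.

p_attack = 0.01          # low probability of a crocodile attack
cost_attack = -1000.0    # utility of being attacked
cost_caution = -1.0      # small fixed cost of vigilance

# Careless: pay nothing up front, but bear the full attack risk.
eu_careless = p_attack * cost_attack   # -10.0
# Cautious: pay the small cost; assume caution avoids the attack.
eu_cautious = cost_caution             # -1.0

best = "cautious" if eu_cautious > eu_careless else "careless"
print(best)  # cautious
```

Note that acting cautiously here requires no distortion of `p_attack` itself; a mechanism that fuses the probability and the cost into one "act as if attacked" response gets the same behavior more cheaply, which is the evolutionary shortcut being described.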
You can afford to forget the sky's color, but you can't afford to forget about poisonous snakes; that doesn't mean you should increase your probability estimate of encountering a poisonous snake, or decrease the probability of the sky being blue. Some parts of the map are known to have different importance, but that doesn't make it a good idea to systematically distort the picture.
Er, what does "should" mean, here? My comments in this thread are about how brains actually work, not how we might prefer them to work.
Bear in mind that evolution doesn't get to do "should" - it does "what works now". If you have to evolve a working system, it's easier to start by using memory as a direct activation system. To consider probabilities in the way you seem to be describing, you have to have something that then evaluates those probabilities. It's a lot simpler to build a single mechanism that incorporates both the probabilities and the decision-making strategy, all rolled into one.
Sure, but in this case you can't easily interpret that strange combined decision-making mechanism in terms of probabilities. Probabilities-and-utilities is a mathematical model that we understand, unlike the semantics of the brain's workings. The model can be used explicitly to correct intuitively drawn decisions, so it's a good idea to at least intuitively learn to interface between these modes.
In conclusion, the "should" refers to how you should strive to interpret your memory in terms of probabilities. If you know that in certain situations you are overvaluing the probabilities of events, you should try to correct for the bias. If your mind says "often!", and you know that in situations like this your mind lies, then "often!" means rarely.
I prefer to work harder on understanding the brain's semantics, since we don't really have the option of replacing them at the moment.
That makes it sound like I have a choice. In practice, only when I have time to reflect do I have the option of "interpreting" my memory.
Under normal circumstances, we act in ways that are directly determined by the contents of our memories, without any intermediary. It's only the verbal rationalizations of the Gossip that make it sound like we could have chosen differently.
Thus, I benefit more from altering the memories that generate my actions, in order to produce the desired behaviors automatically.... instead of trying to run every experience in my life through a "rational" filtering process.
That's only relevant insofar as how it relates to my choice of actions. I don't care what "right" is - I care what the right thing to do is. So in that at least, I agree with my brain. ;-)
But my care is more for what goes in, and changing what's currently stored, than for trying to correct things on the fly as they come out. The way most of our biases manifest, they affect what goes into the cache more than what comes out. And that means we have the option of implementing "software patches" for the bugs the hardware introduces, instead of needing to do manual workarounds, or wait for a hardware upgrade capability.