This is the third of four short essays that say explicitly some things that I would tell an intrigued proto-rationalist before pointing them towards Rationality: AI to Zombies (and, by extension, most of LessWrong). For most people here, these essays will be very old news, as they talk about the insights that come even before the sequences. However, I've noticed recently that a number of fledgling rationalists haven't actually been exposed to all of these ideas, and there is power in saying the obvious.
This essay is cross-posted on MindingOurWay.
Your brain is a machine that builds up mutual information between its insides and its outsides. It is not only an information machine. It is not intentionally an information machine. But it is bumping into photons and air waves, and it is producing an internal map that correlates with the outer world.
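("Mutual information" is a precise, measurable quantity, not a metaphor. If you like seeing such things made concrete: here is a minimal Python sketch, using an invented toy distribution over world-states and map-states rather than anything brain-like, of what it means for a map to carry information about the territory.)

```python
import math

# Toy joint distribution over (world, map) pairs. The numbers are
# invented for illustration: a map that usually, but not always,
# matches the piece of world it is tracking.
joint = {
    ("sky_blue", "map_blue"): 0.45,
    ("sky_blue", "map_grey"): 0.05,
    ("sky_grey", "map_blue"): 0.05,
    ("sky_grey", "map_grey"): 0.45,
}

def mutual_information(joint):
    """I(X; Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

print(f"{mutual_information(joint):.2f} bits")  # ~0.53 of a possible 1.00
```

A map that always matched the world would score a full bit here; a map filled in by coin-flip would score zero.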
However, there's something very strange going on in this information machine.
Consider: part of what your brain is doing is building a map of the world around you. This is done automatically, without much input on your part into how the internal model should look. When you look at the sky, you don't get a query which says
Readings from the retina indicate that the sky is blue. Represent sky as blue in world-model? [Y/n]
No. The sky just appears blue. That sort of information, gleaned from the environment, is baked into the map.
You can choose to claim that the sky is green, but you can't choose to see a green sky.
Most people don't identify with the part of their mind that builds the map. That part fades into the background. It's easy to forget that it exists, and pretend that the things we see are the things themselves. If you didn't think too carefully about how the brain works, you might think that brains implement people in two discrete steps: (1) build a map of the world; (2) implement a planner that uses this map to figure out how to act.
This is, of course, not at all what happens.
Because, while you can't choose to see the sky as green, you do get to choose how some parts of the world-model look. When your co-worker says "nice job, pal," you do get to decide whether to perceive it as a compliment or an insult.
Well, kinda-sorta. It depends upon the tone and the person. Some people will automatically take it as a compliment; others will automatically take it as an insult. Still others will consciously dwell on it for hours, worrying. But nearly everyone experiences more conscious control over whether to perceive something as complimentary or insulting than over whether to perceive the sky as blue or green.
This is intensely weird as a mind design, when you think about it. Why is the executive process responsible for choosing what to do also able to modify the world-model? Furthermore, WHY IS THE EXECUTIVE PROCESS RESPONSIBLE FOR CHOOSING WHAT TO DO ALSO ABLE TO MODIFY THE WORLD-MODEL? This is just obviously going to lead to horrible cognitive dissonance, self-deception, and bias! AAAAAAARGH.
There are "reasons" for this, of course. We can look at the evolutionary history of human brains and get hints as to why the design works like this. A brain has a pretty direct link to the color of the sky, whereas it has a very indirect link on the intentions of others. It makes sense that one of these would be set automatically, while the other would require quite a bit of processing. And it kinda makes sense that the executive control process gets to affect the expensive computations but not the cheap ones (especially if the executive control functionality originally rose to prominence as some sort of priority-aware computational expedient).
But from the perspective of a mind designer, it's bonkers. The world-model-generator isn't hooked up directly to reality! We occasionally get to choose how parts of the world-model look! We, the tribal monkeys known for self-deception and propensity to be manipulated, get a say on how the information engine builds the thing which is supposed to correspond to reality!
(I struggle with the word "we" in this context, because I don't have words that differentiate between the broad-sense "me" which builds a map of the world in which the sky is blue, and the narrow-sense "me" which doesn't get to choose to see a green sky. I desperately want to shatter the word "me" into many words, but these discussions already have too much jargon, and I have to pick my battles.)
We know a bit about how machines can generate mutual information, you see, and one of the things we know is that in order to build something that sees the sky as the appropriate color, the "sky-color" output should not be connected to an arbitrary monkey answering a multiple choice question under peer pressure, but should rather be connected directly to the sky-sensors.
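(To make that concrete: below is a small Python sketch, with made-up accuracy numbers, of what splicing a peer-pressure-prone monkey into the channel does to the information flow. It's an illustration of a standard result, the data-processing inequality, not a model of any actual brain: no amount of post-processing can add information about the source.)

```python
import math

def mutual_information(joint):
    """I(X; Y) from a dict mapping (x, y) -> p(x, y)."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

WORLD = {"blue": 0.5, "grey": 0.5}
SENSOR_ACCURACY = 0.99   # retinas are a high-fidelity channel
PEER_OVERRIDE = 0.3      # chance the "monkey" ignores the sensor
TRIBE_SAYS = "blue"      # and parrots the tribe instead

# Joint distribution of (world, sensor): the sensor reports the truth
# with probability SENSOR_ACCURACY.
direct = {}
for w, pw in WORLD.items():
    for s in WORLD:
        p = SENSOR_ACCURACY if s == w else 1 - SENSOR_ACCURACY
        direct[(w, s)] = pw * p

# Joint distribution of (world, belief): the belief copies the sensor,
# except that with probability PEER_OVERRIDE it repeats the tribe's answer.
via_monkey = {}
for (w, s), p in direct.items():
    via_monkey[(w, s)] = via_monkey.get((w, s), 0.0) + p * (1 - PEER_OVERRIDE)
    via_monkey[(w, TRIBE_SAYS)] = via_monkey.get((w, TRIBE_SAYS), 0.0) + p * PEER_OVERRIDE

print(f"I(world; sensor) = {mutual_information(direct):.3f} bits")
print(f"I(world; belief) = {mutual_information(via_monkey):.3f} bits")
```

With these invented numbers, the direct channel carries about 0.92 bits about the sky, and the socially-mediated one only about 0.46. The exact figures are arbitrary; the direction never is.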
And sometimes the brain does this. Sometimes it just friggin' puts a blue sky in the world-model. But other times, for one reason or another, it tosses queries up to conscious control.
Questions like "is the sky blue?" and "did my co-worker intend that as an insult?" are of the same type, and yet we get input on one and not the other. The brain automatically builds huge swaths of the map, but important features of it are left up to us.
Which is worrying, because most of us aren't exactly natural-born masters of information theory. This is where rationality training comes in.
Sometimes we get conscious control over the world-model because the questions are hard. Executive control isn't needed in order to decide what color the sky is, but it is often necessary in order to deduce complex things (like the motivations of other monkeys) from sparse observations. Studying human rationality can improve your ability to generate more accurate answers when executive-controller-you is called upon to fill in features of the world-model that subconscious-you could not deduce automatically: filling in the mental map accurately is a skill that, like any skill, can be trained and honed.
Which almost makes it seem like it's ok for us to have conscious control over the world model. It almost makes it seem fine to let humans control what color they see the sky: after all, they could always choose to leave their perception of the sky linked up to the actual sky.
Except, you and I both know how that would end. Can you imagine what would happen if humans actually got to choose what color to perceive the sky as, in the same way they get to choose what to believe about the loyalty of their lovers, the honor of their tribe, the existence of their deities?
About six seconds later, people would start disagreeing about the color of the freaking sky (because who says those biased sky-sensors are the final authority?). They'd immediately split along tribal lines and start murdering each other. Then, after things calmed down a bit, everyone would start claiming that because people get to choose whatever sky color they want, and because different people have different favorite colors, there's no true sky-color. Color is subjective, anyway; it's all just in our heads. If you tried to suggest just hooking sky-perception back up to the sky-sensors, you'd probably wind up somewhere between dead and mocked, depending on your time period.
The sane response, upon realizing that internal color-of-the-sky is determined not by the sky-sensors but by a tribal monkey-mind prone to politicking and groupthink, is to scream in horror and then directly re-attach the world-model-generator to reality as quickly as possible. If your mind gave you a little pop-up message reading
For political reasons, it is now possible to disconnect your color-perception from your retinas and let peer pressure determine what colors to see. Proceed? [Y/n]
then the sane response, if you are a human mind, is a slightly panicked "uh, thanks but no thanks, I'd like to pLEASE LEAVE THE WORLD-MODEL GENERATOR HOOKED UP TO REALITY PLEASE."
But unfortunately, these occasions don't feel like pop-up windows. They don't even feel like choices. They're usually automatic, and they barely happen at the level of consciousness. Your world-model gets disconnected from reality every time you automatically find reasons to ignore evidence that conflicts with the way you want the world to be (because it comes from someone who is obviously wrong!); every time you find excuses to disregard observations (that study was poorly designed!); every time you find reasons to stop searching for more data as soon as you've found the answer you like (because what would be the point of wasting time searching further?)
Somehow, tribal social monkeys have found themselves in control of part of their world-models. But they don't feel like they're controlling a world-model, they feel like they're right.
You yourself are part of the pathway between reality and your map of it, part of a fragile link between what is, and what is believed. And if you let your guard down, even for one moment, it is incredibly easy to flinch and shatter that ephemeral correspondence.
The thing is, some people are really adept at perceiving (and manipulating) "social reality", as you put it. (Think politicians and salesmen, to name but a few.) Furthermore, this perception of "social reality" appears to occur in large part through "intuition"; things like body language, tone of voice, etc. all play a role, and these things are more or less evaluated unconsciously. It's not just the really adept people that do this, either; all neurotypical people perform this sort of unconscious evaluation to some extent. In that respect, at least, the way we perceive "social reality" is remarkably similar to the way we perceive "physical reality". That makes sense, too; the important tasks (from an evolutionary perspective) need to be automated in your brain, but the less important ones (like doing math, for example) require conscious control. So in my opinion, reading social cues would be an example of (in So8res' terminology) "leaving the world-model-generator hooked up to (social) reality".
However, we do in fact have a control group (or would that be experimental group?) for what happens when you attach the "world-model generator" to conscious thought: people with Asperger's Syndrome, for instance, are far less capable of picking up social cues and reading the general flow of the situation. (Writing this as someone who has Asperger's Syndrome, I should note that I'm speaking largely from personal experience here.) For them, the art of reading social situations needs to be learned pretty much from scratch, all at the level of conscious introspection. They don't have the benefit of automated, unconscious social evaluation software that just activates; instead, every decision has to be "calculated", so to speak. You'll note that the results are quite telling: people with Asperger's do significantly worse in day-to-day social interactions than neurotypical people, even after they've been "learning" how to navigate social interactions for quite some time.
In short, manual control is hard to wield, and we should be wary of letting our models be influenced by it. (There's also all the biases that humans suffer from that make it even more difficult to build accurate world-models.) Unfortunately, there's no real way to switch everything to "unconscious mode", so instead, we should strive to be rational so we can build the best models we can with our available information. That, I think, is So8res' point in this post. (If I'm mistaken, he should feel free to correct me.)
I agree that a neurotypical sees social cues on the perceptual level in much the same way as they recognize some photons as coming from "the sky" on the perceptual level. I think my complaint is that the question of "is my coworker complimenting or insulting me?" is operating on a higher level of abstraction, and has a strategic and tactical component. Even if your coworker has cued thei...