Much of rationality is pattern-matching. An article on LessWrong might point out a thing to look for; noticing that thing changes your reasoning in some way. This essay is a list of such things to look for. They are all associated, but the reader should take care not to lump them together. Each dichotomy is distinct, and although the brain will tend to abstract them into some sort of correlated yin/yang mush, in reality they have a more complicated structure. Some of them are similar, but try, where possible, to focus on the complex interrelationships between them.
Map vs. Territory
Eliezer’s sequences use this as a jumping-off point for discussion of rationality.
Many thinking mistakes are map vs. territory confusions.
A map-vs.-territory mistake is a mix-up of seeming and being.
Humans need frequent reminders that we are not omniscient.
Clusters vs. Properties
These words could be used in different ways, but the distinction I want to point at is that of the labels we put on things vs. actual differences between things.
The mind projection fallacy is the fallacy of thinking a mental category (a “cluster”) is an actual property things have.
If we see something as good for one reason, we are likely to attribute other good properties to it, as if it had inherent goodness. This is called the halo effect. (If we see something as bad and infer other bad properties as a result, it is referred to as the reverse-halo effect.)
Syntax vs. Semantics
The syntax is the physical instantiation of the map. The semantics is the way we are meant to read the map; that is, the intended relationship to the territory.
Semantics vs. Pragmatics
The semantics is the literal contents of a message, whereas the pragmatics is the intended result of conveying the message.
An example of a message with no semantics and only pragmatics is a command, such as “Stop!”.
Almost no messages lack pragmatics, and for good reason. However, if you seek truth in a discussion, it is important to foster a willingness to say things with less pragmatic baggage.
Usually when we say things, we do so with some “point” which goes beyond the semantics of our statement. The point is usually to build up or knock down some larger item of discussion. This is not inherently a bad thing, but it has a failure mode in which arguments become battles, statements become weapons, and the cleverer arguer wins.
Object Level vs. Meta Level
The difference between making a map and writing a book about map-making.
A good meta-level theory helps get things right at the object level, but it is usually impossible to get things right at the meta level before you’ve made significant progress at the object level.
Seeming vs. Being
We can only deal with how things seem, not how they are. Yet, we must strive to deal with things as they are, not as they seem.
This is yet another reminder that we are not omniscient.
If we optimize too hard for things which seem good rather than things which are good, we will get things which seem very good but which may only be somewhat good, or even bad.
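A minimal simulation can make this concrete. The sketch below (my own illustration, with hypothetical quality scores; the noise model and numbers are assumptions, not from the essay) generates options with a true quality and a noisy “seeming” quality, then picks whichever option seems best. The winner’s seeming quality systematically overstates its true quality, because selecting on the proxy also selects for favorable noise:

```python
import random

random.seed(0)

# Each option has a true quality; how good it *seems* is the true
# quality plus perception noise. (Toy numbers, for illustration only.)
options = [random.gauss(0, 1) for _ in range(10_000)]
seeming = [q + random.gauss(0, 1) for q in options]

# Optimize for seeming-good: pick the option that seems best.
best_i = max(range(len(options)), key=lambda i: seeming[i])

# The selected option seems better than it actually is, because picking
# the maximum of a noisy estimate also picks out favorable noise.
print(f"seems: {seeming[best_i]:.2f}, actually is: {options[best_i]:.2f}")
```

This is sometimes called the optimizer’s curse: the harder you select on the proxy, the larger the gap between seeming and being among the things you select.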
The dangerous cases are the cases where you do not notice there is a distinction.
This is why humans need constant reminders that we are not omniscient.
We must take care to notice the difference between how things seem to seem, and how they actually seem.
Signal vs. Noise
Not all information is equal. It is often the case that we desire certain sorts of information and desire to ignore other sorts.
In a technical setting, this has to do with the error rate present in a communication channel; imperfections in the channel will corrupt some bits, making a need for redundancy in the message being sent.
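The technical point can be illustrated with the simplest possible error-correcting scheme, a three-fold repetition code with majority-vote decoding (my own toy sketch; the 10% flip rate is an assumed parameter, not from the essay):

```python
import random

random.seed(1)

def transmit(bits, flip_p=0.1):
    """Noisy channel: flips each bit independently with probability flip_p."""
    return [b ^ (random.random() < flip_p) for b in bits]

def encode(bits):
    """Repetition code: send each bit three times (redundancy)."""
    return [b for b in bits for _ in range(3)]

def decode(bits):
    """Majority vote over each group of three received bits."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

message = [random.randint(0, 1) for _ in range(1000)]
received = decode(transmit(encode(message)))
errors = sum(m != r for m, r in zip(message, received))

# A decoded bit is wrong only if 2 or 3 of its copies flipped:
# 3(0.1^2)(0.9) + 0.1^3 = 2.8% per bit, down from 10% uncoded.
print(errors)
```

Redundancy costs bandwidth (three channel bits per message bit) but buys a lower error rate; real codes make this trade far more efficiently.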
In a social setting, this is often used to refer to the amount of good information vs irrelevant information in a discussion. For example, letting a mediocre writer add material to a group blog might increase the absolute amount of good information, yet worsen the signal-to-noise ratio.
Attention is a scarce resource; yes, everyone has something to teach you, but some people are much more efficient sources of wisdom than others.
In many situations, if we can present evidence to a Bayesian agent without the agent knowing that we are being selective, we can convince the agent of anything we like. For example, if I want to convince you that smoking causes obesity, I could find many people who became obese after they started smoking.
The solution to this is for the Bayesian agent to model where the information is coming from. If you know I am selecting people based on that criterion, then you will not take my examples as evidence of anything, because the evidence has been cherry-picked.
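The two agents can be contrasted with an odds-form Bayesian update (my own sketch; the likelihoods are made-up illustrative numbers, not real statistics about smoking). The naive agent treats each presented obese smoker as a random sample; the savvy agent knows the presenter searched for obese smokers, so being shown one is near-certain under either hypothesis and the likelihood ratio collapses to 1:

```python
def posterior(prior, likelihood_h, likelihood_not_h, n):
    """Posterior after n independent observations, via the odds form of Bayes."""
    odds = prior / (1 - prior)
    odds *= (likelihood_h / likelihood_not_h) ** n
    return odds / (1 + odds)

# Naive agent: treats each obese smoker as a random draw from the population.
# (Illustrative assumption: such people are twice as likely under H.)
naive = posterior(0.5, 0.6, 0.3, n=10)

# Savvy agent: models the selection process. The presenter will always
# find such people, so the observation is ~certain under either hypothesis.
savvy = posterior(0.5, 1.0, 1.0, n=10)

print(naive, savvy)  # naive ≈ 0.999, savvy stays at 0.5
```

Ten cherry-picked examples drive the naive agent to near-certainty, while the agent who models the filter does not budge.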
Most of the information you receive is intensely filtered. Nothing comes to your attention with a good conscience.
The silent evidence problem.
Selection bias need not be the result of purposeful interference such as cherry-picking. Often, an unrelated process may hide some of the evidence needed. For example, we hear far more about successful people than unsuccessful ones. It is tempting to look at successful people and attempt to draw conclusions about what it takes to be successful. This approach suffers from the silent evidence problem: we also need to look at the unsuccessful people and examine what is different about the two groups.
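A toy simulation shows how looking only at the successes misleads (my own illustration; the trait name, base rates, and success probability are all assumed). Here a trait — call it boldness — has no effect on success at all, yet about half the successful people have it, which sounds meaningful until you compare against the silent evidence of the whole population:

```python
import random

random.seed(2)

# Hypothetical toy model: "boldness" is a coin flip and success is rare
# and independent of it -- the trait genuinely does nothing.
population = [{"bold": random.random() < 0.5} for _ in range(100_000)]
for person in population:
    person["success"] = random.random() < 0.01  # same odds, bold or not

successes = [p for p in population if p["success"]]
bold_among_successes = sum(p["bold"] for p in successes) / len(successes)
bold_overall = sum(p["bold"] for p in population) / len(population)

# "Half of all successful people are bold!" -- true, but so are half of
# everyone. Without the base rate, the silent evidence stays silent.
print(bold_among_successes, bold_overall)
```

The fix is always the same: compare the trait’s frequency among successes to its frequency in the full population, failures included.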
Very often, people will say something and then that thing will be refuted. The common response to this is to claim you meant something slightly different, which is more easily defended.
We often do this without noticing, making it dangerous for thinking. It is an automatic response generated by our brains, not a conscious decision to defend ourselves from being discredited. You do this far more often than you notice. The brain fills in a false memory of what you meant without asking for permission.
Knowing that you are running on corrupted hardware should cause skepticism about the outputs of your thought-processes. Yet, too much skepticism will cause you to stumble, particularly when fast thinking is needed.
Answers vs. Justifications
Producing a correct result plus a justification is harder than producing only the correct result.
Justifications are important, but the correct result is more important.
Much of our apparent self-reflection is confabulation, generating plausible explanations after the brain spits out an answer.
Example: doing quick mental math. If you are good at this, attempting to explicitly justify every step as you go would likely slow you down.
Example: impressions formed over a long period of time. Wrong or right, it is unlikely that you can explicitly give all your reasons for the impression. Requiring your own beliefs to be justifiable would preempt impressions that require lots of experience and/or many non-obvious chains of subconscious inference.
Impressions are not beliefs, but they are still useful data.
Motivated Cognition
Believing X for some reason unrelated to X being true is referred to as motivated cognition.
Giving a smart person more information and more methods of argument may actually make their beliefs less accurate, because you are giving them more tools to construct clever arguments for what they want to believe.
Your actual reason for believing X determines how well your belief correlates with the truth.
If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker.
Lumpers vs. Splitters
A lumper is a thinker who attempts to fit things into overarching patterns. A splitter is a thinker who makes as many distinctions as possible, recognizing the importance of being specific and getting the details right.
Specifically, some people want big Wikipedia and TVTropes articles that discuss many things, and others want smaller articles that discuss fewer things.
This list of nuances is a lumper attempting to think more like a splitter.
Fox vs. Hedgehog
“A fox knows many things, but a hedgehog knows One Big Thing.” Closely related to a splitter, a fox is a thinker whose strength is in a broad array of knowledge. A hedgehog is a thinker who, in contrast, has one big idea and applies it everywhere.
The fox mindset is better for making accurate judgements, according to Tetlock.
Abram Demski and Grognor