Or: “I don’t want to think about that! I might be left with mistaken beliefs!”
Related to: Rationality as memetic immune disorder; Incremental progress and the valley; Egan's Law.
tl;dr: Many of us hesitate to trust explicit reasoning because... we haven’t built the skills that make such reasoning trustworthy. Some simple strategies can help.
Most of us are afraid to think fully about certain subjects.
Sometimes, we avert our eyes for fear of unpleasant conclusions. (“What if it’s my fault? What if I’m not good enough?”)
But other times, oddly enough, we avert our eyes for fear of inaccurate conclusions.[1] People fear questioning their religion, lest they disbelieve and become damned. People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger. And I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things.
Ostrich Theory, one might call it. Or I’m Already Right Theory. The theory that we’re more likely to act sensibly if we don’t think further than if we do. Sometimes Ostrich Theories are unconsciously held; one just wordlessly backs away from certain thoughts. Other times full or partial Ostrich Theories are put forth explicitly, as in Phil Goetz’s post, this LW comment, discussions of Tetlock's "foxes vs hedgehogs" research, injunctions to use "outside views", injunctions not to second-guess expert systems, and cautions for Christians against “clever arguments”.
Explicit reasoning is often nuts
Ostrich Theories sound implausible: why would not thinking through an issue make our actions better? And yet examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse. Examples include, among many others:
- Most early Communists;
- Ted Kaczynski (The Unabomber; an IQ 160 math PhD who wrote an interesting treatise about the human impacts of technology, and also murdered innocent people while accomplishing nothing);
- Mitchell Heisman;
- Folks who go to great lengths to keep kosher;
- Friends of mine who’ve gone to great lengths to be meticulously denotationally honest, including refusing jobs that required a government loyalty oath, and refusing to click on user agreements for videogames; and
- Many who’ve gone to war for the sake of religion, national identity, or other far-mode ideals.
In fact, the examples of religion and war suggest that the trouble with, say, Kaczynski wasn’t that his beliefs were unusually crazy. The trouble was that his beliefs were an ordinary amount of crazy, and he was unusually prone to acting on his beliefs. If the average person started to actually act on their nominal, verbal, explicit beliefs, they, too, would in many cases look plumb nuts. For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to Heaven, and generally treat their chances of Heaven vs Hell as their top priority. Someone else might bet their life savings on an election outcome or business venture about which they were “99% confident”.
That is: many people’s abstract reasoning is not up to the task of day-to-day decision-making. This doesn't impair their actions all that much, because abstract reasoning has little bearing on what people actually do. Mostly we just find ourselves doing things (out of habit, emotional inclination, or social copying) and make up the reasons post hoc. But when we do try to choose actions from theory, the results are far from reliably helpful -- and so many folks' early steps toward rationality go unrewarded.
We are left with two linked barriers to rationality: (1) nutty abstract reasoning; and (2) fears of reasoned nuttiness, and other failures to believe that thinking things through is actually helpful.[2]
Reasoning can be made less risky
Much of this nuttiness is unnecessary. There are learnable skills that can both make our abstract reasoning more trustworthy and also make it easier for us to trust it.
Here's the basic idea:
If you know the limitations of a pattern of reasoning, learning better what it says won’t hurt you. It’s like having a friend who’s often wrong. If you don’t know your friend’s limitations, his advice might harm you. But once you do know, you don’t have to gag him; you can listen to what he says, and then take it with a grain of salt.[3]
Reasoning is the meta-tool that lets us figure out which methods of inference are trustworthy where. Reason lets us look over the track records of our own explicit theorizing, outside experts' views, our near-mode intuitions, etc., and figure out how trustworthy each is in a given situation.
If we learn to use this meta-tool, we can walk into rationality without fear.
Skills for safer reasoning
1. Recognize implicit knowledge.
Recognize when your habits, or outside customs, are likely to work better than your reasoned-from-scratch best guesses. Notice how different groups act and what results they get. Take pains to stay aware of your own anticipations, especially in cases where you have explicit verbal models that might block your anticipations from view. And, by studying track records, get a sense of which prediction methods are trustworthy where.
Use track records; don't assume that just because folks' justifications are incoherent, the actions they are justifying are foolish. But also don't assume that tradition is better than your models. Be empirical.
2. Plan for errors in your best-guess models.
We tend to be overconfident in our own beliefs, to overestimate the probability of conjunctions (such as multi-part reasoning chains), and to search preferentially for evidence that we’re right. Put these facts together, and theories folks are "almost certain" of turn out to be wrong pretty often (a worked example follows this list). Therefore:
- Make predictions from as many angles as possible, to build redundancy. Use multiple theoretical frameworks, multiple datasets, multiple experts, multiple disciplines.
- When some lines of argument point one way and some another, don't give up or take a vote. Instead, notice that you're confused, and (while guarding against confirmation bias!) seek follow-up information.
- Use your memories of past error to bring up honest curiosity and fear of error. Then, really search for evidence that you’re wrong, the same way you'd search if your life were being bet on someone else's theory.
- Build safeguards, alternatives, and repurposable resources into your plans.
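To see why multi-part reasoning chains are riskier than they feel, here is a rough back-of-the-envelope sketch; the 90%-per-step figure and the independence assumption are purely illustrative, not claims about any particular argument:

P(whole chain holds) = 0.9 × 0.9 × 0.9 × 0.9 × 0.9 = 0.9^5 ≈ 0.59

A conclusion resting on five steps you are individually "90% sure of" is closer to a coin flip than to a certainty. Dependence between the steps can move the number in either direction, but the case for redundant lines of argument stands.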
3. Beware rapid belief changes.
Some people find their beliefs changing rapidly back and forth, based, for example, on the particular lines of argument they're currently pondering, or on the beliefs of those they've recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions. If this is your situation:
- Remember that accurate beliefs come from an even, long-term collection of all the available evidence, with no extra weight for arguments presently in front of you. Thus, they shouldn't fluctuate dramatically back and forth; you should never be able to predict which way your future probabilities will move.
- If you can predict what you'll believe a few years from now, consider believing that already.
- Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already. Remember the Conservation of Expected Evidence more generally (a one-line derivation follows this list).
- Consider what emotions are driving the rapid fluctuations. If you’re uncomfortable ever disagreeing with your interlocutors, build comfort with disagreement. If you're uncomfortable not knowing, so that you find yourself grasping for one framework after another, build your tolerance for ambiguity, complexity, and unknowns.
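For those who want the formal version of "you should never be able to predict which way your probabilities will move," here is the standard one-line derivation of Conservation of Expected Evidence, where H is a hypothesis and e ranges over the possible observations; the notation is generic, not tied to any example above:

E[P(H|E)] = Σ_e P(E=e) · P(H|E=e) = Σ_e P(H ∧ E=e) = P(H)

Your current probability already equals the expectation of your future probability, so any shift you can predict in advance is a shift you should have made already.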
4. Update your near-mode anticipations, not just your far-mode beliefs.
Sometimes your far-mode is smart and your near-mode is stupid. For example, Yvain's rationalist knows abstractly that there aren’t ghosts, but nevertheless fears them. Other times, though, your near-mode is smart and your far-mode is stupid. You might “believe” in an afterlife but retain a concrete, near-mode fear of death. You might advocate Communism but have a sinking feeling in your stomach as you conduct your tour of Stalin’s Russia.
Thus: trust abstract reasoning or concrete anticipations in different situations, according to their strengths. But, whichever one you bet your actions on, keep the other one in view. Ask it what it expects and why it expects it. Show it why you disagree (visualizing your evidence concretely, if you’re trying to talk to your wordless anticipations), and see if it finds your evidence convincing. Try to grow all your cognitive subsystems, so as to form a whole mind.
5. Use raw motivation, emotion, and behavior to determine at least part of your priorities.
One of the commonest routes to theory-driven nuttiness is to take a “goal” that isn’t your goal. Thus, folks claim to care “above all else” about their selfish well-being, the abolition of suffering, an objective Morality discoverable by superintelligence, or average utilitarian happiness-sums. They then find themselves either without motivation to pursue “their goals”, or else pulled into chains of actions that they dread and do not want.
Concrete local motivations are often embarrassing. For example, I find myself concretely motivated to “win” arguments, even though I'd think better of myself if I were driven by curiosity. But, like near-mode beliefs, concrete local motivations can act as a safeguard and an anchor. For example, if you become abstractly confused about meta-ethics, you'll still have a concrete desire to pull babies off train tracks. And so dialoguing with your near-mode wants and motives, like your near-mode anticipations, can help build a robust, trustworthy mind.
Why it matters (again)
Safety skills such as the above are worth learning for three reasons.
- They help us avoid nutty actions.
- They help us reason unhesitatingly, instead of flinching away out of fear.
- They help us build a rationality for the whole mind, with the strengths of near-mode as well as of abstract reasoning.
[1] These are not the only reasons people fear thinking. At minimum, there is also:
- Fear of social censure for the new beliefs (e.g., for changing your politics, or failing to believe your friend was justified in his divorce);
- Fear that part of you will use those new beliefs to justify actions that you as a whole do not want (e.g., you may fear to read a study about the upsides of nicotine, lest you use it as a rationalization to start smoking again; you may similarly fear to read a study about how easily you can save African lives, lest it end up prompting you to donate money).
[2] Many points in this article, and especially in the "explicit reasoning is often nuts" section, are stolen from Michael Vassar. Give him the credit, and me the blame and the upvotes.
[3] Carl points out that Eliezer points out that studies show we can't. But it seems like explicitly modeling when your friend is and isn't accurate, and when explicit models have and haven't led you to good actions, should at least help.
Well, yeah. Scientology is sort of the Godwin example of dangerous infectious memes. But I've found the lessons most useful in dealing with lesser ones, and it taught me superlative skills in how to inspect memes and logical results in a sandbox.
Perhaps these have gone to the point where I've recompartmentalised and need to aggressively decompartmentalise again. Anna Salamon's original post is IMO entirely too dismissive of the dangers of decompartmentalisation described in the Phil Goetz post, which is about people who accidentally decompartmentalise memetic toxic waste and come to the startling realisation that they need to bomb academics or kill the infidel or whatever. You always think it'll never happen to you. But that confidence is misplaced, because you're running on unreliable hardware with all manner of exploits and biases, and being able to enumerate them doesn't grant you immunity. And there are predators out there, evolved to eat people who think it'll never happen to them.
My own example: I signed up for a multi-level marketing company, which only cost me a year of my life and most of my friends. I should detail precisely how I reasoned myself into it. It was all very logical; the process of reasoning oneself into the mouth of a highly evolved predator tends to be. The cautions my friends and family gave me were all heuristic. This was before I studied Scientology in detail, which I suspect would have given me some immunity.
I should write a post on the subject (see my recent comments) except Anna's post covers quite a lot of it.
I would be interested in reading this, and especially about what caused the initial vulnerability.