Related to: Rationality as memetic immune disorder; Incremental progress and the valley; Egan's Law.
tl;dr: Many of us hesitate to trust explicit reasoning because... we haven’t built the skills that make such reasoning trustworthy. Some simple strategies can help.
Most of us are afraid to think fully about certain subjects.
Sometimes, we avert our eyes for fear of unpleasant conclusions. (“What if it’s my fault? What if I’m not good enough?”)
But other times, oddly enough, we avert our eyes for fear of inaccurate conclusions.[1] People fear questioning their religion, lest they disbelieve and become damned. People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger. And I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things.
Ostrich Theory, one might call it. Or I’m Already Right theory. The theory that we’re more likely to act sensibly if we don’t think further, than if we do. Sometimes Ostrich Theories are unconsciously held; one just wordlessly backs away from certain thoughts. Other times full or partial Ostrich Theories are put forth explicitly, as in Phil Goetz’s post, this LW comment, discussions of Tetlock's "foxes vs hedgehogs" research, enjoinders to use "outside views", enjoinders not to second-guess expert systems, and cautions for Christians against “clever arguments”.
Explicit reasoning is often nuts
Ostrich Theories sound implausible: why would not thinking through an issue make our actions better? And yet examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse. Examples include, among many others, religiously motivated violence, ideological wars, and Ted Kaczynski's bombing campaign.
In fact, the examples of religion and war suggest that the trouble with, say, Kaczynski wasn’t that his beliefs were unusually crazy. The trouble was that his beliefs were an ordinary amount of crazy, and he was unusually prone to acting on his beliefs. If the average person started to actually act on their nominal, verbal, explicit beliefs, they, too, would in many cases look plumb nuts. For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to heaven, and generally treat their chances of Heaven vs Hell as their top priority. Someone else might risk their life-savings betting on an election outcome or business about which they were “99% confident”.
That is: many people’s abstract reasoning is not up to the task of day-to-day decision-making. This doesn't impair their actions all that much, because people’s abstract reasoning has little bearing on their actual actions. Mostly we just find ourselves doing things (out of habit, emotional inclination, or social copying) and make up the reasons post-hoc. But when we do try to choose actions from theory, the results are far from reliably helpful -- and so many folks' early steps toward rationality go unrewarded.
We are left with two linked barriers to rationality: (1) nutty abstract reasoning; and (2) fears of reasoned nuttiness, and other failures to believe that thinking things through is actually helpful.[2]
Reasoning can be made less risky
Much of this nuttiness is unnecessary. There are learnable skills that can both make our abstract reasoning more trustworthy and also make it easier for us to trust it.
Here's the basic idea:
If you know the limitations of a pattern of reasoning, learning better what it says won’t hurt you. It’s like having a friend who’s often wrong. If you don’t know your friend’s limitations, his advice might harm you. But once you do know, you don’t have to gag him; you can listen to what he says, and then take it with a grain of salt.[3]
Reasoning is the meta-tool that lets us figure out what methods of inference are trustworthy where. Reason lets us look over the track records of our own explicit theorizing, outside experts' views, our near-mode intuitions, etc. and figure out how trustworthy each is in a given situation.
If we learn to use this meta-tool, we can walk into rationality without fear.
Skills for safer reasoning
1. Recognize implicit knowledge.
Recognize when your habits, or outside customs, are likely to work better than your reasoned-from-scratch best guesses. Notice how different groups act and what results they get. Take pains to stay aware of your own anticipations, especially in cases where you have explicit verbal models that might block your anticipations from view. And, by studying track records, get a sense of which prediction methods are trustworthy where.
Use track records; don't assume that just because folks' justifications are incoherent, the actions they are justifying are foolish. But also don't assume that tradition is better than your models. Be empirical.
2. Plan for errors in your best-guess models.
We tend to be overconfident in our own beliefs, to overestimate the probability of conjunctions (such as multi-part reasoning chains), and to search preferentially for evidence that we’re right. Put these facts together, and theories folks are "almost certain" of turn out to be wrong pretty often. Therefore: leave a margin of safety, and prefer plans that won't prove catastrophic if your best-guess model turns out to be wrong.
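The conjunction point can be made concrete with a few lines of arithmetic. The 90% per-step confidence below is a hypothetical figure chosen only for illustration, and the steps are assumed independent: even when each link in a reasoning chain feels quite solid, a five-link chain holds up barely more than half the time.

```python
# Illustrative only: probability that a multi-step argument is sound,
# assuming (hypothetically) independent steps, each assessed at 90%
# confidence. The chain is only as strong as the product of its links.
step_confidence = 0.9

for n_steps in (1, 3, 5, 10):
    p_chain = step_confidence ** n_steps
    print(f"{n_steps:2d} steps: chain holds with probability {p_chain:.2f}")
```

Running this shows the chain's reliability at one step (0.90) eroding to roughly 0.59 at five steps and 0.35 at ten, which is why long "almost certain" derivations deserve wide error bars.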
3. Beware rapid belief changes.
Some people find their beliefs changing rapidly back and forth, based for example on the particular lines of argument they're currently pondering, or the beliefs of those they've recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions. If this is your situation, slow down: note arguments as you encounter them, and update gradually on the accumulated evidence rather than swinging to whichever argument you heard most recently.
4. Update your near-mode anticipations, not just your far-mode beliefs.
Sometimes your far-mode is smart and your near-mode is stupid. For example, Yvain's rationalist knows abstractly that there aren’t ghosts, but nevertheless fears them. Other times, though, your near-mode is smart and your far-mode is stupid. You might “believe” in an afterlife but retain a concrete, near-mode fear of death. You might advocate Communism but have a sinking feeling in your stomach as you conduct your tour of Stalin’s Russia.
Thus: trust abstract reasoning or concrete anticipations in different situations, according to their strengths. But, whichever one you bet your actions on, keep the other one in view. Ask it what it expects and why it expects it. Show it why you disagree (visualizing your evidence concretely, if you’re trying to talk to your wordless anticipations), and see if it finds your evidence convincing. Try to grow all your cognitive subsystems, so as to form a whole mind.
5. Use raw motivation, emotion, and behavior to determine at least part of your priorities.
One of the commonest routes to theory-driven nuttiness is to take a “goal” that isn’t your goal. Thus, folks claim to care “above all else” about their selfish well-being, the abolition of suffering, an objective Morality discoverable by superintelligence, or average utilitarian happiness-sums. They then find themselves either without motivation to pursue “their goals”, or else pulled into chains of actions that they dread and do not want.
Concrete local motivations are often embarrassing. For example, I find myself concretely motivated to “win” arguments, even though I'd think better of myself if I were driven by curiosity. But, like near-mode beliefs, concrete local motivations can act as a safeguard and an anchor. For example, if you become abstractly confused about meta-ethics, you'll still have a concrete desire to pull babies off train tracks. And so dialoguing with your near-mode wants and motives, like your near-mode anticipations, can help build a robust, trustworthy mind.
Why it matters (again)
Safety skills such as the above are worth learning: they make our abstract reasoning more trustworthy, make it easier for us to trust that reasoning, and so remove a barrier that keeps many people from thinking things through at all.
[1] These are not the only reasons people fear thinking. At a minimum, there is also the fear of unpleasant conclusions discussed above.
[2] Many points in this article, and especially in the "explicit reasoning is often nuts" section, are stolen from Michael Vassar. Give him the credit, and me the blame and the upvotes.
[3] Carl points out that Eliezer points out that studies show we can't. But it seems like explicitly modeling when your friend is and isn't accurate, and when explicit models have and haven't led you to good actions, should at least help.