Reading your post felt very weird to me, as if you were deliberately avoiding the obvious conclusion from your own examples! Do you really believe that people keep kosher or die in religious wars due to using abnormally explicit reasoning? The common thread in your examples is putting ideals over personal gain, not reasoning over instinct: too much acting on explicitly stated values, not on explicitly stated beliefs. In truth, using rationality for personal gain isn't nearly as dangerous as idealism/altruism, and doesn't seem to require the precautions you go on to describe. If any of the crazy things I do failed to help me, I'd just stop doing them.
Which prompts a question to everyone: what crazy things do you do that help you? (Rather than help save the light cone or something.)
I strongly disagree. I specifically think people DO die in religious wars due to using abnormally explicit reasoning.
I also agree with MichaelVassar: I think much religious harm comes from using abnormally explicit reasoning.
This is because (I hypothesize that) great moral failures come about when a group of people (often a religion, but any ideological group) think they've hit upon an absolute "truth" and then expect they can apply this truth to derive a complete ethical code. The evil comes in when they mistakenly think that morality can be described by some set of universal and self-consistent principles, and they apply a principle valid in one context to another with disastrous results. When they apply the principle to the inappropriate domain, they should feel a twinge of conscience, but they override this twinge with their reason: they believe in the original principle, and it implies this thing here, which is correct, so that thing over there, which it also implies, must be correct too. In the end, they use reason to override their natural human morality.
The Nazis are the main example I have in mind, but to look at a less painful example, the Catholic church is another example of over-extending principles due to reasoning. Valuing human life and general societal openness to pr...
A quote from the linked-to “cautions for Christians against clever arguments”, to save others the pain of wading through it to figure out what it's talking about:
It always begins the same way. They swallow first the rather subtle line that it is necessary for each to think for himself, to judge everything by the light of whether it appears reasonable to him. There is never any examination of that basic premise, though what it is really saying is that the mind of man becomes the ultimate test, the ultimate authority of all life. It is necessary for man to reason and it is necessary for him to think for himself and to examine things. But we are creatures under God, and we never can examine accurately or rightly until we begin with the basic recognition that all of man's thinking, blinded and shadowed as it is with the confusion of sin, must be measured by the Word of God. There is the ultimate authority.
Thanks for a ton of great tips, Anna; I just wanted to nitpick one:
Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already. Remember the Conservation of Expected Evidence more generally.
I suspect that reading enough X-ist books will affect my beliefs for any X (well, nearly any). The key word is enough: I suspect that fully immersing myself in just about any subject, and surrounding myself entirely with people who advocate it, would significantly alter my beliefs, regardless of the validity of X.
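For reference, the identity behind the quoted advice (Conservation of Expected Evidence) is just the law of total probability: if you expect to update on evidence E by conditioning, your current credence in a hypothesis H already equals the expected value of your post-update credence:

$$\mathbb{E}\big[P(H \mid E)\big] \;=\; \sum_{e} P(E = e)\, P(H \mid E = e) \;=\; P(H).$$

So a predictable net shift toward X from reading the books would mean your current credence is not where your own expectations say it should be.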
It wouldn't necessarily make you a believer. Worked example: I joined in the battle of Scientology vs. the Net in 1995 and proceeded to learn a huge amount about Scientology and everything to do with it. I slung the jargon so well that some ex-Scientologists refused to believe I'd never been a member (though I never was). I checked my understanding with ex-Scientologists, and it was largely correct.
None of this put me an inch toward joining up. Not even slightly.
To understand something is not to believe it.
That said, it'll provide a large and detailed pattern in your head for you to form analogies with, good or bad.
Alexflint said:
I suspect that fully immersing myself in just about any subject, and surrounding myself entirely with people who advocate it, would significantly alter my beliefs, regardless of the validity of X.
It seems that your experience was learning about anti-Scientology facts while surrounded by people who advocated anti-Scientology.
So it's completely unsurprising that you remained anti-Scientology.
Had you been learning about Scientology from friends of yours who were Scientologists, you might have had a much harder time maintaining your viewpoint.
Similarly, learning about Christianity through the Skeptic's Annotated Bible is very different from learning about Christianity through a Christian youth group.
Well, yeah. Scientology is sort of the Godwin example of dangerous infectious memes. But I've found the lessons most useful in dealing with lesser ones, and it taught me superlative skills in how to inspect memes and logical results in a sandbox.
Perhaps these have gone to the point where I've recompartmentalised and need to aggressively decompartmentalise again. Anna Salamon's original post is IMO entirely too dismissive of the dangers of decompartmentalisation described in the Phil Goetz post, which is about people who accidentally decompartmentalise memetic toxic waste and come to the startling realisation that they need to bomb academics or kill the infidel or whatever. But you always think it'll never happen to you. And that belief is false, because you're running on unreliable hardware with all manner of exploits and biases, and being able to enumerate them doesn't grant you immunity. And there are predators out there, evolved to eat people who think it'll never happen to them.
My own example: I signed up for a multi-level marketing company, which only cost me a year of my life and most of my friends. I should detail precisely how I reasoned myself into it. It was all very logical. The process of re...
Anna - I'm favorably impressed by this posting! Thanks for making it. It makes me feel a lot better about what SIAI staff mean by rationality.
In the past I've had concerns that SIAI's focus on a future intelligence explosion may be born of explicit reasoning that's nuts (in the sense of your article), and the present posting does a fair amount to assuage my concerns - I see it as a strong indicator that you and some of the other SIAI staff are vigilant against the dangers of untrustworthy explicit reasoning.
Give Michael my regards.
If you can predict what you'll believe a few years from now, consider believing that already.
I've been thinking about this lately. Specifically, I've been considering the following question:
If you were somehow obliged to pick which of your current beliefs you'd disagree with in eight years' time, with real and serious consequences for picking correctly or incorrectly, what criteria would you use to pick them?
I'm pretty sure that difficulty in answering this question is a good sign.
I think it is a bit unfair to frame arguments to trust outside views or established experts as arguments to not think about things. Rather, they are arguments about how much one should trust inside views or one's own thoughts relative to other sources.
Thanks for posting this, it's awesome.
I particularly endorse trying to build things out of your abstract reasoning, as a way of moving knowledge from "head-knowledge" to "fingers-knowledge".
Regarding this sentence: "Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already."
Since I'm irrational (memetically insecure) and persuasive deceptions (memetic rootkits) exist, the sentence needs some qualifier. Maybe: "If you believe that the balance of the unknown arguments favors believing X, then you have reason to believe X."
Make every link in a chain of argument explicit. Most of the weirder conclusions I have seen in my own and others' beliefs came about because several different lines of reasoning were conflated, or because steps that appeared "obvious" were skipped over; those steps included mistaken assumptions that were never noticed because they weren't spelled out explicitly.
Also, be very careful not to conflate different meanings of a word; the differences can sometimes be very subtle, so you need to be watchful.
For actually reasoning with an argument, keep it...
There is a much simpler way of winning than carefully building up your abstract-reasoning ability to the point where it produces usefully accurate, unbiased, well-calibrated probability distributions over relevant outcome spaces.
The simpler way is just to recognize that, as a human in a western society, you won't lose much more or win much more than the other humans around you. So you may as well dump the abstract reasoning and rationality, and pick some humans who seem to live relatively non-awful lives (e.g. your colleagues/classmates) and take whatever...
The main problem I see with this post is that it assumes that it's always advantageous to find out the truth and update one's beliefs towards greater factual and logical accuracy. Supposedly, the only danger of questioning things too much is that attempts to do so might malfunction and instead move one towards potentially dangerous false beliefs (which I assume is meant by the epithets such as "nutty" and "crazy").
Yet I find this assumption entirely unwarranted. The benefits of holding false beliefs can be greater than the costs. This ...
The problem with the most poignant examples is that it's impossible to find beliefs that signal low status and/or disreputability in modern mainstream society and are also uncontroversially true. The mention of any concrete belief that is, to the best of my knowledge, both true and disreputable will likely lead to a dispute over whether it's really true. Yet claiming that there are no such beliefs at all is a very strong assertion, especially considering that nobody could deny that this would constitute a historically unprecedented state of affairs.
To avoid getting into such disputes, I'll give only two weaker and (hopefully) uncontroversial examples.
As one example, many people have unrealistic, idealized views of some important persons in their lives -- their parents, for example, or significant others. If they subject these views to rational scrutiny, and perhaps also embark on fact-finding missions about these persons' embarrassing past mistakes and personal failings, their new opinions will likely be more accurate, but doing so may make them much unhappier, and possibly also shatter their relationships, with all sorts of potentially awful consequences. This seems like a clear an...
Propositional calculus is brittle. A contradiction implies everything.
In Set theory, logic and their limitations Machover calls this the Inconsistency Effect. I'm surprised to find that this doesn't work well as a search term. Hunting I find:
In classical logic, a contradiction is always absurd: a contradiction implies everything.
Another trouble is that the logical conditional is such that P AND ¬P ⇒ Q, regardless of what Q is taken to mean. That is, a contradiction implies that absolutely everything is true.
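To spell the "Inconsistency Effect" out, here is the standard derivation of an arbitrary Q from a contradiction (a sketch in classical propositional logic):

$$\begin{aligned}
&1.\;\; P \wedge \neg P &&\text{(the contradiction, assumed)}\\
&2.\;\; P &&\text{(from 1, conjunction elimination)}\\
&3.\;\; P \vee Q &&\text{(from 2, disjunction introduction; } Q \text{ is arbitrary)}\\
&4.\;\; \neg P &&\text{(from 1, conjunction elimination)}\\
&5.\;\; Q &&\text{(from 3 and 4, disjunctive syllogism)}
\end{aligned}$$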
Any false fact that y...
I wouldn't say that this is a fear of an "inaccurate conclusion," as you say. Instead, it's a fear of losing control and becoming disoriented: "losing your bearings," as you said. You're afraid that your most trustworthy asset - your ability to reason through a problem and come out safe on the other side; an asset that should never fail you - will fail you and lead you down a path you don't want to go. In fact, it could lead to Game Over if you let that lead you to kill or be killed, as you highlight in your examples of the Unabomber...
For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to heaven, and generally treat their chances of Heaven vs Hell as their top priority.
Steven Landsburg used this reasoning, combined with the fact that Christians don't generally do this, to conclude not that Christians don't act on their beliefs, but that Christians don't generally believe what they claim to believe. I think the different conclusion is reached because he assigns a lot more rationa...
I, too, really appreciated this post.
Unfortunately, though, I think that you missed one of the most important skills for safer reasoning -- recognizing and acknowledging assumptions (and double-checking that they are still valid). Many of the most dangerous failures of reasoning occur when a normally safe assumption is carried over to conditions where it is incorrect. Diving three feet into water that is unobstructed and at least five feet deep won't lead to a broken neck -- unless the temperature is below zero centigrade.
I like this; it seems practical and realistic. As a point of housekeeping, double-check the spaces around your links--some of them got lost somewhere. :)
Posted on behalf of someone else who had the following comment:
I would have liked for [this post] to contain details about how to actually do this:
If you're uncomfortable not knowing, so that you find yourself grasping for one framework after another, build your tolerance for ambiguity, complexity, and unknowns.
People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger.
I routinely walk (and run) alone at night. Indeed, I plan on going for a 40k run/walk alone tonight. Yet I observe that walking alone at night really does seem to involve danger - particularly if you are an attractive female.
I actually know people (ok, so I am using my sisters as anecdotes) who are more likely to fear considering a "don't walk alone at night" strategy because it may mean they would have to sacrifice their exercise routine. Fortunately Melbourne is a relatively safe city as far as 'cities in the world' go.
I'd just like to point out that 5 looks like a specific application of 1: recognizing that your "goal" is just what you think is your goal, and that you can be mistaken about it in many ways.
Minor typo - "denotationally honest, including refusing to jobs that required a government loyalty oath" - no need for "to" before "jobs".
Or: “I don’t want to think about that! I might be left with mistaken beliefs!”
Related to: Rationality as memetic immune disorder; Incremental progress and the valley; Egan's Law.
tl;dr: Many of us hesitate to trust explicit reasoning because... we haven’t built the skills that make such reasoning trustworthy. Some simple strategies can help.
Most of us are afraid to think fully about certain subjects.
Sometimes, we avert our eyes for fear of unpleasant conclusions. (“What if it’s my fault? What if I’m not good enough?”)
But other times, oddly enough, we avert our eyes for fear of inaccurate conclusions.[1] People fear questioning their religion, lest they disbelieve and become damned. People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger. And I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things.
Ostrich Theory, one might call it. Or I’m Already Right theory. The theory that we’re more likely to act sensibly if we don’t think further, than if we do. Sometimes Ostrich Theories are unconsciously held; one just wordlessly backs away from certain thoughts. Other times full or partial Ostrich Theories are put forth explicitly, as in Phil Goetz’s post, this LW comment, discussions of Tetlock's "foxes vs hedgehogs" research, enjoinders to use "outside views", enjoinders not to second-guess expert systems, and cautions for Christians against “clever arguments”.
Explicit reasoning is often nuts
Ostrich Theories sound implausible: why would not thinking through an issue make our actions better? And yet examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse. Examples include, among many others:
In fact, the examples of religion and war suggest that the trouble with, say, Kaczynski wasn’t that his beliefs were unusually crazy. The trouble was that his beliefs were an ordinary amount of crazy, and he was unusually prone to acting on his beliefs. If the average person started to actually act on their nominal, verbal, explicit beliefs, they, too, would in many cases look plumb nuts. For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to heaven, and generally treat their chances of Heaven vs Hell as their top priority. Someone else might risk their life savings betting on an election outcome or business about which they were “99% confident”.
That is: many people's abstract reasoning is not up to the task of day-to-day decision-making. This doesn't impair folks' actions all that much, because people's abstract reasoning has little bearing on their actual actions. Mostly we just find ourselves doing things (out of habit, emotional inclination, or social copying) and make up the reasons post-hoc. But when we do try to choose actions from theory, the results are far from reliably helpful -- and so many folks' early steps toward rationality go unrewarded.
We are left with two linked barriers to rationality: (1) nutty abstract reasoning; and (2) fears of reasoned nuttiness, and other failures to believe that thinking things through is actually helpful.[2]
Reasoning can be made less risky
Much of this nuttiness is unnecessary. There are learnable skills that can both make our abstract reasoning more trustworthy and also make it easier for us to trust it.
Here's the basic idea:
If you know the limitations of a pattern of reasoning, learning better what it says won’t hurt you. It’s like having a friend who’s often wrong. If you don’t know your friend’s limitations, his advice might harm you. But once you do know, you don’t have to gag him; you can listen to what he says, and then take it with a grain of salt.[3]
Reasoning is the meta-tool that lets us figure out which methods of inference are trustworthy where. Reason lets us look over the track records of our own explicit theorizing, outside experts' views, our near-mode intuitions, etc., and figure out how trustworthy each is in a given situation.
If we learn to use this meta-tool, we can walk into rationality without fear.
Skills for safer reasoning
1. Recognize implicit knowledge.
Recognize when your habits, or outside customs, are likely to work better than your reasoned-from-scratch best guesses. Notice how different groups act and what results they get. Take pains to stay aware of your own anticipations, especially in cases where you have explicit verbal models that might block your anticipations from view. And, by studying track records, get a sense of which prediction methods are trustworthy where.
Use track records; don't assume that just because folks' justifications are incoherent, the actions they are justifying are foolish. But also don't assume that tradition is better than your models. Be empirical.
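One concrete way to be empirical about track records is to keep an explicit prediction log and score it later, split out by whichever method produced each prediction. Below is a minimal sketch; the data layout, the labels like "explicit model" and "gut feeling", and the choice of Brier score are my own illustration rather than anything prescribed here:

```python
from collections import defaultdict


def brier_score(prob: float, outcome: bool) -> float:
    """Squared error between a stated probability and what actually happened.

    0.0 is a perfect forecast; a constant 50% guess earns 0.25.
    """
    return (prob - (1.0 if outcome else 0.0)) ** 2


def score_by_method(log):
    """Return the average Brier score for each prediction method.

    `log` is a list of (method, probability, outcome) tuples, e.g.
    ("explicit model", 0.9, True) or ("gut feeling", 0.6, False).
    """
    totals = defaultdict(lambda: [0.0, 0])  # method -> [sum of scores, count]
    for method, prob, outcome in log:
        totals[method][0] += brier_score(prob, outcome)
        totals[method][1] += 1
    return {method: total / count for method, (total, count) in totals.items()}


if __name__ == "__main__":
    # Toy track record: three prediction sources scored on past questions.
    log = [
        ("explicit model", 0.90, True),
        ("explicit model", 0.95, False),
        ("gut feeling", 0.60, True),
        ("gut feeling", 0.70, True),
        ("outside view", 0.80, True),
        ("outside view", 0.75, False),
    ]
    for method, avg in sorted(score_by_method(log).items(), key=lambda kv: kv[1]):
        print(f"{method}: average Brier score {avg:.3f}")
```

Lower average scores mark the prediction sources that have earned more trust, which is the kind of per-domain calibration the paragraph above is asking for.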
2. Plan for errors in your best-guess models.
We tend to be overconfident in our own beliefs, to overestimate the probability of conjunctions (such as multi-part reasoning chains), and to search preferentially for evidence that we’re right. (The conjunction problem compounds quickly: a five-step argument in which you hold each step at 90% confidence is, even assuming the steps are independent, only about 59% likely to hold overall, since 0.9^5 ≈ 0.59.) Put these facts together, and theories folks are "almost certain" of turn out to be wrong pretty often. Therefore:
3. Beware rapid belief changes.
Some people find their beliefs changing rapidly back and forth, based for example on the particular lines of argument they're currently pondering, or the beliefs of those they've recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions. If this is your situation:
4. Update your near-mode anticipations, not just your far-mode beliefs.
Sometimes your far-mode is smart and your near-mode is stupid. For example, Yvain's rationalist knows abstractly that there aren’t ghosts, but nevertheless fears them. Other times, though, your near-mode is smart and your far-mode is stupid. You might “believe” in an afterlife but retain a concrete, near-mode fear of death. You might advocate Communism but have a sinking feeling in your stomach as you conduct your tour of Stalin’s Russia.
Thus: trust abstract reasoning or concrete anticipations in different situations, according to their strengths. But, whichever one you bet your actions on, keep the other one in view. Ask it what it expects and why it expects it. Show it why you disagree (visualizing your evidence concretely, if you’re trying to talk to your wordless anticipations), and see if it finds your evidence convincing. Try to grow all your cognitive subsystems, so as to form a whole mind.
5. Use raw motivation, emotion, and behavior to determine at least part of your priorities.
One of the commonest routes to theory-driven nuttiness is to take a “goal” that isn’t your goal. Thus, folks claim to care “above all else” about their selfish well-being, the abolition of suffering, an objective Morality discoverable by superintelligence, or average utilitarian happiness-sums. They then find themselves either without motivation to pursue “their goals”, or else pulled into chains of actions that they dread and do not want.
Concrete local motivations are often embarrassing. For example, I find myself concretely motivated to “win” arguments, even though I'd think better of myself if I were driven by curiosity. But, like near-mode beliefs, concrete local motivations can act as a safeguard and an anchor. For example, if you become abstractly confused about meta-ethics, you'll still have a concrete desire to pull babies off train tracks. And so dialoguing with your near-mode wants and motives, like your near-mode anticipations, can help build a robust, trustworthy mind.
Why it matters (again)
Safety skills such as the above are worth learning for three reasons.
[1] These are not the only reasons people fear thinking. At minimum, there is also:
[2] Many points in this article, and especially in the "explicit reasoning is often nuts" section, are stolen from Michael Vassar. Give him the credit, and me the blame and the upvotes.
[3] Carl points out that Eliezer points out that studies show we can't. But it seems like explicitly modeling when your friend is and isn't accurate, and when explicit models have and haven't led you to good actions, should at least help.