"In a sufficiently mad world, being sane is actually a disadvantage"
– Nick Bostrom
Followup to: What is rationality?
A canon of work on "rationality" has built up on Less Wrong; in What is rationality?, I listed most of the topics and paradigms that have been used extensively here, including simple calculation and logic [1], probability theory, cognitive biases, the theory of evolution, analytic philosophical thinking, and microeconomics. I defined "rationality" as the ability to do well on hard decision problems, often abbreviated to "winning": choosing the actions that cause you to do very well.
However, I think that the rationality canon here on Less Wrong is not very good at getting the people who read it to actually do well at most of life's challenges. This post is therefore a criticism of the LW canon.
If the standard for judging methods is whether they give you the ability to do well on a wide range of hard real-life decision problems, across a wide range of terminal values being optimized for, then Less-Wrong-style rationality fails: the people who read it seem mostly to succeed only at the goal that most others in society would label "being a nerd" [2]. We do not seem to have a broad range of people pursuing, and winning at, a broad range of goals (though there are a few exceptional people here).
Although the equations of probability theory and expected utility do not state that you have to be a "Spock rationalist" to use them, in reality I see more Spock than Kirk. I myself am not exempt from this critique.
What, then, is missing?
The problem, I think, is that the original motivation for Less Wrong was the bad planning decisions that society as a whole makes [3]. When society acts, it tends to do best when it operates under what I would call the planning model of winning, where reward is a function of the accuracy of your beliefs and the efficacy of your explicitly reasoned plans.
But individuals within a society do not get their rewards solely based on the quality of their plans: we are systematically rewarded and punished by the environment around us according to:
- Our personality traits and other psychological factors such as courage, happiness set-point, self-esteem, etc.
- The groups we belong to, especially our close friends and associates.
- Our skill in dealing with people, which we might call "emotional intelligence".
- The shibboleths we display, the signals we send out (especially signaling-related beliefs) and our overall style.
The Less Wrong canon therefore pushes its readers to concentrate mostly on the wrong kinds of thought processes. The planning model of winning is useful for thinking about what people call analytical skill, which is in turn useful for solitary challenges that involve a detailed, mechanistic environment you can manipulate. Games like Alpha Centauri and Civilization come to mind, as do computer programming, mathematics, science, and some business problems.
Most of the goals that most people hold in life cannot be achieved by this kind of analytic planning alone, but the ones that can (such as learning to code, or doing math or physics) are heavily overrepresented on LW. The causality probably runs both ways: people whose main skills are analytic are attracted to LW because the existing discussion here is heavily focused on "nerdy" topics, and the posts that get written tend to focus on problems that fall within the planning model because that is what the posters like thinking about.
[1]: Simple calculation and logic are not usually mentioned on LW, probably because most people here are sufficiently well educated that these skills are almost completely automatic for them. In effect, it is a solved problem for the LW community. But out in the wider world, the sanity waterline is much lower: most people cannot avoid simple logical errors such as affirming the consequent, and cannot solve simple Fermi problems (a worked example follows these notes).
[2]: I am not trying to pass judgment on the goal of being an intellectually focused, not-conventionally-socializing person: if that is what a person wants, then from their axiological point of view it is the best thing in the world.
[3]: For example: paying no attention to futurist topics like cryonics or AI, which matter a lot; making dumb decisions about how to allocate charity money; and making relatively dumb decisions about how to allocate resources efficiently so as to improve the overall distribution of human experiences.
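To make the Fermi-problem point in note [1] concrete, here is a minimal sketch of the classic "piano tuners in Chicago" estimate. Every input below is an invented, order-of-magnitude assumption rather than a researched figure; the point is only that chaining rough guesses usually lands within a factor of ten of the truth.

```python
# A toy Fermi estimate: how many piano tuners work in Chicago?
# Every input is a rough, illustrative assumption.

population = 3_000_000                   # people in Chicago (rough)
people_per_household = 2                 # average household size (rough)
piano_fraction = 1 / 20                  # fraction of households with a piano
tunings_per_piano_per_year = 1           # a piano gets tuned about once a year
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 per day, 5 days/week, 50 weeks/year

pianos = population / people_per_household * piano_fraction
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(f"Estimated piano tuners: {tuners:.0f}")  # prints ~150
```

None of the inputs survives scrutiny on its own, yet the multiplied-out estimate is usually the right order of magnitude, which is exactly the skill the footnote claims most people lack.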
I didn't actually say anything about my prior probability. I just said I went from a 'pretty low' probability of some kind of conspiracy to a slightly higher probability based on this new information.
Nonetheless, I think you are wrong to say this is a meaningless statement. There is a real phenomenon of "conspiracy theories": explanations that share certain features and that, in my opinion, tend to lead people to place unduly high probabilities on certain types of explanation by playing into natural biases in human thought. Because I believe in this pattern of poorly calibrated estimates, when I see a theory that fits the pattern I apply a discount factor to the arguments of the people proposing it.
It is also difficult to organize and maintain a conspiracy, so even independent of the effect I describe above, an explanation that involves an elaborate conspiracy has a lower prior than one that does not, all else being equal. This does not need to be quantified to be meaningful; a qualitative use of priors is still a useful aid to reasoning.
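To show how a qualitative prior cashes out, here is a toy Bayes update; the numbers are entirely invented for illustration and are not anyone's actual estimates. Even when the evidence moderately favors the conspiracy explanation, a low prior keeps the posterior small.

```python
# A toy Bayes update with invented numbers: a low prior on "elaborate
# conspiracy" keeps the posterior modest even when the evidence is
# several times more likely under the conspiracy explanation.

prior = 0.01             # P(conspiracy): deliberately low
p_e_given_c = 0.5        # P(evidence | conspiracy)
p_e_given_not_c = 0.1    # P(evidence | no conspiracy)

posterior = (p_e_given_c * prior) / (
    p_e_given_c * prior + p_e_given_not_c * (1 - prior)
)
print(f"Posterior: {posterior:.3f}")  # ~0.048
```

A 5:1 likelihood ratio moves the probability from 1% to roughly 5%: genuinely higher, but still low, which matches going from "pretty low" to "slightly higher".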
One reason the new information I mentioned above raised my estimate is that it addressed one major problem I had with the conspiracy-theory explanations: the lack of a motive that I could understand. Given my broader understanding of geopolitics, the disappearance of a large quantity of physical gold seems like a strong motive for some kind of government cover-up, and a clearer motive for co-conspirators (government or otherwise) in the attack.