"In a sufficiently mad world, being sane is actually a disadvantage"
– Nick Bostrom
Followup to: What is rationality?
A canon of work on "rationality" has built up on Less Wrong. In What is rationality?, I listed most of the topics and paradigms that have been used extensively on Less Wrong, including simple calculation and logic[1], probability theory, cognitive biases, the theory of evolution, analytic philosophical thinking, and microeconomics. I defined "rationality" as the ability to do well on hard decision problems - often abbreviated to "winning": choosing actions that cause you to do very well.
However, I think that the rationality canon here on Less Wrong is not very good at helping the people who read it actually do well at most of life's challenges. This post is therefore a criticism of the LW canon.
If the standard for judging methods is whether they give you the ability to do well on a wide range of hard real-life decision problems, with a wide range of terminal values being optimized for, then Less-Wrong-style rationality fails: the people who read it seem mostly to succeed only at the goal that most others in society would label "being a nerd".[2] We don't seem to have a broad range of people pursuing and winning at a broad range of goals (though there are a few exceptional people here).
Although the equations of probability theory and expected utility do not state that you have to be a "Spock rationalist" to use them, in reality I see more Spock than Kirk. I myself am not exempt from this critique.
What, then, is missing?
The problem, I think, is that the original motivation for Less Wrong was the bad planning decisions that society as a whole takes.[3] When society acts, it tends to benefit most when it acts according to what I would call the Planning model of winning, where reward is a function of the accuracy of beliefs and the efficacy of explicitly reasoned plans.
But individuals within a society do not get their rewards solely based upon the quality of their plans: the environment around us systematically rewards and punishes us according to:
- Our personality traits and other psychological factors such as courage, happiness set-point, self-esteem, etc.
- The group we are a member of, especially our close friends and associates.
- Our skill in dealing with people, which we might call "emotional intelligence".
- The shibboleths we display, the signals we send out (especially signaling-related beliefs) and our overall style.
The Less Wrong canon therefore pushes people who read it to concentrate mostly on the wrong kind of thought processes. The "planning model" of winning is useful for thinking about what people call analytical skill, which is in turn useful for solitary challenges that involve a detailed mechanistic environment that you can manipulate. Games like Alpha Centauri and Civilization come to mind, as do computer programming, mathematics, science and some business problems.
Most of the goals that most people hold in life cannot be solved by this kind of analytic planning alone, but the ones that can (such as how to code, do math or physics) are heavily overrepresented on LW. The causality probably runs both ways: people whose main skills are analytic are attracted to LW because the existing discussion on LW is very focused on "nerdy" topics, and the kinds of posts that get written tend to focus on problems that fall into the planning model because that's what the posters like thinking about.
[1]: Simple calculation and logic are not usually mentioned on LW, probably because most people here are sufficiently well educated that these skills are almost completely automatic for them. In effect, it is a solved problem for the LW community. But out in the wider world, the sanity waterline is much lower: most people cannot avoid simple logical errors such as affirming the consequent, and cannot solve simple Fermi problems.
[2]: I am not trying to pass judgment on the goal of being an intellectually focused, not-conventionally-socializing person: if that is what a person wants, then from their axiological point of view it is the best thing in the world.
[3]: Paying no attention to futurist topics like cryonics or AI, which matter a lot; making dumb decisions about how to allocate charity money; and making relatively dumb decisions about how to efficiently allocate resources to make the distribution of human experiences better overall.
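To make the footnote's point about Fermi problems concrete, here is the classic "piano tuners in Chicago" estimate as a chain of rough factors. Every number below is an invented round guess of my own, not data from the post; the point is only that multiplying a few defensible guesses lands within an order of magnitude of the truth.

```python
# Classic Fermi estimate: how many piano tuners work in Chicago?
# All inputs are rough guesses chosen for round numbers.
population = 3_000_000           # approximate population of Chicago
people_per_household = 2         # rough average household size
piano_ownership_rate = 1 / 20    # guess: 1 in 20 households owns a piano
tunings_per_piano_per_year = 1   # pianos get tuned about once a year
tunings_per_tuner_per_day = 4    # a tuner can service ~4 pianos a day
working_days_per_year = 250

pianos = population / people_per_household * piano_ownership_rate
tunings_demanded = pianos * tunings_per_piano_per_year
tunings_supplied_per_tuner = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_demanded / tunings_supplied_per_tuner

print(round(tuners))  # → 75, i.e. "on the order of tens of tuners"
```

The answer is not meant to be exact; the skill being tested is decomposing an opaque question into factors you can bound.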
The second I think. (I feel about the same for topics in which I have shown interest, so it's not about my level of interest.)
If I wanted to force a conversation about a particular subculture or hot-button topic not obviously related to rationality, and I were called out on it, I could probably contrive a defensible list of ways my desired subject relates to rationality. For example, I took your list of bullet points for PUA and adapted most of them to race and IQ (a subject I'm more familiar with):
In spite of the connections to rationality just listed, I'd expect a discussion of race and IQ to flirt with the failure modes of (1) adversarial nitpicking of minutiae and/or (2) arguing about the politics surrounding the topic rather than the topic itself. The first time I walked into this argument on Less Wrong, I felt I ended up in the first failure mode. When it came up again in this month's Open Thread, the poster who started the discussion seemed to want to discuss the politics of it, and I didn't see the resulting subthread as casting new light on rationality.
I say this even though threads like that do often have people making and evaluating truth-claims; I just don't count that kind of thing as 'real' rationality unless it could plausibly make a rationality lightbulb go off in my head ('Ooooohhh, I never got Eliezer's exposition of causal screening before, but this example totally makes it obvious to me' - stuff like that). I can find intelligent arguments about various subcultures and issues elsewhere on the internet - I expect something else, or maybe something more specific, from LW.
This doesn't mean I don't/can't/won't learn about rationality in a hands-on way - applying what you learn is how you know you've learned it. Still, on LW I expect discussions presented as 'here is a general point about rationality, demonstrated with a few little examples from my pet issue' to stay on topic more effectively than if they're presented as 'here is my pet issue with a side serving of rationality,' and I expect that whether or not I can draw abstract connections between my pet topic and rationality.
Hmmm. I've written a lot here because I don't feel like I'm adequately communicating what I mean. I suppose what I'm thinking is something like a generalization of 'Politics is the Mind-Killer' - even things tangentially related to rationality can mind-kill, so I'm wary about what I label on-topic. Quite likely more wary than whoever's reading this.
On a side note, I tried profiling (albeit crudely) a thread about a hot topic to find out how well it focused on relevant data and the elements of rationality discussed on LW. I picked this month's Open Thread's subthread about race and IQ because it wasn't very long and I posted in it, so I had some idea how it progressed. On each comment I ticked off whether it mentioned evidence, made a testable prediction, or explicitly made connections to LWish heuristics and catchphrases, with the rationale that comments that did any of these were more likely to be rationality-relevant than those that didn't. (I also tried ticking off which comments were mostly focused on politics and which weren't, but I couldn't do that quickly and fairly, so I didn't bother.) Here's my data for anyone who wants to check my work.
The subthread has 74 comments: 13 mentioned evidence, 3 made a testable prediction, 10 explicitly made connections to LWish heuristics and catchphrases, and 50 did none of these. Those 50 comments had a mean score of 2.7; the 24 comments that mentioned data/predictions/rationality tropes had a mean score of 2.4.
That suggests that the overtly rationality-ish comments were not only outnumbered but also scored slightly lower on average. I wouldn't want to generalize from this quick little survey, but I do wonder whether the same trend would show up in arguments about feminism, PUA, global warming, 9/11, or other subjects that can be controversial here.
Regarding the ratios of comment types: have you compared them at all to subthreads about other topics, possibly less controversial ones? Without some idea of the baseline for an equivalent LW conversation about a less controversial topic, it is very hard to evaluate this data.
I'm not sure, incidentally, that I agree with your breakdown of comments. For example, you include the comment that started off the conversation as in none of the categories. Even just asking a worthwhile question should be worth something. And since this comment was at +17, even just...