HPMoR is very very popular and broadly appealing (as rationality lit goes), so that seems to be our biggest leverage point for spreading LW to people who aren't already academics or programmers, like the secularist and wider geeknerd communities.
Currently, we seem to be making only a little use of that resource for sustained, active, explicit community-building outreach. LW is not optimized for community discussions among people who haven't already spent a few months or years studying mathematics, programming, a very specific flavor of analytic philosophy, or past LW posts like the Sequences. We're catching tons of fish friends, and throwing nearly all of them back in the ocean. The only non-LW community that seems targeted at HPMoR people is the subreddit, but we're doing almost nothing to make that subreddit useful for rationality training, or appealing to people who want to do more than geek out about the details of HPMoR's plot. Plus Reddit is not a great environment in general if we want to experiment, or to appeal to whoever LW doesn't appeal to.
I suggest: Start a new website as a community hub for HPMoR fans, and more generally for the demographic 'I'm not ve...
I have an intuition that a better future would be one where the concept of rationality (maybe called something different, but the same idea) is normal.
I am highly skeptical of this happening with human psychology kept constant, basically because I think rationality is de facto impossible for humans who are not at least ~2 standard deviations smarter than the mean. (I also suspect that most LWers, including me, have bad priors about what mean intelligence looks like.)
I think a more achievable goal is to make the concept of rationality cool. Being a movie star, for example, is cool but not normal. Rationality not being cool prevents otherwise sufficiently smart people from exploring it. My model of what raising the sanity waterline looks like in the short- to medium-term is to start from the smartest people (these are simultaneously the easiest and the highest-value people to make more rational) and work down the intelligence ladder from there.
I think 'can we make everyone rational?' is probably the wrong question. Better questions:
How much more rational could we make 2013 average-IQ people, by modifying their cultural environment and education? (That is, without relying on things like surgical or genetic modification.) What's the realistic limit of improvement, and when would diminishing returns make investing in further education a waste?
How do specific rationality skills vary in teachability? Are there some skills that are especially easy to culturally transmit (i.e., 'make cool' in a behavior-modifying way) or to instill in ordinary people?
How hard would the above approaches be? How costly is the required research and execution?
In addition to the obvious direct benefits of being more rational (which by definition means 'people make decisions that get them more of what they want' and 'people's beliefs are better maps'), how big are indirect benefits like Qiaochu's 'smart people see rationality as more valuable', or 'governments and individuals fund altruism (including rationality training) more effectively', or 'purchasing and voting habits are more globally beneficial'?
Suppose we were having this discussio...
The distinction I'm trying to make is between giving people optimized habits and memes as a package that they don't examine and giving people the skills to optimize their own habits and memes (by examining their current habits and memes). It's the latter I mean when I refer to spreading rationality, and it's the latter I expect to be quite difficult to do to people who aren't above a certain level of intelligence. It's the former I don't want to call spreading rationality; I want to call it something like "optimizing culture."
I don't think I'm talking about metarationality, but I might be (or maybe I think that rationality just is metarationality). Let me be more specific: let's pretend, for the sake of argument, that the rationalist community finds out that jogging is an optimal habit for various reasons. I would not call telling people they should jog (e.g. by teaching it in gym class in schools) spreading rationality. Spreading rationality to me is more like giving people the general tools to find out what object-level habits, such as jogging, are worth adopting.
The biggest difference between what I'm calling "rationality" and what I'm calling "optimized habits and memes" is that the former is self-correcting in a way that the latter isn't. Suppose the rationalist community later finds out that jogging is in fact not an optimal habit for various reasons. To propagate that change through a community of people who had been given a round of optimal habits and memes looks very different from propagating that change through a community of people who had been given general rationality tools.
How about habits and norms like:
(more)
It feels like it would be possible to get ordinary people to adopt at least some of these, and that their adoption would actually increase the general level of rationality.
The jargon thing.
I'm not sure this is avoidable, because precise concepts need precise terms. One of my favorite passages from Three Worlds Collide is:
But the Lady 3rd was shaking her head. "You confuse a high conditional likelihood from your hypothesis to the evidence with a high posterior probability of the hypothesis given the evidence," she said, as if that were all one short phrase in her own language.
That is the sort of concept which should be one short phrase in a language used by people who evaluate hypotheses by Bayesian thinking. Inaccessibility of jargon is often a sign of real inferential distance: someone needs to know what those two concepts are mathematically for that sentence-long explanation of a single phrase to make any sense, and explaining what those concepts are mathematically is a lecture or two by itself.
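To make the distinction concrete, here is a minimal sketch with made-up numbers (my own illustration, not anything from the story or the original passage): even when the evidence is very likely given the hypothesis, the hypothesis can remain unlikely given the evidence if its prior is low.

```python
# Toy illustration of P(E|H) (conditional likelihood of the evidence)
# versus P(H|E) (posterior probability of the hypothesis).
# All numbers are invented for illustration.

p_h = 0.01              # prior probability of the hypothesis
p_e_given_h = 0.95      # likelihood: P(evidence | hypothesis)
p_e_given_not_h = 0.10  # P(evidence | not hypothesis)

# Total probability of the evidence (law of total probability).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: posterior probability of the hypothesis given the evidence.
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(E|H) = {p_e_given_h:.2f}")    # 0.95 -- high conditional likelihood
print(f"P(H|E) = {p_h_given_e:.3f}")    # ~0.088 -- still a low posterior
```

Confusing the first number for the second is exactly the error the Lady 3rd is pointing at.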
(That said, I agree that in areas where a professional community has a technical term for a concept and LW has a different technical term for that concept, replacing LW's term with the professional community's term is probably a good move.)
...But intelligence and rationality are, in theory, orthogonal, or at least not the same thing.
INTP male programmer here. I've never posted an article and rarely comment.
One thing which keeps me from doing so is actually HPMoR, and EY's posts and sequences. They're all really long, and seem to be considered required reading. I know it's EY's style; he seems to prefer narratives. Unfortunately I don't have a lot of time to read all that, and much prefer a terser Hansonian style.
A shorter "getting started" guide would help me. Would it help others?
I've been giving the following post list to friends, to get them into LW:
2) Cognitive Biases Potentially Affecting Judgements of Global Risk
5) What do we mean by rationality?
7) Rationality: Appreciating Cognitive Algorithms
8) Skill: The Map is Not the Territory
9) Firewalling the Optimal from the Rational
10) The Lens that Sees its Flaws
11) The Martial Art of Rationality
12) No One Can Exempt You From Rationality's Laws
I'll occasionally adapt it, e.g. swapping the first two posts around for someone who has an academic background and will be more interested in starting with an academic paper.
Anyone looking for my current, extended version can find it here.
How does a doubt about the usefulness of rationality coexist with a desire to spread rationality? I see that many people can reconcile these two feelings just fine, but my mind just doesn't seem to work that way...
Well, there aren't many things that I don't doubt a little bit. I don't think this is a bad thing. However, in order to get anything done in life, instead of sitting in my room thinking about how much I don't know, I have to act on a lot of things that I'm a bit doubtful about.
Yvain said that clarity of mind was one benefit he'd had. I think clarity of mind is awesome and rare, and makes it less likely that people will do stupid things for bad reasons.
I've met Yvain and I think he's fairly awesome. Likewise, the other LW people I've met in real life seem disproportionately awesome: they have clarity of mind, yes, but it seems to lead into other things, like doing things differently because you're actually able to recognize the reason why you were doing non-useful things in the first place. Correlation not causation, of course, and I didn't know these people 5 years ago, and even if I had, people progress in awesomeness anyway. But still. Correlation = data in a direction, and it's not the direction of rationality being useless.
In his post Yvain distinguishes between regular rationality, which he thinks a lot of people have, and "x-rationality" that you get from long study of the Sequences' concepts. I think a lot fewer people have even regular rationality, that it's a continuum, not a divide, and that strategically placed and worded LW concepts could push almost anyone further towards the 'rationality' side.
I've changed a lot
This article points out a pretty important obstruction to the general spread of rationality:
So what were the key differences [between the slow spread of antiseptics and the fast spread of anesthesia]? First, one combatted a visible and immediate problem (pain); the other combatted an invisible problem (germs) whose effects wouldn’t be manifest until well after the operation.
Rationality training does not combat a visible and immediate problem because people do not have a sense that more is possible along this dimension.
I'm going to post multiple comments here because I have several separate thoughts about these issues and I want them to be voted on separately so I can get a better idea of people's thoughts on this matter. My comments on this post will be posted as comments to this comment; that way, people can also vote on the concept of posting multiple thoughts as separate comments.
Another problem is that there isn't really any standard "rationality test" or other way to determine how rational someone actually is, though some limited steps have been taken in that direction. Stanovich is working on one, but it can't be expected for another 3+ years at this stage.
This obviously limits the extent to which we can determine whether rationalists "actually win" (my impression, incidentally, is that they do but that there are a lot of skills that help more than current "rationality training" for the average person), what forms of rationality practice yield the most benefits, and so on.
When it comes to raising the sanity waterline, I can't help but think that the intelligence issue is likely to be a paper tiger. In fact I think LessWrong as a whole cares far too much about unusually intelligent people and that this is one of the biggest flaws of the community as a general-interest project. However, I also recognize that multiple purposes are at work here and such goal conflict may be inevitable.
Sure. Most rationality "in the wild" appears to be tacit rationality and building good habits, and I don't think that intelligence is particularly important for that. I would definitely predict, for instance, that rationality training could be accessible to people with IQs 0-1 standard deviations above the mean.
I agree on all points, but I don't see strong evidence for an easily teachable form of general rationality either, regardless of how intelligent the audience may be.
One other issue is that most of the people who have worked on developing rationality so far are themselves very intelligent. This sounds like it wouldn't particularly be a problem, but as Eliezer wrote in My Way:
"If there are parts of my rationality that are visibly male, then there are probably other parts—perhaps harder to identify—that are tightly bound to growing up with Orthodox Jewish parents, or (cough) certain other unusual features of my life."
Intelligence definitely strikes me as one of those unusual features.
Perhaps it could be said that current rationality practices, designed by the highly intelligent and largely practiced by the same, require high intelligence, but it nevertheless seems far from clear that all rationality practices require high intelligence.
Trying to learn rationality skills in my 20s, when a bunch of thought patterns are already overlearned, is even more frustrating.
I may be reading a non-existent connotation into this line, but to me it pattern matches with the belief that the human mind is a blank slate, as though you would have been rational if you hadn't been corrupted by society.
Humans are, at bottom, animals, and structured around uncritical stimulus-response type behavior. It's mysterious that humans are capable of transcending these things to achieve some sort of global rationali...
No no no. Not at all. I was obviously less rational as a baby than I am now. But childhood neuroplasticity is a thing; it's easier to learn languages before age 10, and preferably before age 5. And kids have time. As a kid, when I did competitive swimming, I used to be in the pool >10 hours a week. Now, as an adult, I do taekwondo, and although there are 10 hours of class a week available, I only make 2-3.
I did learn some maladaptive thought patterns: i.e. my social anxiety spiral around "you just don't have enough natural talent to do X", and the kicker, "you aren't good enough." I know this is a pretty meaningless phrase, but it has emotional power because it's been around so long.
Some notes/reactions in random order.
First, how do you understand rationality? Can you explain it in a couple of sentences without using links and/or references to lengthier texts?
Second, there are generally reasons why things happen the way they happen. I don't want to make an absolute out of that, but if a person's behavior seems irrational to you, there's still some reason, maybe understood, maybe not understood, maybe misunderstood, why the person behaves that way. Rationality will not necessarily fix that reason.
Third, consider domains like ...
Something you might work from to get an elevator pitch on the Spock problem thing: "Rationality = Ratios = Relative. Generally involves becoming less 'logical' (arguing)."
(yes, I know this isn't actually correct, but you have to start somewhere, and I'm not good enough with words to take it further)
Suggestion: teach rationality as an open spirit of enquiry, not as a secular religion that will turn you into a clone of Richard Dawkins.
Introduction
Less Wrong currently represents a tiny, tiny, tiny segment of the population. In its current form, it might only appeal to a tiny, tiny segment of the population. Basically, the people who have a strong need for cognition, who are INTx on the Myers-Briggs (65% of us as per 2012 survey data), etc.
Raising the sanity waterline seems like a generally good idea. Smart people who believe stupid things, and go on to invest resources in stupid ways because of it, are frustrating. Trying to learn rationality skills in my 20s, when a bunch of thought patterns are already overlearned, is even more frustrating.
I have an intuition that a better future would be one where the concept of rationality (maybe called something different, but the same idea) is normal. Where it's as obvious as the idea that you shouldn't spend more money than you earn, or that you should live a healthy lifestyle, etc. The point isn't that everyone currently lives debt-free, eats decently well, and exercises (they don't), but that these are normal things to do if you're a minimally proactive person who cares a bit about your future. No one has ever told me that doing taekwondo to stay fit is weird and culty, or that keeping a budget will make me unhappy because I'm overthinking things.
I think the questions of "whether we should try to do this" and "if so, how do we do it in practice?" are both valuable to discuss, and interesting.
Is making rationality general-interest a good goal?
My intuitions are far from 100% reliable. I can think of a few reasons why this might be a bad idea:
1. A little bit of rationality can be damaging; it might push people in the direction of too much contrarianism, or something else I haven't thought of. Since introspection is imperfect, knowing a bit about cognitive biases and the mistakes that other people make might make people actually less likely to change their minds: they see other people making those well-known mistakes, but not themselves. Likewise, rationality taught only as a tool or skill, without any underlying philosophy of why you should want to believe true things, might cause problems similar to those of martial arts skills taught without the traditional, often non-violent philosophies: it could result in people abusing the skill to win fights/debates, making the larger community worse off overall. (Credit to Yan Zhang for the martial arts metaphor.)
2. Making the concepts general-interest, or just growing too fast, might involve watering them down or changing them in some way such that the value of the LW microcommunity is lost. This could be worse for the people who currently enjoy LW even if it isn't worse overall. I don't know how easy it would be to avoid, or whether
3. It turns out that rationalists don't actually win, and x-rationality, as Yvain terms it, just isn't that amazing over and above already being proactive and doing stuff like keeping a budget. Yeah, you can say stuff like "the definition of rationality is that it helps you win", but if in real life all the people who deliberately try to increase their rationality end up worse off overall, by their own standards (or do equally well, but with less time left over for other fun pursuits), than the people who aim for their life goals directly, I want to know that.
4. Making rationality general-interest is a good idea, but not the best thing to be spending time and energy on right now because of Mysterious Reasons X, Y, Z. Maybe I only think it is because of my personal bias towards liking community stuff (and wishing all of my friends were also friends with each other and liked the same activities, which would simplify my social life, but probably shouldn't happen for good reasons).
Obviously, if any of these are the case, I want to know about it. I also want to know about it if there are other reasons, off my radar, why this is a terrible idea.
What has to change for this to happen?
I don't really know, or I would be doing those things already (maybe, akrasia allowing). I have some ideas, though.
1. The jargon thing. I'm currently trying to compile a list of LW/CFAR jargon as a project for CFAR, and there are lots of terms I don't know. There are terms that I've realized in retrospect that I was using incorrectly all along. This both presents a large initial effort for someone interested in learning about rationality via the LW route, and might contribute to the looking-like-a-cult thing.
2. The gender ratio thing. This has been discussed before, and it's a controversial thing to discuss, and I don't know how much arguing about it in comments will present any solutions. It seems pretty clear that if you want to appeal to the whole population, and a group that represents 50% of the general population only represents 10% of your participants (also as per 2012 survey data, see link above), there's going to be a problem somewhere down the road.
My data point: as a female on LW, I haven't experienced any discrimination, and I'm a bit baffled as to why the gender ratio is so skewed in the first place. Then again, I've already been through the filter of not caring if I'm the only girl at a meetup group. And I do hang out in female-dominated groups (i.e. the entire field of nursing), and fit in okay, but I'm probably not all that good as a typical example to generalize from.
3. LW currently appeals to intelligent people, or at least people who self-identify as intelligent; according to the 2012 survey data, the self-reported IQ median is 138 (see the rough calculation after this list). This wouldn't be surprising, and isn't a problem until you want to appeal to more than 1% of the population. But intelligence and rationality are, in theory, orthogonal, or at least not the same thing. If I suffered a brain injury that reduced my IQ significantly but didn't otherwise affect my likes and dislikes, I expect I would still be interested in improving my rationality and think it was important, perhaps even more so, but I also think I would find it frustrating. And I might feel horribly out of place.
4. Rationality in general has a bad rap; specifically, the Spock thing. And this isn't just affecting whether or not people think Less Wrong the site is weird; it's affecting whether they want to think about their own decision-making.
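As a quick back-of-the-envelope on the figure in point 3 (my own sketch, assuming self-reported IQs sit on the standard scale with mean 100 and standard deviation 15, and taking the self-reports at face value):

```python
# Rough estimate: what fraction of the population scores at or above IQ 138,
# assuming IQ ~ Normal(mean=100, sd=15)? The 138 figure is the self-reported
# median from the 2012 LW survey, so take it with a grain of salt.
from math import erf, sqrt

def normal_cdf(x, mean=100.0, sd=15.0):
    """Cumulative probability of a normal distribution at x."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

fraction_above = 1 - normal_cdf(138)
print(f"Fraction of the population at IQ 138 or above: {fraction_above:.2%}")  # about 0.6%
```

That is, the median LWer (by self-report) sits in roughly the top 0.6% of the distribution, which is consistent with the "more than 1% of the population" framing above.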
This is only what I can think of in 5 minutes...
What's already happening?
Meetup groups are happening. CFAR is happening. And there are groups out there practicing skills similar or related to rationality, whether or not they call it the same thing.
Conclusion
Rationality, Less Wrong and CFAR have, gradually over the last 2-3 years, become a big part of my life. It's been fun, and I think it's made me stronger, and I would prefer a world where as many other people as possible have that. I'd like to know if people think that's a) a good idea, b) feasible, and c) how to do it practically.