Related to: Intellectual Hipsters, X-Rationality: Not So Great, The Importance of Self-Doubt, That Other Kind of Status
This is a scheduled upgrade of a post that I have been working on in the discussion section. Thanks to all the commenters there, and special thanks to atucker, Gabriel, Jonathan_Graehl, kpreid, XiXiDu, and Yvain for helping me express myself more clearly.
-------------------
For the most part, I am excited about growing as a rationalist. I attended the Berkeley minicamp; I play with Anki cards and Wits & Wagers; I use Google Scholar and spreadsheets to try to predict the consequences of my actions.
There is a part of me, though, that bristles at some of the rationalist 'culture' on Less Wrong, for lack of a better word. The advice, the tone, the vibe all 'feel' wrong, somehow. If you forced me to use more precise language, I might say that, for several years now, I have kept a variety of procedural heuristics running in the background that help me ferret out bullshit, partisanship, wishful thinking, and other unsound debating tactics -- and important content on this website manages to trigger most of them. Yvain suggests that something about the rapid spread of positive affect not obviously tied to any concrete accomplishments may be stimulating a sort of anti-viral memetic defense system.
Note that I am *not* claiming that Less Wrong is a cult. Nobody who runs a cult has such a good sense of humor about it. And if they do, they're so dangerous that it doesn't matter what I say about it. No, if anything, "cultishness" is a straw man. Eliezer will not make you abandon your friends and family, run away to a far-off mountain retreat and drink poison Kool-Aid. But, he *might* convince you to believe in some very silly things and take some very silly actions.
Therefore, in the spirit of John Stuart Mill, I am writing a one-article attack on much of what we seem to hold dear. If there is anything true about what I'm saying, you will want to read it, so that you can alter your commitments accordingly. Even if, as seems more likely, you don't believe a word I say, reading a semi-intelligent attack on your values and mentally responding to it will probably help you more clearly understand what it is that you do believe.
Wishful Thinking
As far as I can tell, some of the most prominent themes in the short-term behavioral advice given on Less Wrong are:
1) Sign up for cryonics,
2) Donate to SIAI,
3) Drop out of any religious groups you might belong to, and
4) Take chemical stimulants.
I don't mean to imply that this is the *only* advice given, or even that these are the four most important ones. Rather, I claim that these four topics, taken together, account for a large share of the behavioral advice dispensed here. I predict that you would find it difficult or impossible to construct a list of four other pieces of behavioral advice such that people would reliably say that your list is more fairly representative of the advice on Less Wrong. As XiXiDu was kind enough to put it, there is numerical evidence to suggest that my list is "not entirely unfounded."
The problem with this advice is that, for certain kinds of tech/geek minds, the advice is extremely well-optimized for cheaply supporting pleasurable yet useless beliefs -- a kind of wireheading that works on your prefrontal cortex instead of directly on your pleasure centers.
By cheaply, I mean that the beliefs won't really hurt you...it's relatively safe to believe in them. If you believe that traffic in the U.S. drives on the left-hand side of the street, that's a very expensive belief; no matter how happy you are thinking that you, and only you, know the amazing secret of LeftTrafficIsm, you won't get to experience that happiness for very long, because you'll get into an auto accident by tomorrow at the latest. By contrast, believing that your vote in the presidential primaries makes a big difference to the outcome of the election is a relatively cheap belief. You can go around for several years thinking of yourself as an important, empowered, responsible citizen, and all it costs you is a few hours (tops) of waiting in line at a polling station. In both cases, you are objectively and obviously wrong -- but in one case, you 'purchase' a lot of pleasure with a little bit of wrongness, and in the other case, you purchase a little bit of pleasure with a tremendous amount of wrongness.
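If it helps to see the shape of that tradeoff, here is a minimal back-of-the-envelope sketch in Python; every number in it is invented purely for illustration, and the little function is just one crude way of modeling 'pleasure purchased with wrongness.'

```python
# Toy model: a belief's "price" is the expected harm you accept by acting on
# it; its "value" is the pleasure of holding it. All numbers are invented.

def net_value(pleasure, p_harm, cost_of_harm):
    """Pleasure gained minus the expected cost of acting on the false belief."""
    return pleasure - p_harm * cost_of_harm

# LeftTrafficIsm: a little pleasure, near-certain catastrophe within a day.
left_traffic = net_value(pleasure=1, p_harm=0.99, cost_of_harm=1000)

# "My primary vote decides the election": real pleasure, and the only cost
# is a few hours of waiting in line at a polling station.
my_vote = net_value(pleasure=10, p_harm=1.0, cost_of_harm=3)

print(left_traffic)  # -989.0: a very expensive belief
print(my_vote)       # 7.0: a cheap belief
```

The point is only that 'cheap' and 'expensive' are claims about expected costs, not about whether the belief happens to be true.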
Among the general public, one popular cheap belief to 'buy' is that a benevolent, powerful God will take you away to magical happy sunshine-land after you die, if and only if you're a nice person who doesn't commit suicide. As it's stated, indulging in that belief doesn't cost you much in terms of your ability to achieve your other goals, and it gives you something pleasant to think about. This belief is unpopular with the kind of people who are attracted to Less Wrong, even before they get here, because we are much less likely to compartmentalize our beliefs.
If you have a sufficiently separate compartment for religion, you can believe in heaven without much affecting your belief in evolution. God's up there, bacteria are down here, and that's pretty much the end of it. If you have an integrated, physical, reductionist model of the Universe, though, believing in heaven would be very expensive, because it would undermine your hard-won confidence in lots of other practically useful beliefs. If there are spirits floating around in Heaven somewhere, how do you know there aren't spirits in your water making homeopathy work? If there's a benevolent God watching us, how do you know He hasn't magically guided you to the career that best suits you? And so on. For geeks, believing in heaven is a lousy bargain, because it costs way too much in terms of practical navigation ability to be worth the warm fuzzy thoughts.
Enter...cryonics and friendly AI. Oh, look! Using only physical, reductionist-friendly mechanisms, we can show that a benevolent, powerful entity whose mind is not centered on any particular point in space (let's call it 'Sysop' instead of God) might someday start watching over us. As an added bonus, as long as we don't commit suicide by throwing our bodies into the dirt as soon as our hearts stop beating, we can wake up in the future using the power of cryonics! The future will be kinder, richer, and generally more fun than the present...much like magical happy sunshine-land is better than Earth.
Unlike pre-scientific religion, the "cryonics + Friendly AI" Sysop story is 'cheap' for people who rarely compartmentalize. You can believe in Sysop without needing to believe in anything that can't be explained in terms of charge, momentum, spin, and other fundamental physical properties. Like pre-scientific religion, the Sysop story is a whole lot of fun to think about and believe in. It makes you happy! That, in and of itself, doesn't make you wrong, but it is very important to stay aware of the true causes of your beliefs. If you came to believe a relatively strange and complicated idea because it made you happy, it is very unlikely that this same idea just happens to also be strongly entangled with reality.
Partisanship
As for dropping out of other religious communities, well, they're the quintessential bad guys, right? Not only do they believe in all kinds of unsubstantiated woo, they suck you into a dense network of personal relationships -- which we at Less Wrong want earnestly to re-create, just, you know, without any of the religion stuff. The less emotional attachment you have to your old community, the more you'll be free and available to help bootstrap ours!
Why should you spend all your time trying to get one of the first rationalist communities up and running (hard) instead of joining a pre-existing, respectable religious community (easy)? Well, to be fair, there are lots of good reasons. Depending on how rationalist you are, you might strongly prefer the company of other rationalists, both as people to be intimate with and as people to try to run committee meetings with. If you're naturally different enough from the mainstream, it could be more fun and less frustrating for you to just join up with a minority group, despite the extra effort needed to build it up.
There is a meme on Less Wrong, though, that rationalist communities are not just better-suited to the unique needs of rationalists, but also better in general. Rationality is the lens that sees its own flaws. We get along better, get fit faster, have more fun, and know how to do more things well. Through rationality, we learn to optimize everything in sight. Rationality should ultimately eat the whole world.
Again, you have to ask yourself: what are the odds that these beliefs are driven by valid evidence, as opposed to the ordinary human instinct to support one's own tribe and denigrate one's neighbors? As Eliezer very fairly acknowledges, we don't even have decent metrics for measuring rationality itself, let alone for measuring the real-world effects that rationality supposedly has or will have on people's wealth, health, altruism, reported happiness, etc.
Do we *really* identify our own flaws and then act accordingly, or do we just accept the teachings of professional neuroscientists -- who may or may not be rationalists -- and invent just-so stories 'explaining' how our present or future conduct dovetails with those teachings? Take the fovea's lack of rod cells, which makes dim stars in the night sky seem to disappear when you look straight at them. Do you (or anyone you know) really have the skill to identify a biological human flaw, connect it with an observed phenomenon, and then deviate from conventional wisdom on the strength of your analysis? If mainstream scientists believed that stars don't give off any light that strikes the Earth directly head-on, would you be able to find, digest, and apply the physiology of the fovea in order to prove them wrong? If not, do you still think that rationalist communities are better than other communities for intelligent but otherwise ordinary people? Why?
It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life. In case it turns out that life is not quite so convenient, maybe we should be a little humbler about our grand experiment. Even if we have good reason to assert that mainstream religious thinking is flawed, maybe we should be slower to advise people to give up the health benefits (footnote 15) of belonging, emotionally, to one or another religious community.
Bullshit
Finally, suppose you publicly declared yourself to have nigh-on-magical powers -- by virtue of this strange thing that few people in your area understand, "rationality," you can make yourself smarter, more disciplined, more fun to be around, and generally awesome-r. Of course, rationality takes time to blossom -- everyone understands that; you make it clear. You do not presently have big angelic powers; it is just that you will get your hands on them soon.
A few months go by, maybe a year, and while you have *fascinating* insights into cognitive biases, institutional inefficiencies, and quantum physics, your friends either can't understand them or are bored to tears by your omphaloskepsis. You need to come up with something that will actually impress them, and soon, or you'll suffer from cognitive dissonance and might have to back off of your pleasurable belief that rationality is better than other belief systems.
Lo and behold, you discover the amazing benefits of chemical stimulants! Your arcane insights into the flaws of the institutional medical establishment and your amazing ability to try out different experimental approaches and faithfully record which ones work best have allowed you to safely take drugs that the lay world shuns as overly dangerous. These drugs do, in fact, boost your productivity, your apparent energy, and your mood. You appear to be smarter and more fun than those around you who are not on rationally identified stimulants. Chalk one up for rationality.
Unless, of course, the drugs have undesirable long-term or medium-term effects. Maybe you develop tolerance and have to take larger and larger doses. Maybe you wear out your liver or your kidneys, or lower your bone density. Maybe you overestimate your ability to operate heavy machinery on polyphasic sleep cycles, and drive off the side of the road. Less Wrong is too young, as a meme cluster, for most of these hazards to have been triggered. I wouldn't bet on any one of those outcomes for any one drug...but if your answer to the challenges of life is to self-medicate, you're taking on a whole lot more risk than the present maturity of the discipline of rationality would seem to warrant.
Conclusion
I've tried my best not to frontally engage any of the internal techniques or justifications of rationality. On what Robin Hanson would call an inside view, rationality looks very, very attractive, even to me. By design, I have not argued here that, e.g., it is difficult to revive a frozen human brain, or that the FDA is the best judge of which drugs are safe.
What I have tried to do instead is imagine rationality and all its parts as a black box, and ask: what goes into it, and what comes out of it? What goes in is a bunch of smart nonconformists. What comes out, at least so far, is some strange advice. The advice pays off on kind of a bimodal curve: cryonics and SIAI pay off at least a decade in the future, if at all, whereas drugs and quitting religion offer excellent rewards now, but may involve heavy costs down the road.
The relative dearth of sustainable yet immediate behavioral payoffs coming out of the box leads me to suspect that the people who go into the box go there not so much to learn about superior behaviors, but to learn about superior beliefs. The main sense in which the beliefs are superior is their ability to make tech/geek people think happy thoughts without 'paying' too much in bad outcomes.
I hold this suspicion with about 30% confidence. That's not enough to make me want to abandon the rationalist project -- I think, on balance, it still makes sense for us to try to figure this stuff out. It is enough for me to want to proceed more carefully. I would like to see an emphasis on low-hanging fruit. What can we safely accomplish this month? this year? I would like to see warnings and disclaimers. Instead of blithely informing everyone else of how awesome we are, maybe we should give a cheerful yet balanced description of the costs and benefits. It's OK to say that we think the benefits outweigh the costs...but, in a 2-minute conversation, the idea of costs should be acknowledged at least once. Finally, I would like to see more emphasis on testing and measuring rationality. I will work on figuring out ways to do this, and if anyone has any good measurement schemes, I will be happy to donate some of my money and/or time to support them.
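As one concrete example of what a measurement scheme might look like -- and this is only a hypothetical sketch, not a scheme anyone has agreed to use -- you could track how well-calibrated a group's probability estimates are on questions that later resolve true or false, using something like the standard Brier score. The names and predictions below are made up.

```python
# Hypothetical calibration test: score stated probabilities against outcomes.
# Lower Brier scores indicate better-calibrated predictions (0 is perfect).

def brier_score(predictions):
    """predictions: list of (stated_probability, outcome) pairs, outcome 0 or 1."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Invented example data: probabilities two people assigned, and what happened.
alice = [(0.9, 1), (0.7, 1), (0.6, 0), (0.2, 0)]
bob   = [(0.99, 0), (0.5, 1), (0.5, 0), (0.8, 1)]

print(f"Alice: {brier_score(alice):.3f}")  # 0.125
print(f"Bob:   {brier_score(bob):.3f}")    # 0.380
```

A real scheme would need questions that aren't cherry-picked, a comparison group, and follow-up over time, but even something this simple would be an improvement over having no measurement at all.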