I've tried my best not to frontally engage any of the internal techniques or justifications of rationality. On what Robin Hanson would call an inside view, rationality looks very, very attractive, even to me. By design, I have not argued here that, e.g., it is difficult to revive a frozen human brain, or that the FDA is the best judge of which drugs are safe.
To be honest, I think you should have. Meta-arguments for why the causes of our beliefs are suspect are never going to be as convincing as evidence for why the beliefs themselves are wrong.
Also, I think the parts about psychoactive drugs are somewhere between off-topic and a straw man. One of the posts you linked is titled "Coffee: When it helps, when it hurts"--two sides of an argument for a stimulant that probably a supermajority of adults use regularly. In another, 2 of the 18 suggestions offered involve substance use.
Thirdly, while rationality in the presence of akrasia does not have amazing effects on making us more effective, rationality does have one advantage that's been overlooked a lot lately: it results in true beliefs. Some people, myself included, value this for its own sake, and it is a real benefit.
Meta-arguments for why the causes of our beliefs are suspect are never going to be as convincing as evidence for why the beliefs themselves are wrong.
However, even if the beliefs are correct, many people will still accept them for the wrong reasons. These "meta-arguments" are powerful psychological forces, which affect all people.
I would suspect that LW has a small group of people who have arrived at LW-ish beliefs through purely rational reasoning. There's a larger group that has arrived at the same beliefs almost purely because of human biases (including the factors listed in the post). And then there's a still larger group that has arrived at them partly through rational reasoning and partly through biases.
The advice, the tone, the vibe 'feels' wrong, somehow. If you forced me to use more precise language, I might say that, for several years now, I have kept a variety of procedural heuristics running in the background that help me ferret out bullshit, partisanship, wishful thinking, and other unsound debating tactics -- and important content on this website manages to trigger most of them.
To come up with a theory on the fly, maybe there are two modes of expansion for a group: by providing some service, and by sheer memetic virulence. One memetic virulence strategy operates by making outlandish promises that subscribing to it will make you smarter, richer, more successful, more attractive to the opposite sex, and just plain superior to other people - and then doing it in a way that can't obviously be proven wrong. This strategy usually involves people with loads of positive affect going around telling people how great their group is and how they need to join.
As a memetic defense strategy, people learn to identify this kind of spread and to shun groups that display its features. From the inside, this strategy manifests as a creepy feeling.
LW members have lots of positive affect arou...
we make outrageously bold claims about getting smarter and richer and sexier
I'd like to know where all this LW-boasting is going on. I don't think I hear it at the meetups in Mountain View, but maybe I've been missing something.
Darnit, I don't like being vague and I also don't like pointing to specific people and saying "YOU! YOU SOUND CULTISH!" so I'm going to have a hard time answering this question in a satisfying way, but...
Lots of people are looking into things like nootropics/intelligence amplification, entrepreneurship, and pick-up artistry. And this is great. What gives me the creepy vibe is when they say (more on the site than at meetups) "And of course, we'll succeed at these much faster than other people have, even though they are professionals in this field, because we're Rationalists and they weren't." The same goes for anything involving the words "winning" or "awesomeness", or gratuitous overuse of community-identification terms like "primate" or "utility".
Trying to look for examples, I notice it is a smaller proportion of things than I originally thought and I'm probably biased toward overcounting them, which makes sense since in order to justify my belonging to a slightly creepy group I need to exaggerate my opposition to the group's creepiness.
Nonetheless, perhaps we need to adopt a new anti-cultishness norm against boasting about the predicted success of rationalists; or against ascribing personal victories to one's rationality without having actually done the math to demonstrate the correlation between success and rationality. The cult attractor is pretty damn bad, after all, and ending up in it could easily destroy one hell of a lot of value.
One memetic virulence strategy operates by making outlandish promises that subscribing to it will make you smarter, richer, more successful, more attractive to the opposite sex, and just plain superior to other people - and then doing it in a way that can't obviously be proven wrong.
That similarity is the key to both the perceived creepiness factor and the signal:noise ratio on this site. Groups formed to provide a service have performance standards that their members must achieve and maintain: drama clubs and sports teams have tryouts, jobs have interviews, schools have GPA requirements, etc. By contrast, groups serving as vehicles for contagious memes avoid standards. Every believer, even if personally useless to the stated aims of the group, is a potential transmission vector.
I see two reasons to care which of those classes of groups LW more closely resembles: first, to be aware of how we're coming across to others; and second, as a measure of whether anything is actually being accomplished here.
Personally, I try to avoid packaging LW's community and content into an indivisible bundle. From Resist the Happy Death Spiral:
...To summarize, you do avoid a Happy Death Spiral by (1)
I'm confused by the claim that those four are "the most prominent themes in terms of short-term behavioral advice" around here.
When I think of advice given here, "use drugs!" doesn't seem to be in my top seven. Most of the advice I've heard around here, both from the Sequences and others, seems to be in what might be considered "epistemic hygiene" or good practices with regards to discovering and recognizing truth. This would include being aware of cognitive biases, noticing confusion, and so on. And many of these are indeed "short-term" advice, at least in the sense that some of them can be implemented very quickly.
(They're certainly more "short-term" than dropping out of a religious group would be, for a person who's actually involved in religion.)
What I think you might have in your list of four, there, is not a list of the most prominent themes here, but rather a list of some themes that worry you. And, as you note, they're worth worrying about — to a certain extent, these are themes that, if taken in the wrong direction, might participate in a cult attractor.
Avoiding becoming a cult is a recurring theme here. And as we know, you...
Honestly, I think the cluster of tech-savvy, young, smart-but-nonconformist types is really winning at the goal of being productive while happy. Not everybody makes it; but I've seen a lot of people have lives more satisfying than their parents ever could have had. People who've broken the conventional wisdom that you have to put up with a lot of bullshit because "that's life." Mainly, because instead of asking "What is the Thing To Do?" they've got the hang of asking "What is the best thing I could be doing?"
If cryonics is a bust, I'll grant that it's a genuine waste of money. The same is true for SIAI. (Though I'll mention that lots of otherwise fulfilled people donate to demonstrably inefficient charities that spend most of their money on employee salaries. Most middle-class people throw some money down the toilet and don't even notice it.) The other issues are not such a big deal. Leaving religious communities is not a blow to people who have figured out how to optimize life, because they aren't isolated any more. I don't even know if overuse of stimulants is that widespread -- I certainly know they aren't good for me.
As for having self-gratifying b...
Honestly, I think the cluster of tech-savvy, young, smart-but-nonconformist types is really winning at the goal of being productive while happy.
As a general rule, nonconformists aren't happy: they must choose between hiding their nonconformity and living a double life, which is never a happy situation, or being open nonconformists and suffering severe penalties for it. What you have in mind would probably be better described as people who know how to send off fashionable signals of officially approved pseudo-nonconformity, and to recognize and disregard rules that are only paid lip-service (and irrelevant except as a stumbling block for those not smart enough to realize it), but are perfect and enthusiastic conformists when it comes to things that really matter.
The key to successful non-conformity is to find your tribe later. If you look at people who've done this now, they seem like conformists, because they do what their peer-group does. But they've fit their peer-group to their personality, rather than trying to fit their personality to their peer-group. They've had to move through local minima of non-conformity.
Here are some examples of where I've made what were at the time socially brave choices that have paid off big. This is all about asking "what is the best thing I could be doing", not "what is the thing to do".
Decided to accept and admit to my bisexuality. This was very uncomfortable at first, and I never did really find a "home" in gay communities, as they conformed around a lot of norms that didn't suit me well. What accepting my sexuality really bought me is a critical stance on masculinity. Rejecting the normal definition of "what it means to be a man" has been hugely liberating. Being queer has a nice signalling perk on this, too. It's much harder to be straight and get away with this. If you're queer people shrug and put you in that "third sex" category of ne
Almost everything's fashionable to someone, somewhere. You can start with a certain in-group and non-conform by deciding to eat meat. You can non-conform out of the gay community by deciding you're actually straight.
The issue of conformity arose in this thread from SarahC's comment:
Honestly, I think the cluster of tech-savvy, young, smart-but-nonconformist types is really winning at the goal of being productive while happy. Not everybody makes it; but I've seen a lot of people have lives more satisfying than their parents ever could have had. People who've broken the conventional wisdom that you have to put up with a lot of bullshit because "that's life." Mainly, because instead of asking "What is the Thing To Do?" they've got the hang of asking "What is the best thing I could be doing?"
I think this really applies to me. My assessment of my life is that I'm much happier because of these moments where I've exercised even a little bit of courage in the face of social pressure. It wasn't a huge amount of courage, but it was non-zero --- which is more than many people are willing to do. I do believe that being utterly craven in the face of social opprobrium is a common failure mode, and it's an area where rationality pays dividends.
It seems to me that every human society has some romantic notion of heroic rebels and nonconformists, but for reasons that are interesting to speculate on, ours is obsessed with it to a very exceptional degree. (So much that people nowadays typically use the word "nonconformist" with a tone of approval, and rarely for those who fail to conform with norms and views that they themselves actually like.) This opens the way for people to gain status if they are capable of doing things that signal in a way that resonates with this heroic "nonconformist" image, while at the same time avoiding any really dangerous nonconformity.
Take for example all those artists and authors who get praised as "daring," "transgressive", "challenging taboos," etc., even though the things they do have been run-of-the-mill for many decades (or even much longer), the views they express (insofar as they express any) are entirely predictable for anyone familiar with the respectable intellectual mainstream, their high status is acknowledged by the mainstream media and academia, and some of them even get rich off of this "nonconformity." There are many ot...
The bit about drugs is just stupefying. Did you really, really, mean what came out?
"Lots and lots of people on Less Wrong love drugs that are outlawed in the U.S., use them all the time for the explicit purpose of intelligence stimulation, and refuse to hear anything about their harmful effects, because Less Wrongers are extremely quick to explain away evidence they don't want to believe in - especially when it's supported by "uncool" people and groups - and probably can't even contemplate any long-term effects due to their geekishness and hidden immaturity. Here's a wise, fatherly-sounding warning to them, full of ol' good common sense that those naive kids haven't learned to trust yet."
I'm not voting this down, because I'm just feeling a flat "what."
To be fair to LessWrong, although we do encourage quitting religion, we don't condemn attending. This post got 44 upvotes, and a decent chunk of the post was explaining how she went to church. I personally think the "don't attend church" mentality is more about the path being closed to us than anything against it.
these are four of the seven most important themes on the site in terms of immediate advice about what to do
What are the other three? And shouldn't there be an explanation why they are excluded from this outside view analysis? (EDIT: See Mass_Driver's explanation here.)
let's call it 'Omega' instead of God
Please call it something else? Using 'Omega' seems unnecessarily confusing given that there's already a convention for using that name to denote a powerful and trustworthy (but not necessarily Friendly) entity in decision theory problems.
site:lesswrong.com "artificial intelligence" = 30,700 results
site:lesswrong.com "Singularity" = 32,000 results
Thought this was because of the logo at the top of the page, so searched for "Singularity Institute for Artificial Intelligence" and got:
So something's weird. Also, if you move "site:lesswrong.com" to the right side you get 116,000 instead.
Google's result counter is an estimate, and not a very good one. It's within 2 or 3 orders of magnitude... usually.
It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life.
I think this is the strongest point in the whole argument.
Data point: I brought my parents to a Mountain View LW meetup. My parents aren't religious, and my dad is a biochemist who studies DNA repair mechanisms; they define themselves by their skepticism and emphasis on science. So the perfect target audience. But they seemed unenthusiastic; their sense was that it was by and for tech-savvy smart young adults, not really for the population as a whole.
This is the most coherent argument I've seen against memeticizing Less Wrong. Thank you.
You seem to be making the point that our[1] recommendation of cryonics facilitates an unfounded belief that one day there will be a benevolent superintelligence that will revive the corpsicle patients. I think that criticism could be appropriately aimed at zealous and silly transhumanists, but not at Less Wrong. Here you will be told that signing up for cryonics gives you only a 5% chance at living forever. You'll be told that there's a pretty good chance of superintelligence existing in the future, but there are at least even odds of it being not benevolent. And Eliezer, who came up with the Sysop scenario in the first place, explicitly warned against wasting time thinking about such things. You won't find that kind of shiny eschatology here.
[1] It's fair to say that Less Wrong advises signing up for cryonics, although there isn't a consensus on this point.
Drugs aren't a big part of this site; there may be a few members who recommend some chemical stimulants, but it's far from being a consensus. If you asked me to list important and useful ideas and advice from LessWrong, I don't think I would list "use drugs" except maybe in 100th position or so.
As for quitting religion, I don't recall seeing anybody actually recommend that people drop out of religious groups (though there may be some - any links?); it's just that some people have done so as a result of simply not believing in religion any more.
If you believe we live in a universe where most things are possible, you will focus on the best things and how to achieve them, and the worst things and how to avoid them.
Separately, if you want to construct a highly viral religion meme, you will focus on the best things and how to achieve them, and the worst things and how to avoid them.
Taking the really awesome ideas of religions (God, afterlife) and figuring out the most plausible scientific explanation for them is exactly what we should be doing, since we want to maximize the probability of God and afterlife.
Enter...cryonics and friendly AI. Oh, look! Using only physical, reductionist-friendly mechanisms, we can show that a benevolent, powerful entity whose mind is not centered on any particular point in space and whose existence cannot presently be confirmed (let's call it 'Omega' instead of God) might someday be watching over us.
I see analogies with three religious tropes here: the omnipresence of God, religions' claims to be non-disprovable, and the tendency of religions to give their gods cool-sounding names. The last one is simply confused ('Omega' is the designation of a perfect and trustworthy predictor postulated in philosophical thought-experiments which are quite different from speculations about the possibility of building an artificial intelligence). The middle one is vaguely misleading -- the presence of a powerful and benevolent entity of the AI persuasion in the vicinity of Earth can presently be thoroughly disconfirmed, and I've never seen anyone claiming otherwise or hedging about it. The first one has some loose connection to the ways things have been discussed around here, but I still wouldn't call it a good characterization of beliefs common on this site.
Seriously, why is it that people have to get all strawman-y and hyperbolic whenever they talk about the obvious similarities between transhumanist ideas and religious thought?
I hold this suspicion with about 30% confidence, which is enough to worry me, since I mostly identify as a rationalist. What do you think about all this? How confident are you?
I think the recent surge in meetups shows that people are mainly interested in grouping with other people who think like them, rather than in rationality in and of itself. There is too much unjustified agreement here to convince me that people really care mostly about superior beliefs. Sure, the available methods might not allow much disagreement about their conclusions, but what about doubt in the very methods that are used to evaluate what to do?
Most of the posts on LW are not wrong, but many exhibit some sort of extraordinary idea. Those ideas seem mostly sound, but if you take all of them together and arrive at something really weird, I think some skepticism is appropriate (at least more than can currently be found).
Here is an example:
1.) MWI
The many-worlds interpretation seems mostly justified, probably the rational choice of all available interpretations (except maybe Relational Quantum Mechanics). How to arrive at this conclusion is also a good exercise in refining the art of rationality.
2.) Belief in th...
You know, I think a lot of this stuff really misses the mark. I would say that I agree with many of the LW "mainstream" beliefs, generally find my posts being upvoted, have attended meetups before and enjoyed myself, and so on-- but I've never tried nootropics, I think cryonics is an expensive way to buy optimism and signaling, I'm fairly sympathetic to religious groups, and I've said so explicitly several times without any real fear of retaliation or even downvoting.
As long as you express your opinions in a reasonable, self-reflective, and well thought out way, I've found you have nothing to worry about here, and that's really not the case in most other communities I've participated in. What are the "heresies" of the LW/human rationality community? It's hard to say.
Eliezer will not make you abandon your friends and family, run away to a far-off mountain retreat and drink poison Kool-Aid.
A post by Roko came to my mind (it all went down the road of insanity after that with other people suffering as a result):
I personally have suffered, as have many, from low-level punishment from and worsening of relationships with my family, and social pressure from friends; being perceived as weird. I have also become more weird - spending one's time optimally for social status and personal growth is not at all like spending one's time in a way so as to reduce existential risks. Furthermore, thinking that the world is in grave danger but only you and a select group of people understand makes you feel like you are in a cult due to the huge cognitive dissonance it induces.
Although if all works out well with those 'rationality camps', or whatever they are called, this might not be a problem anymore.
I just want to say thank you for posting to /r/discussion.
This kind of posting workflow is something I've tried to encourage through advice on the IRC channel and hope more people adopt it because I see a lot of potential in it. Namely, people that might not be totally ready for front page posting can get good feedback, learn a lot, and then LW winds up with more high quality articles than it would have otherwise. The more quality writing for LW, the better.
This is what I'd like to see more of!
Upvoted.
The relative dearth of sustainable yet immediate behavioral payoffs coming out of the box leads me to suspect that the people who go into the box go there not so much to learn about superior behaviors, but to learn about superior beliefs. The main sense in which the beliefs are superior is in their ability to make tech/geek people think happy thoughts without 'paying' too much in bad outcomes.
Presumably there's at least some of this going on. But there's not an "either/or" dichotomy here. Some of the Less Wrong advice will turn out to fall into the above category, and other such advice will turn out to be solidly grounded.
For example, I think that more likely than not, focus on x-risk reduction as a philanthropic cause is grounded and that this is something that the LW community has gotten right but that more likely than not, donating to SIAI is not the best x-risk reduction opportunity on the table. I'm bothered by the fact that it appears to me that most SIAI supporters have not carefully considered the collection of all x-risk opportunities on the table with a view toward picking out the best one; a priori it seems that the one that's most salient initially is unlikely to be the best one altogether. (That being said, contingencies may point toward SIAI being the best possible option even after an analysis of all available options.)
I think some people expect too much too soon. Here's what I think it's reasonable to expect, short-term: (1) improved problem-solving skills; (2) a clearer idea of what it will take to achieve your goals; and (3) worthwhile interaction with a community of peers. A lot of problems are hard. Psychology and sociology are difficult, unsolved subjects. I don't expect rationalists to become wealthy, highly accomplished and socially successful short-term, because systematically achieving those goals would require a high level of knowledge about how the social world operates. I would expect them to have a better idea of how much work would be involved in achieving those goals and to be able to make progress on more modest goals.
What bugs me about your perception of this community is that you seem to conflate goals with beliefs. What I see on LessWrong is the idea that artificial general intelligence, if done properly, would be a powerful invention that could solve the most important problems of humanity and that therefore we should pursue the goal of inventing and building it.
What you seem to see is the idea that because we thought of a way in which the future could be awesome, therefore it will be awesome and we can feel good about it, just like other people feel good about religion. I just don't see it. Maybe it's because I used to have that kind of vague feel-good transhumanist belief and then I stumbled upon Eliezer's writings and got convinced that no, I have no reason to relax and believe that powerful, abstract forces of technological progress will make everything work out in the end. So it's surprising to me that anyone could end up with that kind of overly enthusiastic belief because of LessWrong.
This discrepancy of perception extends to your depiction of community-building efforts. Once again there's the goal of doing everything better, having the most fun, and being awesome, and the belief that we are already there and can feel good about ourselves. But here I'm far less willing to trust my perceptions. I don't really interact with the community beyond reading the website, and I tend to ignore things that don't appeal to me, so I might have filtered out this unfortunate aspect of the local memesphere.
The relative dearth of sustainable yet immediate behavioral payoffs coming out of the box leads me to suspect that the people who go into the box go there not so much to learn about superior behaviors, but to learn about superior beliefs.
Bingo.
Excellent analysis throughout, btw, but that bit hits it right on the head.
(In fairness, though, I think it should be pointed out that there's plenty of other good advice to be found on LW. It's only natural to expect that the most popular memes would be ones that have more going for them than mere truth or usefulness.)
It has always seemed like your ideas on how to learn superior behaviors are a pretty significant part of the LW memecluster.
Thanks for the post. Now I can pat myself on the back for reading and upvoting a post critical of my beliefs and then go back to doing what I was doing before. ;)
Does Less Wrong really recommend withdrawing from religious groups? I don't see that recommendation in any of the four links you give as support. Less Wrong will tell you that religions' supernatural claims are false wherever they're meaningful, and that a lot of religious beliefs are harmful as well as false. And it will tell you to be an atheist[1]. It's understandable that most of us who realized at some point that God isn't real decided to stop going to church. But some of us are involved with religious groups and it doesn't seem to be problematic.
[1] ...
On the outside view, this rationality community is very young, and most young organizations lack sophistication, easily repeatable methods, and proof of whatever they claim. Changing yourself takes lots of time (on the order of years), if it can be done at all (it can be, but it's not particularly easy).
On the outside view, any organization which dissolves for lack of proof of its methods takes a very, very long time to arise and stay, or never gets off the ground.
I really think that the issue is more one of time and organization, and I'm not super surprised that Less Wrong isn't obviously able to deliver what it wants to over the internet.
The US Peace Corps prompted over 200,000 people to do something I consider even more extreme by committing to multiple years of service in foreign countries for very modest goals. I'd be surprised if something like Existential Risk didn't provoke such a reaction.
Minicamp, no, because it was so skills-focused. There was a real sense that we could apply the skills to any goals that seemed interesting to us. Meetups, yes. Many of the meetups I've been to have involved praise competitions, i.e., let's see who can suck up to SIAI most intelligently.
As far as I can tell, the most prominent themes in terms of short-term behavioral advice being given on Less Wrong are:
1) Sign up for cryonics,
2) Donate to SIAI,
3) Drop out of any religious groups you might belong to, and
4) Take chemical stimulants.
If that's the case, then I find it worrying - and seeing how personally unacceptable I find points 1, 2 and 4, it may just make me rethink my presence here, and whether I'm trying to fit in with the wrong crowd.
Yvain suggests that something about the rapid spread of positive affect not obviously tied to any concrete accomplishments may be stimulating a sort of anti-viral memetic defense system.
I think there is merit in this suggestion, or at least along the lines of "there's a (instrumentally rational in at least some circumstances) memetic immune reaction going on". I've seen a fellow cryonics advocate (who I gather has a substantial amount of business experience) advancing the opinion that Eliezer and SIAI are phony. He's concerned that the whole t...
It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life.
As I've seen it used here, "rationality" most commonly refers to "the best [memecluster] for winning at life," whatever that actual memecluster may be. If it could be shown that believing in the Christian god uniformly improved or did not affect every aspect of believers' lives regardless of any other beliefs held, I think a majority of LessWrongers would take every...
Unlike pre-scientific religion, the "cryonics + Friendly AI" Sysop story is 'cheap' for people who rarely compartmentalize. [...] It makes you happy!
AI makes me very very afraid, and sad.
As for dropping out of other religious communities, well, they're the quintessential bad guys, right? Not only do they believe in all kinds of unsubstantiated woo, they suck you into a dense network of personal relationships -- which we at Less Wrong want earnestly to re-create, just, you know, without any of the religion stuff.
Churches have art. I like art.
...There is a meme on Less Wrong, though, that rationalist communities are not just better-suited to the unique needs of rationalists, but also better in general...back off of your pleasurable belief
By cheaply, I mean that the beliefs won't really hurt you...it's relatively safe to believe in them.
They don't seem too "cheap" to me. We are potentially talking about many thousands of dollars.
whereas drugs and quitting religion offer excellent rewards now, but may involve heavy costs down the road.
What long-term costs would quitting religion have?
ETA: The answer is presumably in the post:
maybe we should be slower to advise people to give up the health benefits (footnote 15) of belonging, emotionally, to one or another religious community.
I think your "Partisanship" section is your strongest point. The question of whether our rationality shindig actually helps people is a good question. Similar points have been made before.
Stars twinkle because of the atmosphere's slightly fluctuating refractive properties (compare to mirages). I'm sure you can notice dim stars disappearing when you look straight at them, but I'm going to keep the atmosphere story for now - even though the only way I've tested it is to compare with planets (whose images are disclike rather than pointlike).
See any number of google hits on "why stars twinkle" e.g. http://astroprofspage.com/archives/1168
Note: I'm typing this without looking at other comments because I judge that it would be really easy for one of the better-sounding arguments to hijack my train of thought, leaving my previous thoughts to be crushed under the wheels of the huge locomotive.
I'm going to do my own 'black box' treatment of rationality, trying to figure out what I've actually got from it in list form.
This part is totally unfair:
1) Sign up for cryonics,
2) Donate to SIAI,
3) Drop out of any religious groups you might belong to, and
4) Take chemical stimulants.
Guess I don't have to worry then.
Everything here has been the opposite of cheap. I don't even have a memory of what it's like to have a goal other than sacrificing everything for making microscopic changes in the probabilities of distant abstract outcomes. I don't even remember what goals this brain used to have before being rewritten.
I disallow myself from thinking that kind of pleasurable thought for exactly that reason, instead thinking of things I don't think could happen if I feel tempted.
Never had any religious group to drop out...
I'm an interesting data point in the context of this article. I accept the LW-mainstream cryonics analysis, but I am not signed up and I do not intend to do so. I also do not plan for events after the singularity in order to prevent excessive optimism from causing bias, though I started this because it is very difficult to form such plans and only later noticed this additional benefit.
My perception of this advice is that it is general, and that it is the individual's responsibility to determine if in the context of their lives this advice will have more benefit than cost.
Related to: Intellectual Hipsters, X-Rationality: Not So Great, The Importance of Self-Doubt, That Other Kind of Status
This is a scheduled upgrade of a post that I have been working on in the discussion section. Thanks to all the commenters there, and special thanks to atucker, Gabriel, Jonathan_Graehl, kpreid, XiXiDu, and Yvain for helping me express myself more clearly.
-------------------
For the most part, I am excited about growing as a rationalist. I attended the Berkeley minicamp; I play with Anki cards and Wits & Wagers; I use Google Scholar and spreadsheets to try to predict the consequences of my actions.
There is a part of me, though, that bristles at some of the rationalist 'culture' on Less Wrong, for lack of a better word. The advice, the tone, the vibe 'feels' wrong, somehow. If you forced me to use more precise language, I might say that, for several years now, I have kept a variety of procedural heuristics running in the background that help me ferret out bullshit, partisanship, wishful thinking, and other unsound debating tactics -- and important content on this website manages to trigger most of them. Yvain suggests that something about the rapid spread of positive affect not obviously tied to any concrete accomplishments may be stimulating a sort of anti-viral memetic defense system.
Note that I am *not* claiming that Less Wrong is a cult. Nobody who runs a cult has such a good sense of humor about it. And if they do, they're so dangerous that it doesn't matter what I say about it. No, if anything, "cultishness" is a straw man. Eliezer will not make you abandon your friends and family, run away to a far-off mountain retreat and drink poison Kool-Aid. But, he *might* convince you to believe in some very silly things and take some very silly actions.
Therefore, in the spirit of John Stuart Mill, I am writing a one-article attack on much of what we seem to hold dear. If there is anything true about what I'm saying, you will want to read it, so that you can alter your commitments accordingly. Even if, as seems more likely, you don't believe a word I say, reading a semi-intelligent attack on your values and mentally responding to it will probably help you more clearly understand what it is that you do believe.
Wishful Thinking
As far as I can tell, some of the most prominent themes in terms of short-term behavioral advice being given on Less Wrong are:
1) Sign up for cryonics,
2) Donate to SIAI,
3) Drop out of any religious groups you might belong to, and
4) Take chemical stimulants.
I don't mean to imply that this is the *only* advice given, or even that these are the four most important ones. Rather, I claim that these four topics, taken together, account for a large share of the behavioral advice dispensed here. I predict that you would find it difficult or impossible to construct a list of four other pieces of behavioral advice such that people would reliably say that your list is more fairly representative of the advice on Less Wrong. As XiXiDu was kind enough to put it, there is numerical evidence to suggest that my list is "not entirely unfounded."
The problem with this advice is that, for certain kinds of tech/geek minds, the advice is extremely well-optimized for cheaply supporting pleasurable yet useless beliefs -- a kind of wireheading that works on your prefrontal cortex instead of directly on your pleasure centers.
By cheaply, I mean that the beliefs won't really hurt you...it's relatively safe to believe in them. If you believe that traffic in the U.S. drives on the left-hand-side of the street, that's a very expensive belief; no matter how happy you are thinking that you, and only you, know the amazing secret of LeftTrafficIsm, you won't get to experience that happiness for very long, because you'll get into an auto accident by tomorrow at the latest. By contrast, believing that your vote in the presidential primaries makes a big difference to the outcome of the election is a relatively cheap belief. You can go around for several years thinking of yourself as an important, empowered, responsible citizen, and all it costs you is a few hours (tops) of waiting in line at a polling station. In both cases, you are objectively and obviously wrong -- but in one case, you 'purchase' a lot of pleasure with a little bit of wrongness, and in the other case, you purchase a little bit of pleasure with a tremendous amount of wrongness.
Among the general public, one popular cheap belief to 'buy' is that a benevolent, powerful God will take you away to magical happy sunshine-land after you die, if and only if you're a nice person who doesn't commit suicide. As it's stated, indulging in that belief doesn't cost you much in terms of your ability to achieve your other goals, and it gives you something pleasant to think about. This belief is unpopular with the kind of people who are attracted to Less Wrong, even before they get here, because we are much less likely to compartmentalize our beliefs.
If you have a sufficiently separate compartment for religion, you can believe in heaven without much affecting your belief in evolution. God's up there, bacteria are down here, and that's pretty much the end of it. If you have an integrated, physical, reductionist model of the Universe, though, believing in heaven would be very expensive, because it would undermine your hard-won confidence in lots of other practically useful beliefs. If there are spirits floating around in Heaven somewhere, how do you know there aren't spirits in your water making homeopathy work? If there's a benevolent God watching us, how do you know He hasn't magically guided you to the career that best suits you? And so on. For geeks, believing in heaven is a lousy bargain, because it costs way too much in terms of practical navigation ability to be worth the warm fuzzy thoughts.
Enter...cryonics and friendly AI. Oh, look! Using only physical, reductionist-friendly mechanisms, we can show that a benevolent, powerful entity whose mind is not centered on any particular point in space (let's call it 'Sysop' instead of God) might someday start watching over us. As an added bonus, as long as we don't commit suicide by throwing our bodies into the dirt as soon as our hearts stop beating, we can wake up in the future using the power of cryonics! The future will be kinder, richer, and generally more fun than the present...much like magical happy sunshine-land is better than Earth.
Unlike pre-scientific religion, the "cryonics + Friendly AI" Sysop story is 'cheap' for people who rarely compartmentalize. You can believe in Sysop without needing to believe in anything that can't be explained in terms of charge, momentum, spin, and other fundamental physical properties. Like pre-scientific religion, the Sysop story is a whole lot of fun to think about and believe in. It makes you happy! That, in and of itself, doesn't make you wrong, but it is very important to stay aware of the true causes of your beliefs. If you came to believe a relatively strange and complicated idea because it made you happy, it is very unlikely that this same idea just happens to also be strongly entangled with reality.
Partisanship
As for dropping out of other religious communities, well, they're the quintessential bad guys, right? Not only do they believe in all kinds of unsubstantiated woo, they suck you into a dense network of personal relationships -- which we at Less Wrong want earnestly to re-create, just, you know, without any of the religion stuff. The less emotional attachment you have to your old community, the more you'll be free and available to help bootstrap ours!
Why should you spend all your time trying to get one of the first rationalist communities up and running (hard) instead of joining a pre-existing, respectable religious community (easy)? Well, to be fair, there are lots of good reasons. Depending on how rationalist you are, you might strongly prefer the company of other rationalists, both as people to be intimate with and as people to try to run committee meetings with. If you're naturally different enough from the mainstream, it could be more fun and less frustrating for you to just join up with a minority group, despite the extra effort needed to build it up.
There is a meme on Less Wrong, though, that rationalist communities are not just better-suited to the unique needs of rationalists, but also better in general. Rationality is the lens that sees its own flaws. We get along better, get fit faster, have more fun, and know how to do more things well. Through rationality, we learn to optimize everything in sight. Rationality should ultimately eat the whole world.
Again, you have to ask yourself: what are the odds that these beliefs are driven by valid evidence, as opposed to ordinary human instincts for supporting their own tribe and denigrating their neighbors? As Eliezer very fairly acknowledges, we don't even have decent metrics for measuring rationality itself, let alone for measuring the real-world effects that rationality supposedly has or will have on people's wealth, health, altruism, reported happiness, etc.
Do we *really* identify our own flaws and then act accordingly, or do we just accept the teachings of professional neuroscientists -- who may or may not be rationalists -- and invent just-so stories 'explaining' how our present or future conduct dovetails with those teachings? Take the "foveal blind spot" that tricks us into perceiving stars in the night sky as disappearing when we look straight at them. Do you (or anyone you know) really have the skill to identify a biological human flaw, connect it with an observed phenomenon, and then deviate from conventional wisdom on the strength of your analysis? If mainstream scientists believed that stars don't give off any light that strikes the Earth directly head-on, would you be able to find, digest, and apply the idea of foveal blind spots in order to prove them wrong? If not, do you still think that rationalist communities are better than other communities for intelligent but otherwise ordinary people? Why?
It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life. In case it turns out that life is not quite so convenient, maybe we should be a little humbler about our grand experiment. Even if we have good reason to assert that mainstream religious thinking is flawed, maybe we should be slower to advise people to give up the health benefits (footnote 15) of belonging, emotionally, to one or another religious community.
Bullshit
Finally, suppose you publicly declared yourself to have nigh-on-magical powers -- by virtue of this strange thing that few people in your area understand, "rationality," you can make yourself smarter, more disciplined, more fun to be around, and generally awesome-r. Of course, rationality takes time to blossom -- everyone understands that; you make it clear. You do not presently have big angelic powers, it is just that you will get your hands on them soon.
A few months go by, maybe a year, and while you have *fascinating* insights into cognitive biases, institutional inefficiencies, and quantum physics, your friends either can't understand them or are bored to tears by your omphaloskepsis. You need to come up with something that will actually impress them, and soon, or you'll suffer from cognitive dissonance and might have to back off of your pleasurable belief that rationality is better than other belief systems.
Lo and behold, you discover the amazing benefits of chemical stimulants! Your arcane insights into the flaws of the institutional medical establishment and your amazing ability to try out different experimental approaches and faithfully record which ones work best have allowed you to safely take drugs that the lay world shuns as overly dangerous. These drugs do, in fact, boost your productivity, your apparent energy, and your mood. You appear to be smarter and more fun than those around you who are not on rationally identified stimulants. Chalk one up for rationality.
Unless, of course, the drugs have undesirable long-term or medium-term effects. Maybe you develop tolerance and have to take larger and larger doses. Maybe you wear out your liver or your kidneys, or lower your bone density. Maybe you overestimate your ability to operate heavy machinery on polyphasic sleep cycles, and drive off the side of the road. Less Wrong is too young, as a meme cluster, for most of these hazards to have been triggered. I wouldn't bet on any one of those outcomes for any one drug...but if your answer to the challenges of life is to self-medicate, you're taking on a whole lot more risk than the present maturity of the discipline of rationality would seem to warrant.
Conclusion
I've tried my best not to frontally engage any of the internal techniques or justifications of rationality. On what Robin Hanson would call an inside view, rationality looks very, very attractive, even to me. By design, I have not argued here that, e.g., it is difficult to revive a frozen human brain, or that the FDA is the best judge of which drugs are safe.
What I have tried to do instead is imagine rationality and all its parts as a black box, and ask: what goes into it, and what comes out of it? What goes in is a bunch of smart nonconformists. What comes out, at least so far, is some strange advice. The advice pays off on kind of a bimodal curve: cryonics and SIAI pay off at least a decade in the future, if at all, whereas drugs and quitting religion offer excellent rewards now, but may involve heavy costs down the road.
The relative dearth of sustainable yet immediate behavioral payoffs coming out of the box leads me to suspect that the people who go into the box go there not so much to learn about superior behaviors, but to learn about superior beliefs. The main sense in which the beliefs are superior is in their ability to make tech/geek people think happy thoughts without 'paying' too much in bad outcomes.
I hold this suspicion with about 30% confidence. That's not enough to make me want to abandon the rationalist project -- I think, on balance, it still makes sense for us to try to figure this stuff out. It is enough for me to want to proceed more carefully. I would like to see an emphasis on low-hanging fruit. What can we safely accomplish this month? this year? I would like to see warnings and disclaimers. Instead of blithely informing everyone else of how awesome we are, maybe we should give a cheerful yet balanced description of the costs and benefits. It's OK to say that we think the benefits outweigh the costs...but, in a 2-minute conversation, the idea of costs should be acknowledged at least once. Finally, I would like to see more emphasis on testing and measuring rationality. I will work on figuring out ways to do this, and if anyone has any good measurement schemes, I will be happy to donate some of my money and/or time to support them.
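To make "testing and measuring" a little more concrete, here is one minimal, hypothetical sketch of a measurement scheme -- Brier-scoring your own probability estimates to track calibration. It is offered purely as an illustration under my own assumptions, not as something the post proposes or the community has endorsed; the function name and the sample predictions are made up.

```python
# Purely illustrative sketch: one narrow slice of "measuring rationality" --
# score the calibration of your own probability estimates with a Brier score.
# The prediction data below is invented for the example.

def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.
    0 is perfect; always saying 50% gives 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Each entry: (probability you assigned to the event, 1 if it happened, 0 if not).
my_predictions = [
    (0.9, 1),  # "90% sure I'll finish the project by Friday" -- finished it
    (0.7, 0),  # "70% sure the meetup will draw 20+ people" -- it didn't
    (0.3, 1),  # "30% sure this study replicates" -- it did
]

print("Brier score: %.3f" % brier_score(my_predictions))  # lower is better
```

Tracking a number like this month over month is at least a cheap, falsifiable place to begin, even if calibration captures only a small part of what "rationality" is supposed to deliver.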