A common question here is how the LW community can grow more rapidly. Another is why seemingly rational people choose not to participate.

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go. In this post, I try to clearly explain why I don't participate more and why some of my friends don't participate at all and have warned me not to participate further.

  • Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave 5% off your driving time. In other words, LW tends to conflate rationality and intelligence.

  • In particular, AI risk is overstated. There are a bunch of existential threats (asteroids, nukes, pollution, unknown unknowns, etc.). It's not at all clear whether general AI is a significant threat. It's also highly doubtful that the best way to address this threat is writing speculative research papers: I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and that it's necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

  • LW has a cult-like social structure. The LW meetups (or, at least, the ones I experienced) are very open to new people. Learning the keywords and some of the cached thoughts of the LW community results in a bunch of new friends and activities to do. However, involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. I imagine the rationality "training camps" do this to an even greater extent. LW recruiting (HPMOR, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

  • Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on (X), and sell a long plan that never quite achieves (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should"). Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.

  • LW membership would make me worse off. Though LW membership is an OK choice for many people needing a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I'm struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I "should" do. LW meetup attendance would work against me in all of these areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills). Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.

  • "Art of Rationality" is an oxymoron.  Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don't mind Harry's narcissism) and LW is fun to read, but that's as far as I want to get involved. Unless, that is, there's someone here who has experience programming vision-guided assembly-line robots and is looking for a side project with world-optimization potential.

Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult
[-]pjeby620

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go.

That's further than I go. Heck, what else is there, and why worry about whether you're going there or not?

I have also translated the Sequences, and organized a couple of meetups. :)

Here are some other things someone could do to go further:

  • organize a large international meetup;
  • rewrite the Sequences in a form more accessible for general public;
  • give a lecture about LW-style rationality at local university;
  • sign up your children for cryonics;
  • join a polyamorous community;
  • start a local polyamorous community;
  • move to Bay Area;
  • join MIRI;
  • join CFAR;
  • support MIRI and/or CFAR financially;
  • study the papers published by MIRI;
  • cooperate with MIRI to create more papers;
  • design a new rationality lesson;
  • build a Friendly AI.

Actually, PJ, I do consider your contributions to motivation and fighting akrasia very valuable. I wish they could someday become a part of an official rationality training (the hypothetical kind of training that would produce visible awesome results, instead of endless debates whether LW-style rationality actually changes something).

[-][anonymous]380
  • join a polyamorous community;
  • start a local polyamorous community;

Seriously? What does that have to do with anything?

6Algernoq
I agree, I don't see how polyamory or MIRI's research can be called "less wrong" than the alternatives. A common LW belief is that polyamory is a better way to have relationships for most people. I disagree. I see how polyamory is the "best" way for a selfish, pleasure-seeking child-free high-status leader to have relationships.
[-]ephion120

In my experience with the LW community, they see polyamory as an equally valid alternative to monogamy. Many practice, many don't, and poly people include those with children and those without.

Affirm. It touches on cognitive skills only insofar as mild levels of "resist conformity" and "notice what your emotions actually are" are required for naturally-poly people to notice this and act on it (or for naturally-mono or okay-with-either people to figure out what they are if it ever gets called into question), and mild levels of "calm discussion" are necessary to talk about it openly without people getting indignant at you. Poly and potential poly people have a standard common interest in some rationality skills, but figuring out whether you're poly and acting on it seems to me like a very bounded challenge---like atheism, or making fun of homeopathy, it's not a cognitive challenge around which you could build a lasting path of personal growth.

7Algernoq
I'd like to see more "calm discussion" of status differentials in relationships, because a general solution here would address nearly all concerns about polyamory. Thanks to HPMOR for helping me understand the real world.

One recipe for being a player is to go after lower-status (less-attractive) people, fulfill their romantic needs with a mix of planned romance, lies and bravado, have lots of sex, and then give face-saving excuses when abandoning them. This isn't illegal. It's very difficult to prosecute actually giving other people STDs, or coercing them into sex. Merely telling lies to get sex (or, to swap genders and stereotype, get status and excessive support without providing sex) isn't so bad in comparison.

I'm indignant at Evolution (not at polyamory, monogamy, men, etc.) because I strongly suspect several of my previous partners were raped, and unable to prosecute it. They sort-of got over it and just didn't tell future partners (me) about it. My evidence for this includes being told stories that sounded like half-truths (a stalker followed me! and I was drugged! and now I have this scar! but nothing happened!) and overly-specific denials (nothing's happened to me that would give me panic attacks!). Another quoted a book about recovering from sexual assault. I haven't actually asked any of them, but I don't want to because this conversation would be massively unpleasant as well as unhelpful. Hypothetically:

F: So, yeah. That happened.
M: I'm sorry, not your fault, etc...
M: So, you know who did it?
F: ...yes (in 90% of cases)
M: I want to know who so...
F: No. I'm not a barbarian. Let's move forward.
M: If (when) someone threatens you again, will you threaten them back?
F: No. Again, I'm not a barbarian. I'll avoid them socially but that's it, and I'm out of luck if they're not breaking any laws in public.
M: In my experience with bullies, they don't care about social punishment. They only care about credible physical or legal threats. They're also gene
9ephion
What concerns do you have, exactly? I've found that the increased fluidity and flexibility inherent to polyamory (vs monogamy, it can't touch singlehood there) are great for reducing the impact and duration of potentially abusive or unhealthy situations, as a) people often have other partners who can help mediate conflicts or alert red flags, b) to isolate a person, the abuser has to go to the additional step of having the person break up with all of their partners. Furthermore, individuals tend toward more satisfying relationships as time goes on, as the availability of other relationships tends to either cause less healthy/happy relationships to take less time/attention from the people involved or grow into more healthy/happy relationships.

We aren't talking about poly anymore, right? Because this would get a person a terrible reputation in any of the poly circles I know. Or, any social circle I'm a part of at all. Any social scene where this isn't frowned upon isn't the kind of scene I'd like to be a part of.
9Viliam_Bur
This is too interesting a topic to be hidden deep inside a comment thread on an article with a different topic. Some random thoughts here:

I would like to have more data about poly relationships. I don't have an example in my neighborhood, and even if I had, I wouldn't want to build my statistic on a single data point. And even if I found a dozen examples now, I could only observe how they are now, not what happens in the long term. (If I were 20, I could start poly dating and get data experimentally, but I am almost 40 now, so this doesn't seem like a good area for experimenting.) I would love to see an analysis by someone who is not a polyamory enthusiast, but who has observed many poly relationships in the long term, has some statistics, and can compare them with mono statistics. For example, whether divorces in the poly community are on average more or less civilized than in the mono community.

What exactly happens in a relationship depends not only on its formal structure, but very much on the personalities of the people involved. If we use monogamous relationships as a more familiar model, some relationships are awesome, and some relationships are horrible. Generally, marriage is considered more serious than dating, but there is also a variance; some people take their dating more seriously than other people take their marriage. In short: mono relationships are different. I expect the same for poly relationships. If a polyamorous group contains some horrible people, I would expect horrible results; but that's not an argument against polyamory. (Perhaps in a statistical sense: a larger group has a greater chance to contain a horrible person. But maybe it is easier for the other people to send the horrible person away: they will not remain alone if they do. But maybe it is more difficult to coordinate more people. Again, not enough data.)

Like army1987 said, different environments have different proportions of evil people. If you experienced only one, it seems like the whole world is the
7Algernoq
I wish I could up-vote this whole comment more, and especially this line. I agree with your points and it'd be interesting to see a top-level post about this. You're right; I don't feel like part of a "tribe" now, though I have some good friends/family, and it comes through in my writing. There are a few genuinely nice tribes I could join (by helping/entertaining tribe members to build reciprocity, and signaling belonging with my style choices), and I should prioritize this for sanity's sake. Ideally, I would find a tribe of smart and well-adjusted people who want to try to not die, i.e. try to get rich and then make the needed science happen. There are only a few people interested in this project, though, and they tend to be crazy, making forming such a tribe near-impossible. Joining a tribe that values being a good person and enjoying cultured recreation (and avoiding depressive patterns of thinking about how all conventional roads lead to death) is probably a good way to go. This is a strange game we all are playing, where only the meaningless rules are clearly written.
8Viliam_Bur
(This could be just hindsight reasoning, but the fact that you wrote this article is evidence for not having the tribe -- otherwise you probably would have discussed the topic with your tribe, and got to some satisfactory conclusion, instead of asking us to defend ourselves.)

Having a tribe that shares at least some of your values is very good for mental health. I used to be in a tribe of smart religious people, with whom I could reasonably debate about many things (at the cost of silently suffering when they tried to apply similar reasoning to some supernatural topic, which fortunately didn't happen too often). I also was in a tribe of people interested in psychology, which later mostly fell apart, but some people still see each other once in a while. Then I had a few friends to talk about programming, or other specialized topics. Also, when I had a girlfriend, we shared some interests.

This was all nice, but there was this... compartmentalization. I knew I can debate a topic X only in a group A, a topic Y only in a group B, and a topic Z nowhere. Sometimes merely because they wouldn't be interested in a topic, but sometimes the topic would go directly against the values of the group. (You can't debate atheism with religious people, or skepticism with people who believe that "positive thinking" is the answer to everything and that reality is only as much real as you believe it to be.) Or perhaps I wanted to put two ideas together, like self-improvement and rationality, but I only knew people interested in self-improvement through irrational means, or in the kind of skepticism that opposed any desire for self-improvement as naiveté. Or I wanted to become more rational, in everything, consistently, as a lifestyle, and people just didn't understand why or how.

So I felt like my mind was cut into multiple aspects, some of them acceptable in some groups, some of them acceptable nowhere. And it seemed like the best thing I could realistically have, and that perhaps I
5Algernoq
Exactly! I tend to affiliate with different friends/groups for different reasons. It tends to be easier to find friends for normal, low-risk goals (living well, studying) than for weird, high-risk goals (getting very rich, ending death), and with any friend there tend to be points of disagreement. My understanding is this is a challenge for most adult city-dwellers.

One other approach (also implemented by EY) is to slowly change his friends' beliefs to more closely agree with his own.

If the goal is simply acceptance, a successful strategy seems to be to use higher status/value in some areas to make up for a tendency to share "weird" thoughts in other areas. In other words, attract friends who will acknowledge and support the rationalist self-improvement even though they don't take that path themselves, by providing value/fun/leadership in other areas.

Good luck with the meetups/tribe!
1MugaSofer
Firstly, I just want to second the point that this is way too interesting for, what, a fifth-level recursion? Secondly: Is this ... a winning strategy? In any real sense? I mean, yes, it's easier to sleep with unattractive people. But you don't want to sleep with unattractive people. That is what "attractiveness" refers to - the quality of people wanting you [as a sexual/romantic partner, by default.] Now, the fact that it then becomes easy for attractive psychopaths to create relationships for nefarious purposes is ... another matter. But I'm confused as to why you see the choices as "player, but unethical" or "non-player, but good". Surely you want to be a "player" who has sex with people you are actually attracted to?
5Dentin
There are a lot of biases and cultural norms to overcome in making the transition from mono- to poly-amory. While I've remained monogamous myself, it's purely for time and efficiency reasons, and if I didn't have Stuff To Do, I'd probably go that direction as well.

While I've remained monogamous myself, it's purely for time and efficiency reasons

Worst Valentine's Day card ever.

3Sabiola
Yeah, that's funny. But Dentin does have a point, even if he didn't formulate it very romantically. It takes time and effort to do a relationship justice; and if you don't have that time, it's better to stay monogamous.
1Algernoq
Some of these exist for good reasons. Among other issues, polyamory gives high-status men an excuse to tell low-status men that their feelings of discomfort are "biases and cultural norms to overcome". I'd say it's just like monogamous sex: it's best not to (if you're trying to maximize productivity), but if you're going to do it anyway you might as well do it in a well-thought-out happiness-increasing way.
2[anonymous]
To each their own; I'm not judging. I didn't know that was a common belief here. I can see how it makes sense for certain people's lifestyle choices. I just don't see the connection to rationality.
6TheMajor
I think a part of the reason is that most people would never even consider a polyamorous relationship, whereas for quite a lot of people it might be a better option than the alternatives. If this is true, then being in a polyamorous relationship is a strong indicator of actually considering alternatives and embracing the truth when stumbling upon it. Having said all that, I think it is not one of the central activities related to LW; the implication mentioned above is valid only so long as people don't make a habit out of trying (radically) different sorts of romance.
4Viliam_Bur
That topic used to be discussed on LW... but now I realize I haven't heard about it much recently.

I feel like the more important question is: How specifically has LW succeeded in making this kind of impression on you? I mean, are we so bad at communicating our ideas? Because many things you wrote here seem to me like quite the opposite of LW. But there is a chance that we really are communicating things poorly, and somehow this is an impression people can get. So I am not really concerned about the things you wrote, but rather about the fact that someone could get this impression. Because...

Rationality doesn't guarantee correctness.

Which is why this site is called "Less Wrong" in the first place. (Instead of e.g. "Absolutely Correct".) In many places in the Sequences it is written that, unlike the hypothetical perfect Bayesian reasoner, humans are pretty lousy at processing available evidence, even when we try.

deciding what to do in the real world requires non-rational value judgments

Indeed, this is why a rational paperclip maximizer would create as many paperclips as possible. (The difference between irrational and rational paperclip maximizers is that the latter has a better model of the world, and thus probably succeeds in creating more paperclips on average.... (read more)

WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out of their PhD program to go to LW "training camps" (which, by the way, don't take a few months).

When I visited MIRI one of the first conversations I had with someone was them trying to convince me not to pursue a PhD. Although I don't know anything about the training camp part (well, I've certainly been repeatedly encouraged to go to a CFAR camp, but that is only a weekend and given that I teach for SPARC it seems like a legitimate request).

Convincing someone not to pursue a PhD is rather different than convincing someone to drop out of a top-10 PhD program to attend LW training camps. The latter does indeed merit the response WTF.

Also, there are lots of people, many of them graduate students and PhDs themselves, who will try to convince you not to do a PhD. It's not an unusual position.

I mean, are we so bad at communicating our ideas?

I find this presumption (that the most likely cause for disagreement is that someone misunderstood you) to be somewhat abrasive, and certainly unproductive (sorry for picking on you in particular, my intent is to criticize a general attitude that I've seen across the rationalist community and this thread seems like an appropriate place). You should consider the possibility that Algernoq has a relatively good understanding of this community and that his criticisms are fundamentally valid or at least partially valid. Surely that is the stance that offers greater opportunity for learning, at the very least.

I certainly considered that possibility and then rejected it. (If there are more than 2 regular commenters here who think that rationality guarantees correctness and will solve all of their life problems, I will buy a hat and then eat it).

4ThisSpaceAvailable
Whether rationality guarantees correctness depends on how one defines "rationality" and "correctness". Perfect rationality, by most definitions, would guarantee correctness of process. But one aspect of humans' irrationality is that they tend to focus on results, and think of something as "wrong" simply because a different strategy would have been superior in a particular case.
2Luke_A_Somers
When you believe ~A and someone says 'You believe A', what else is there? From most generous to least:

  • I misspoke, or I misunderstood your saying something else as saying I believe A.
  • You misheard me, or misspoke when saying that I believe A.
  • You're arguing in bad faith.

Note that 'I actually secretly believe A' is not on the list, so it seems to me that Viliam was being as generous as possible.

I have come across serious criticism of the PhD programs at major universities, here on LW (and on OB). This is not quite the same as a recommendation to not enroll for a PhD, and it most certainly is not the same as a recommendation to quit from an ongoing PhD track, but I definitely interpreted such criticism as advice against taking such a PhD. Then again I have also heard similar criticism from other sources, so it might well be a genuine problem with some PhD tracks.

For what it's worth my personal experiences with the list of main points (not sure if this should be a separate post, but I think it is worth mentioning):

Rationality doesn't guarantee correctness.

Indeed, but as Viliam_Bur mentions, this is way too high a standard. I personally notice that while not always correct, I am certainly correct more often thanks to the ideas and knowledge I found at LW!

In particular, AI risk is overstated

I am not sure but I was under the impression that your suggestion of 'just building some AI, it doesn't have to be perfect right away' is the thought that researchers got stuck on last century (the problem being that even making a dumb prototype was insanely complicated), when peopl... (read more)

8David_Gerard
I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut'n'pasted the leader's opinions into their heads. And that's definitely something that happens around LW. Note that this does not require malice or even intent on the part of said leader! It's something happening in the heads of the recipients. But the leader needs to be aware of it - it's part of the cult attractor, selecting for people looking for stuff to cut'n'paste into their heads.

I know this one because the loved one is pursuing ordination in the Church of England ... and basically has this superpower: convincing people of pretty much anything. To the point where they'll walk out saying "You know, black really is white, when you really think about it ..." then assume that that is their own conclusion that they came to themselves, when it's really obvious they cut'n'pasted it in. (These are people of normal intelligence, being a bit too easily convinced by a skilled and sincere arguer ... but loved one does pretty well on the smart ones too.) As I said to them, "The only reason you're not L. Ron Hubbard is that you don't want to be. You'd better hope that's enough."

Edit: The tell is not just cut'n'pasting the substance of the opinions, but the word-for-word phrasing.

I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut'n'pasted the leader's opinions into their heads. And that's definitely something that happens around LW.

The failure mode might be that it's not obvious that an autodidact who spent a decade absorbing relevant academic literature will have a very different expressive range than another autodidact who spent a couple months reading the writings of the first autodidact. It's not hard to get into the social slot of a clever outsider because the threshold for cleverness for outsiders isn't very high.

The business of getting a real PhD is pretty good at making it clear to most people that becoming an expert takes dedication and work. Internet forums have no formal accreditation, so there's no easy way to distinguish between "could probably write a passable freshman term paper" knowledgeable and "could take some months off and write a solid PhD thesis" knowledgeable, and it's too easy for people in the first category to be unaware how far they are from the second category.

3FeepingCreature
I don't know. On the one hand, that's how you would expect it to look if the leader is right. On the other hand, "cult leader is right" is also how I would expect it to feel if the cult leader were merely persuasive. On the third hand, I don't feel like I absorbed lots of novel things from cult leader, but mostly concretified notions and better terms for ideas I'd held already, and I remember many Sequences posts having a critical comment at the top. A further good sign is that the Sequences are mostly retellings of existing literature. It doesn't really match the "crazy ideas held for ingroup status" profile of cultishness.
3David_Gerard
The cut'n'paste not merely of the opinions, but of the phrasing is the tell that this is undigested. Possibly this could be explained by complete correctness with literary brilliance, but we're talking about one-draft daily blog posts here.
-1FeepingCreature
I feel like charitably, another explanation would just be that it's simply a better phrasing than people come up with on their own. So? Fast doesn't imply bad. Quite the opposite, fast-work-with-short-feedback-cycle is one of the best ways to get really good.
1dxu
This (to me) reads like you're implying intentionality on the part of the writers to target "a very susceptible audience". I submit the alternative hypothesis that most people who make posts here tend to be of a certain personality type (like you, I'm looking for a better term than "personality type" but failing to find anything), and as a result, they write stuff that naturally attracts people with similar personality types. Maybe I'm misreading you, but I think it's a much more charitable interpretation than "LW is intentionally targeting psychologically vulnerable people". As a single data point, for instance, I don't see myself as a particularly insecure or unstable person, and I'd say I'm largely here because much of what EY (and others on LW) wrote makes sense to me, not because it makes me feel good or fuels my ego. With respect, I'd say this is most likely an impossible endeavor. Anyone who wants to try is welcome to, of course, but I'm just not seeing someone who can't grok fractions being able to comprehend more than 5% of the Sequences.
5Algernoq
Not generally -- I keep coming back for the clear, on-topic, well-reasoned, non-flame discussion. Many (I guess 40-70%) of meetups and discussion topics are focused on pursuing rational decision-making for self-improvement. Honestly I feel guilty about not doing more work, and I assume other readers are here not because it's optimal but because it's fun. There's also a sentiment that being more Rational would fix problems. Often, it's a lack of information, not a lack of reasoning, that's causing the problem.

I agree, and I agree LW is frequently useful. I would like to see more reference of non-technical experts for non-technical topics. As an extreme example, I'm thinking of a forum post where some (presumably young) poster asked for a Bayesian estimate on whether a "girl still liked him" based on her not calling, upvoted answers containing Bayes' Theorem and percentage numbers, and downvoted my answer telling him he didn't provide enough information. More generally, I think there can be a similar problem to that in some Christian literature where people will take "(X) Advice" because they are part of the (X) community even though the advice is not the best available advice.

Essentially, I think the LW norms should encourage people to learn proven technical skills relevant to their chosen field, and should acknowledge that it's only advisable to think about Rationality all day if that's what you enjoy for its own sake. I'm not sure to what extent you already agree with this. A few LW efforts appear to me to be sub-optimal and possibly harmful to those pursuing them, but this isn't the place for that argument.

Not answering this question is limiting the spread of LW, because it's easy to dismiss people as not sufficiently intellectual when they don't join the group. I don't know the answer here. A movement aiming to remove errors in thinking is claiming a high standard for being right.

The PhD student dropping out of a top-10 school to try to do a startup af

The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend. I will edit my post to avoid spreading rumors, but I trust the source.

If it did happen, then I want to know that it happened. It's just that this is the first time I even heard about a month-long LW event. (Which may be information about my ignorance -- EDIT: it was, indeed -- since till yesterday I didn't even know SPARC takes two weeks, so I thought one week was the maximum for an LW event.)

I heard a lot of "quit the school, see how successful and rich Zuckerberg is" advice, but it was all from non-LW sources.

I can imagine people at some LW meetup giving this kind of advice, since there is nothing preventing people with opinions of this kind to visit LW meetups and give advice. It just seems unlikely, and it certainly is not the LW "crowd wisdom".

Here's the program he went to, which did happen exactly once. It was a precursor to the much shorter CFAR workshops: http://lesswrong.com/lw/4wm/rationality_boot_camp/

That said, as his friend I think the situation is a lot less sinister than it's been made out to sound here. He didn't quit to go to the program; he quit a year or so afterwards to found a startup. He wasn't all that excited about his PhD program and he was really excited about startups, so he quit and founded a startup with some friends.

0Viliam_Bur
Thanks! Now I remember I heard about that in the past, but I forgot completely. It actually took ten weeks!
4TheMajor
Embracing the conclusion implied by new information, even if it is in disagreement with your initial guess, is a vital skill that many people do not have. I was first introduced to this problem here on LW. Of course your claim might still be valid, but I'd like to point out that some members (me) wouldn't have been able to take your advice if it wasn't for the material here on LW. The problem with this example is really interesting - there exists some (subjectively objective) probability, which we can find with Bayesian reasoning. Your recommendation is meta-advice: rather than attempting to find this probability, you suggest investing some time and effort to get more evidence. I don't see why this would deserve downvotes (rather, I would upvote it, I think), but note that a response containing percentages and Bayes' Theorem is an answer to the question.
3ChristianKl
Saying you didn't provide enough information for a probability estimate deserves downvotes because it misses the point. You can give probability estimates based on any information that's presented. The probability estimate will be better with more information but it's still possible to do an estimate with low information.
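For concreteness, here is a minimal sketch of what an estimate from low information can look like: a single piece of evidence ("she hasn't called in a week") updating a prior via Bayes' theorem. All the numbers are made up purely for illustration; nothing in the thread pins them down.

```python
# Hypothetical numbers, chosen only to illustrate the update.
prior_likes = 0.5            # P(she still likes him) before considering the evidence
p_no_call_if_likes = 0.3     # P(no call for a week | she still likes him)
p_no_call_if_not = 0.8       # P(no call for a week | she doesn't)

# Bayes' theorem: P(likes | no call) = P(no call | likes) * P(likes) / P(no call)
p_no_call = (p_no_call_if_likes * prior_likes
             + p_no_call_if_not * (1 - prior_likes))
posterior = p_no_call_if_likes * prior_likes / p_no_call

print(f"P(still likes him | no call) ~= {posterior:.2f}")  # ~= 0.27
```

Even with only one noisy observation the machinery runs; more information just sharpens the likelihoods.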
0Luke_A_Somers
Using a Value of Information calculation would be best, especially if tied to proposed experiments.
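To illustrate, a toy Value-of-Information calculation under invented utilities (the action names and numbers are assumptions, not anything from the thread): compare the expected utility of acting now against the expected utility of acting after a cheap experiment, such as simply asking, that resolves the uncertainty; the difference is an upper bound on what the experiment is worth.

```python
# Toy numbers, purely illustrative; "ask_out"/"give_up" are hypothetical actions.
p_likes = 0.27  # current belief, e.g. the posterior from the sketch above

utility = {
    ("ask_out", True): 10, ("ask_out", False): -2,
    ("give_up", True): -5, ("give_up", False): 0,
}

def expected_utility(action, p):
    return p * utility[(action, True)] + (1 - p) * utility[(action, False)]

# Best available action under current uncertainty:
eu_now = max(expected_utility(a, p_likes) for a in ("ask_out", "give_up"))

# With perfect information, pick the best action in each possible state:
eu_informed = (p_likes * max(utility[("ask_out", True)], utility[("give_up", True)])
               + (1 - p_likes) * max(utility[("ask_out", False)], utility[("give_up", False)]))

print(f"value of perfect information ~= {eu_informed - eu_now:.2f}")  # ~= 1.46
```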
-1ChristianKl
At the same time you seem to criticise LW both for being self-help and for approaching rationality in an intellectual way that doesn't maximize life outcomes. I do think plenty of people on LW care about rationality in an intellectual way and care about developing the idea of rationality, and about questions such as what happens when we apply Bayes' theorem to situations where it usually isn't applied.

In the case of deciding whether "a girl still likes a guy", a practical answer focused on the situation would probably encourage the guy to ask the girl out. As you describe the situation, nobody actually gave the advice that calculating probabilities is a highly useful way to deal with the issue. However, that doesn't mean that the question of applying Bayes' theorem to the situation is worthless. You might learn something about the practical application of Bayes' theorem. You also get probability numbers that you could use to calibrate yourself. Do you argue that calibrating your predictions for high-stakes emotional situations isn't a skill worth exploring, just because we live in a world where nearly nobody is good at making calibrated predictions in high-stakes emotional situations?

At LW we try to do something new. The fact that new ideas often fail doesn't imply that we shouldn't experiment with new ideas. If you aren't curious about exploring new ideas and only want practical advice, LW might not be the place for you. The simple aspect of feeling agentship in the face of uncertainty also shouldn't be underrated.

Are you arguing that there aren't cases where a PhD student has a great idea for a startup and shouldn't put that idea into practice and leave his PhD? Especially when he might have got the connections to secure the necessary venture capital? I don't know about month-long LW events, except maybe internships with an LW-affiliated organisation. Doing internships in general can bring people to do something they wouldn't hav
2Algernoq
No, I agree it's generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available. I guess what's really bothering me here is that less-secure or less-wise people can be taken advantage of by confident-sounding higher-status people. I suppose this is no more true in LW than in the world at large.

I respect trying new things. Hooray, agency!

This is a question I hope to answer.

I'm arguing that it was the wrong move in this case, and hurt him and others. In general, most startups fail, ideas are worthless compared to execution, and capital is available to good teams.
4kbaxter
By what metric was his decision wrong? If he's trying to maximize expected total wages over his career, staying in academia isn't a good way to do that. Although he'd probably be better off at a larger, more established company than at a startup. If he's trying to maximize his career satisfaction, and he wasn't happy in academia but was excited about startups, he made a good decision. And I think that was the case here.

Some other confounding factors about his situation at the time:

  • He'd just been accepted to YCombinator, which is a guarantee of mentoring and venture capital
  • Since he already had funding, it's not like he was dumping his life savings into a startup expecting a return
  • He has an open invitation to come back to his PhD program whenever he wants

If you still really want to blame someone for his decision, I think Paul Graham had a much bigger impact on him than anyone associated with LessWrong did.
7Algernoq
YC funding is totally worth going after! He made the right choice given that info. That's what I get for passing on rumors.
0ChristianKl
It's an online discussion. There's a bunch of information that might not be shared because it's too private to be shared online. I certainly wouldn't share all information about a romantic interaction on LW. But I might share enough information to ask an interesting question. I do consider this case to be an interesting question. I like it when people discuss abstract principles like rational decision-making via Bayes' theorem based on practical real-life examples instead of only far-out thought experiments.

If I'm understanding you right, you don't even know the individual in question. People drop out of PhD programs all the time. I don't think you can say whether or not they have good reasons for doing so without investigating the case on an individual basis.
-2ThisSpaceAvailable
I'd just like to point out that ranking is a function of both the school and the metric, and thus the phrase "top-10 school" is not really well-formed. While it does convey significant information, it implies undue precision, and allowing people to sneak in unstated metrics is problematic.
0TheAncientGeek
But where's the training in refining your values?
-2hairyfigment
And when I think of 'LW failure modes', I imagine someone acting without further analysis. For example, let's say a member of the general population calls people with different political views irrational, and opines that they would raise the quality of some website by leaving. If that person followed through by stalking them and downvoting (manually?) all their past comments, I would conclude he had a mental illness.
4ChristianKl
Plenty of US liberals consider people who voted for Bush irrational and wouldn't want them to be part of the political discourse. The same goes in the other direction. Welcome to the internet. There are plenty of people who misbehave in online forums. Most online forums are simply not very public about members who they ban and whose posts they delete. I don't think stalking is a good word for the documented behavior in this case as all actions happened on this website. There are people who actually do get stalked for things they write online and who do get real life problems from the stalking.
-2hairyfigment
Sure, OK. You don't say. My point is that many would verbally agree with such claims, but very few become Dennis Markuze.
6ChristianKl
As far as I know, nobody in this community became Dennis Markuze. I don't have the feeling that LW is above the internet base rate. Given how little LW is moderated, it's an extremely civil place.

LW has a cult-like social structure. ...

Where the evidence for this is:

Appealing to people based on shared interests and values. Sharing specialized knowledge and associated jargon. Exhibiting a preference for like minded people. More likely to appeal to people actively looking to expand their social circle.

Seems a rather gigantic net to cast for "cults".

[-]Cyan180

Well, there's this:

However, involvement in LW pulls people away from non-LWers.

But that is similarly gigantic -- on this front, in my experience LW isn't any worse than, say, joining a martial arts club. The hallmark of cultishness is that membership is contingent on actively cutting off contact with non-cult members.

5Algernoq
Compared to a martial arts club, LW goals are typically more all-consuming. Martial arts is occasionally also about living well, while LW encourages optimizing all aspects of life.
5Cyan
Sure, that's a distinction, but to the extent that one's goals include making/maintaining social connections with people without regard to their involvement in LW so as to be happy and healthy, it's a distinction that cuts against the idea that "involvement in LW pulls people away from non-LWers". This falls under "the utility function is not up for grabs". It finds concrete expression in the goal factoring technique as developed by CFAR, which is designed to avoid failure modes like, e.g., cutting out the non-LWers one cares about due to some misguided notion that that's what "rationality" requires.
[-]dthunt340

Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

Art in the other sense of the word. Think more along the lines of skills and practices.

I think "art" here is mainly intended to call attention to the fact that practical rationality's not a collection of facts or techniques but something that has to be drilled in through deliberate long-term practice: otherwise we'd end up with a lot of people that can quote the definitions of every cognitive bias in the literature and some we invented, but can't actually recognize when they show up in their lives. (YMMV on whether or not we've succeeded in that respect.)

Some of the early posts during the Overcoming Bias era talk about rationality using a martial arts metaphor. There's an old saying in that field that the art is 80% conditioning and 20% technique; I think something similar applies here. Or at least should.

(As an aside, I think most people who aren't artists -- martial or otherwise -- greatly overstate the role of talent and aesthetic invention in them, and greatly underestimate the role of practice. Even things like painting aren't anywhere close to pure aesthetics.)

0[anonymous]
Wang Yangming may be relevant here.

Ahh, that makes more sense.

Would it be fair to characterize most of your complaints as roughly "Less Wrong focuses too much on truth seeking and too little on instrumental rationality - actually achieving material success"?

I agree with that.

In that case, I'm afraid your goals and the goals of many people here may simply be different. The common definition of rationality here is "systematic winning". However, this definition is very fuzzy because winning is goal dependent. Whether you are "winning" is dependent on what your goals and values are.

Can't speak for anyone else, but the reason why I am here is because I like polite but vigorous discussion. It's nice to be able to discuss topics with people on the internet in a way that does not drive me crazy. People here are usually open to new ideas, respectful, yet also uncompromising in the force of their arguments. Such an environment is much more helpful to me in learning about the world than the adversarial nature of most forum discussions. My goal in reading LessWrong is mostly finding likeminded people who I can talk to, share ideas with, learn from and disagree with, all without any bad feelings. That is a rare thing.

If your goal is achieving material success there are certainly very general tools and skills you can learn like getting over procrastination, managing your emotional state, or changing your value system to achieve your goals. CFAR... (read more)

9iarwain1
See this post and Anna Salamon's (partial) response.
9Will_BC
Those posts are 4 years old and 2 years older than CFAR. I do think that LW could and should do better with instrumental rationality.

Note that opinions differ on this topic, e.g. someone recently referred to LW as a "signaling and self-help cesspit" and got upvoted. Personally, I like seeing self-help stuff and I would encourage you to be the change you want to see :)

5Algernoq
I saw a LOT of this, as well as some good/useful stuff.
3Will_BC
It's in the works. I've got a few ideas, but right now I'm running them by family and friends. I have some ambitious goals but I'll probably start small. I would like to see some big changes happen in the world, and I don't think that working in the most straightforward way towards the Singularity is the only way to bring them about.
1beth
When you're ready to share these ideas, please let me know how I can help.
0Will_BC
Thank you very much for the offer. I should have a post up in over a week and under a month.
2Algernoq
Interesting links. Looks like this discussion has happened before.

[I]nvolvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. [...] LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

I think you've got the causation going the wrong way here. LW does target a lot of socially awkward intellectuals. And a lot of LWers do harbor some contempt for their "less rational" peers. I submit, however, that this is not because they're LWers but rather because they're socially awkward intellectuals.

American geek culture has a strong exclusionist streak: "where were you when I was getting beaten up in high school?" Your average geek sees himself (using male pronouns here because I'm more familiar with the male side of the culture) as smarter and morally purer than Joe and Jane Sixpack -- who by comparison are cast as lunkish, thoughtless, cruel, but attractive and socially successful -- and as having suffered for that, which in turn justifies treating th... (read more)

involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals.

Alternative hypothesis: Once a certain kind of person realizes that something like the LW community is possible and even available, they will gravitate towards it - not because LW is cultish, but because the people, social norms, and ideas appeal to them, and once that kind of interaction is available, it's a preferred substitute for some previously engaged-in interaction. From the outside, this may look like contempt for Normals. But from personal experience, I can say that from the inside it feels like you've been eating gruel all your life, and that's what you were used to, but then you discovered actual delicious food and don't need to eat gruel anymore.

Yes, it's rather odd to call a group of like minded people a cult because they enjoy and prefer each other's company.

In grad school I used to be in a couple of email lists that I enjoyed because of the quality of the intellectual interaction and the topics discussed, one being Extropians in the 90s. I'd given that stuff up for a long time.

Got back into it a little a few years ago. I had been spending time at a forum or two, but was getting bored with them primarily because of the low quality of discussion. I don't know how I happened on HPMOR, but I loved it, and so naturally came to the site to take a look. Seeing Jaynes, Pearl, and The Map is not the Territory served as good signaling to me of some intellectual taste around here.

I didn't come here and get indoctrinated - I saw evidence of good intellectual taste and that gave me the motivation to give LW a serious look.

This is one suggestion I'd have for recruiting. Play up canonical authors more. Jaynes, Kahneman, and Pearl convey so much more information than Bayesian analysis, cognitive biases, and causal analysis. None of those guys are the be-all and end-all of their respective fields, but identifying them plants a flag where we see value that can attract similarly minded people.

I've debated myself about writing a detailed reply, since I don't want to come across as some brainwashed LW fanboi. Then I realized this was a stupid reason for not making a post. Just to clarify where I'm coming from.

I'm in more-or-less the same position as you are. The main difference being that I've read pretty much all of the Sequences (and am slowly rereading them) and I haven't signed up for cryonics. Maybe those even out. I think we can say that our positions on the LW - Non-LW scale are pretty similar.

And yet my experience has been almost completely opposite of yours. I don't like the point-by-point response on this sort of thing, but to properly respond and lay out my experiences, I'm going to have to do it.

Rationality doesn't guarantee correctness.

I'm not going to spend much time on this one, seeing as how pretty much everyone else commented on this part of your post.

Some short points, though:

Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. This is in a part of the Sequences you've probably hav

... (read more)

Thanks for the detailed reply!

Based on this feedback, I think my criticisms reflect mostly on my fit with the LWers I happened to meet, and on my unreasonably high standards for a largely informal group.

Upvoted for updating.

7TheAncientGeek
One could reasonably expect significantly less.

Hi Algernoq,

Thanks for writing this. This sentence particularly resonated:

LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills).

I was definitely explicitly discouraged from pursuing a PhD by certain rationalists and I think listening to their advice would have been one of the biggest mistakes of my life. Unfortunately I see this attitude continuing to be propagated so I am glad that you are speaking out against it.

EDIT: Although, it looks like you've changed my favorite part! The text that I quoted the above was not the original text (which talked more about dropping out of PhD and starting a start-up).

5Bruno_Coelho
This anti-academic feeling is something I associate with lesswrong, mostly because people can find programming jobs without necessarily having a degree.
5Algernoq
Glad to hear it! For others considering a PhD: usually the best (funded) PhD program you got into is a good choice for you. But only do it if you enjoy research/learning for its own sake.
4jsteinhardt
Tangential, but: I'm not sure I agree with this, except insofar as any top-tier or even second-tier program will pay for your graduate education, at least in engineering fields, and so if they do not then that is a major red flag. I would say that research fit with your advisor, caliber of peers, etc. is much more important.
[-]tslarm110

I interpreted "the best (funded) PhD program you got into" to mean 'the best PhD program that offered you a funded place', rather than 'the best-funded PhD program that offered you a place'. So Algernoq's advice need not conflict with yours, unless he did mean 'best' in a very narrow sense.

1Algernoq
OK, I'll change it back. I heard it secondhand so I deleted it.

Thanks for being bold enough to share your dissenting views. I'm voting you up just for that, given the reasoning I outline here.

I think you are doing a good job of detaching the ideas of LW that you think are valuable and adopting them, and ditching the others. Kudos. Overall, I'm not sure about the usefulness of debating the goodness or badness of "LW" as a single construct. It seems more useful to discuss specific ideas and make specific criticisms. For example, I think lukeprog offered a good specific criticism of LW thinking/social norms here. In general, if people take the time to really think clearly and articulate their criticisms, I consider that extremely valuable. On the opposite end of the spectrum, if someone says something like "LW seems weird, and weird things make me feel uncomfortable" that is not as valuable.

I'll offer a specific criticism: I think we should de-emphasize the sequences in the LW introductory material (FAQ, homepage, about page). (Yes, I was the one who wrote most of the LW introductory material, but I was trying to capture the consensus of LW at the time I wrote it, and I don't want to change it without the change being a consensus ... (read more)

4John_Maxwell
Note: Here's Yvain on why the sequences are great, to provide some counterpoint to my criticism above.

Where available, I would emphasize the original source material over the sequence rehash of them.

This would greatly lower the Phyg Phactor, limit in-group jargon, better signal to outsiders who also value that source material, and possibly create ties to other existing communities.

Where available, I would emphasize the original source material over the sequence rehash of them.

Needed: LW wiki translations of LW jargon into the proper term in philosophy. (Probably on the existing jargon page.)

I strongly disagree with this. I don't care about cult factor: The sequences are vastly more readable than the original sources. Almost every time I've tried to read stuff a sequence post is based on I've found it boring and given up. The original sources already exist and aren't attracting communities of new leaders who want to talk about and do stuff based on them! We don't need to add to that niche. We are in a different niche.

2blacktrance
Seconded. I think HPMOR and the Sequences are a better introduction to rationality than the primary texts would be.
2buybuydandavis
I didn't. I've read them all. Don't know how someone finds Jaynes "boring", but different strokes, etc.

Phyg +1

Jaynes, Pearl, Kahneman, and Korzybski had followings long before LW and the sequences existed. Korzybski's Institute for General Semantics has been around since 1938, and was fairly influential, intellectually and culturally. They actually have some pretty good summary material, if reading Korzybski isn't your thing (and I can understand that one, as he was a tiresome windbag).

If you like the sequences, great, read them. I think you're missing out on a lot if you don't read the originals. Simply as an outreach method, listing the various influences would pique more interest than "We've got a smart guy here who wrote a lot of articles! Come read them!"

The sequences aren't the primary outreach advantage here - HPMOR is. Much like Rand's novels are for her.
7drethelin
My outreach method is usually not to do that, but to link to a specific article about whatever we happened to be talking about, which is a lot faster than saying "Here, read a textbook on probability" or "look at this Tversky and Kahneman study!" Then again, I don't do a ton of LW outreach.
6John_Maxwell
We could direct people to Wikipedia's list of cognitive biases (putting effort in to improving the articles as appropriate and getting a few people to add the articles to their Wikipedia watchlists). Improving Wikipedia articles has the positive externality of helping anyone who reads the article (of which the LW-curious will make up a relatively small fraction). I think the ideal way to present rationality might be a diagnostic test that lets you know where your rationality weaknesses are and how to improve them, but I'm not sure if this is doable/practical.
[-]kalium120

I read LW for entertainment, and I've gotten some useful phrases and heuristics from it, but the culture bothers me (more what I've seen from LWers in person than on the site). I avoid "rationalists" in meatspace because there's pressure to justify my preferences in terms of a higher-level explicit utility function before they can be considered valid. People of similar intelligence who don't consider themselves rationalists are much nicer when you tell them "I'm not sure why, but I don't feel like doing xyz right now." (To be fair, my sample is not large. And I hope it stays that way.)

[-]philh100

FWIW, I have the opposite experience with online versus offline.

I avoid "rationalists" in meatspace because there's pressure to justify my preferences in terms of a higher-level explicit utility function before they can be considered valid.

It wouldn't surprise me at all to see this on the website, but I wouldn't expect it to happen in meatspace.

(Obviously meetups vary, but I help organize the London meetup, I went to the European megameetup, I went to CFAR, and I've spent a small amount of time with the SF/Berkeley crowd.)

8mare-of-night
I once had a friend who got really worried when I invited him to come to an LW meetup with me, and later found out he had another friend who'd read this site and then decided that everyone else needed to be more rational to make her own life easier. The worst I've encountered in meatspace personally was being asked why I believe what I believe a whole lot (which can be really useful when you're actually deciding something, but being asked to cite your sources in conversation also really interrupts the flow of things), which was more than balanced out by the good conversations. So my general impression is that LW has a high standard deviation in acquaintance/conversationalist quality, and either there's more good than bad or I've had good luck.
4Algernoq
I experienced this too, though I claimed an explicit utility function (making self-replicating robots) that no one was prepared to argue with, so I didn't get anyone telling me my feelings were irrational and should be ignored. I also noticed some slow decision-making. Recommendation: in a large group, use a heuristic that takes less than 10 minutes of discussion to decide where/when to go for dinner.
0NancyLebovitz
Any suggestions for a better heuristic?
4Toggle
The 'veto method' has worked quite well for me, although I haven't tested it for groups larger than about ten. Assuming that the group has reached a consensus on eating, any member of the group is free to suggest a restaurant. After the location is suggested, any member of the group can veto that suggestion, but in exchange the vetoing member is required to suggest a different restaurant. Repeat until a suggestion is made that no member of the group vetoes.
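For concreteness, here is a minimal sketch of the veto method in Python. The function names, and the assumption that a vetoed restaurant can't be re-suggested, are mine rather than part of the description above; in practice this is just conversation, not code.

```python
def veto_method(group, suggest, vetoes):
    """The 'veto method' as I read it: members suggest restaurants, any member
    may veto a suggestion, but the vetoing member must immediately suggest an
    alternative. Repeat until a suggestion survives with no veto.

    suggest(member, rejected) -> a restaurant not in `rejected`
    vetoes(member, restaurant) -> True if that member vetoes it
    """
    rejected = set()
    proposer = group[0]
    while True:
        proposal = suggest(proposer, rejected)
        vetoer = next((m for m in group
                       if m is not proposer and vetoes(m, proposal)), None)
        if vetoer is None:
            return proposal        # nobody objects; go eat
        rejected.add(proposal)     # vetoed; the vetoer proposes next
        proposer = vetoer
```

Like the prose version, this can in principle loop forever if people keep vetoing; tracking `rejected` at least prevents re-proposing the same place.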
1NancyLebovitz
I'm not sure that any of those would take less than 10 minutes for a large group. Also, it gets tougher if any in the group have serious dietary or financial constraints.
0Algernoq
Sure:
1. Take a straw poll to see who wants to go get dinner at time X.
2. If "enough" people want to go, they then pick a restaurant...
3. Anyone can make a pitch for one new restaurant that the group should check out.
4. In a group of n, one person suggests n*2/3 possible restaurants to eat dinner at (max. 7).
5. Everyone else, one at a time, may then either pass or name 2/3 of the restaurants named by the person immediately before them.
6. If reservations are required, calls to the restaurants are made when 2 possibilities remain.
7. When only one restaurant is named, the group goes there.
This algorithm is a work in progress (a rough code sketch of the narrowing-down steps follows below).
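A minimal sketch of steps 4-7 in Python. How to round "2/3 of the restaurants" and the omission of the "pass" option are my simplifications, not part of the description above:

```python
def narrow_down(shortlist, diners, keep_some):
    """Each diner in turn keeps roughly 2/3 of the previous person's list
    until a single restaurant remains.

    keep_some(diner, options, k) -> the k options that diner keeps
    """
    i = 0
    while len(shortlist) > 1:
        k = max(1, round(len(shortlist) * 2 / 3))   # rounding is an assumption
        shortlist = keep_some(diners[i % len(diners)], shortlist, k)
        i += 1
    return shortlist[0]
```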
0Prismattic
Not a heuristic, but I would suggest an auction. Example: you have 5 people; A and B want seafood, C wants Thai, D wants Mexican, and E wants steak. E: "I'll pay for 1% of everyone else's bill if we get steak." A: 2%, seafood. C: 3%, Thai. B: 4%, seafood. (All pass.) Result: A + B get the food they want, but C, D, and E pay less (with B picking up 2.67% of their bills and A picking up 1.33%). There are edge cases where this doesn't necessarily work well (e.g. someone with a severe food allergy gets stuck bidding a large amount to avoid getting poisoned), but overall I think it functions somewhat similarly to yootling.
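If I'm reading the example right (this is my interpretation of the settlement, not something spelled out above), the winning side pays the highest standing bid, split among the co-winners in proportion to their individual bids. A sketch:

```python
def settle(bids, winning_cuisine):
    """bids: {person: (cuisine, percent_offered)}.
    Returns the percentage of each other diner's bill covered by each member
    of the winning side, assuming the winners pay the highest standing bid,
    split in proportion to their own bids (my reading of the A/B seafood
    example; the actual mechanism may differ)."""
    winners = {p: b for p, (c, b) in bids.items() if c == winning_cuisine}
    payment = max(b for _, b in bids.values())   # highest bid on the table
    total = sum(winners.values())
    return {p: round(payment * b / total, 2) for p, b in winners.items()}

# Reproduces the example: A covers 1.33% and B covers 2.67% of the others' bills.
print(settle({"A": ("seafood", 2), "B": ("seafood", 4),
              "C": ("thai", 3), "E": ("steak", 1)}, "seafood"))
```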
1ChristianKl
Which LW meetup did you visit? The LWers I met in Berlin and at the European mega meetup were generally nice and didn't pressure other people into giving justifications for preferences. There might be some curious questions when someone doesn't understand why someone else is doing what they are doing, but I didn't witness anything I would label as pressuring, even towards people running around with Crocker's Rules tags.
2kalium
Not an actual meetup, but some people I knew from college who happened to be LWers/rationalists.

This is going to sound like a stupid excuse... okay, instead of the originally planned excuse, let me just give you an example of what happened to me a week or two ago...

I wrote an introductory article about LW-style rationality in Slovak on a website where it quickly got 5000 visitors. (link) About 30 of them wrote something in a discussion below the article, some of them sent me private messages about how they like what I wrote, and some of them "friended" me on Facebook.

The article was mostly about how reality exists, how the map is not the territory, and how politics is the mindkiller. It gave specific examples of politics being the mindkiller, and mentioned the research on how political opinions reduced subjects' math abilities.

One guy who "friended" me because of this article... when I looked at his page, it was full of political conspiracy theories. He published a link to some political conspiracy theory article every few hours. (Judging from the context, he meant it seriously.) When I had him briefly in the friend list (because I clicked "okay" without checking his page first), my Facebook homepage turned mostly to a list of consp... (read more)

3kalium
I don't think all or even most LWers exhibit the behavior I'm complaining about. But I think people who do it are attracted to LW/rationalism, and it bothers me enough that I'm willing to give up completely on a community after I've seen it from a few people there.
0ChristianKl
I think "true" LW members are people who go to meetups or who participate on LW by writing comments.
0ChristianKl
There are plenty of people who call themselves rationalists who have no relationship to LW. Given that you have 636 karma, you might even have a stronger bond to LW than they do.
3kalium
That is probably true, though at least one of the people in question now attends meetups in my area. In fact I even got a job through LW. On the other hand I don't feel like I use this site socially. I don't have conversations with other users, or remember which users have been friendly or hostile to me. I never even met any of my employers. I just get an urge to nitpick, or shout into the void, or point out facts, or read interesting articles, and so I come here.

So, specifically with respect to the "cult" and "elitist" observations I see, in general, I would like to offer a single observation:

"Tsuyoku naritai" isn't the motto of someone trying to conform to some sort of weird group norm. It's not the motto of someone who hates people who have put in less time or effort than himself. It's the recognition that it is possible to improve, and the estimation that improving is a worthwhile investment.

If your motivation for putting intellectual horsepower into this site isn't that, I'd love to hear ... (read more)

0Algernoq
Yup, I'm all about continuous improvement, or at least try to be.

Your criticism of rationality for not guaranteeing correctness is unfair because nothing can do that. Your criticism that rationality still requires action is equivalent to saying that a driver's license does not replace driving, though many Less Wrongers do overvalue rationality, so I guess I agree with that bit. You do, however, seem to make a big mistake in buying into the whole fact-value dichotomy, which is a fallacy since at the fundamental level only objective reality exists. Everything is objectively true or false, and the fact that rationality canno... (read more)

-1Algernoq
I agree. My concern is that LW claims to be "less wrong" than it is. A third possibility is "undecidable" (as in Gödel incompleteness). There's something weird going on with consciousness that may resolve this question once understood.
0Sophronius
I don't really understand your objection. When I say that everything is objectively true or false, I mean that any particular thing is either part of the universe/reality at a given point in time/space or it isn't. I don't see any other possibility*. Perhaps you are confusing the map and the territory? It is perfectly possible to answer questions with "I don't know" or "mu", but that doesn't mean that the universe itself is in principle unknowable. The fact that consciousness is not properly understood yet does not mean that it occupies a special state of existing/not existing: we are the ones who are confused, not the universe. *OK, my brain just came up with another possibility, but it's irrelevant to the point I'm making.
2Algernoq
I think we are in agreement that rational decision-making is usually valuable, and that some people sometimes cite rationality in order to give false weight to their opinions. To continue your analogy, I'm saying that studying the rules of the road ceases to be a good use of time for most people once a basic driver's license is earned, even if it can slightly reduce accident risk. The possibility of upvotes while having this discussion is making me reconsider. The universe could be fundamentally unknowable, though this possibility doesn't seem very useful.

I feel like everyone in this community has ridiculous standards for what the community should look like in order to be considered a success. Considering the demographics Less Wrong pulls from, I consider LW to be the experimental group, with r/atheism as the control group.

0SanguineEmpiricist
Agreed.
[-][anonymous]80

I basically agree with this post, with some exceptions like:

My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials

But for the moment I will keep reading LessWrong sometimes. This is because of useful guides like "Lifestyle interventions to increase longevity" and "Political Skills which Increase Income", and because the advice I've gotten here has often been better than on Quora. And I do like the high-quality, evidence-based discussion of charitable/social interventions.

Rationality doesn't guarantee correctness

That's a strawman. I don't think a majority of LW thinks that's true.

In particular, AI risk is overstated

The LW consensus on the matter of AI risk isn't that it's the biggest X-risk. If you look at the census you will find that different community members think different X-risks are the biggest and more people fear bioengineered pandemics than an UFAI event.

LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to drop out of their PhD progr

... (read more)
[-][anonymous]70

Yeah, this is all true. In any helpful community, there will be some drawbacks and red flags. The question is always whether engaging in the community is the highest expected value you can get. For most people, I think the answer is obviously no.

Less Wrong should really be viewed as an amusing diversion, which can be useful in certain situations (this weekend I did calibration training; it would have been hard to find people who wanted to join without LW). I think people for the most part aren't on here because they think this is the absolute best use of their time, or that it's a perfect community that has no drawbacks or flaws.

To be fair, most online communities aren't an especially good use of your time if you're an ambitious, driven person.

0[anonymous]
Was it something I said? This seems to have a surprising number of downvotes.

Replacing my original comment with this question:

What has LessWrong done for you?

We talk about strengthening the community, etc. But what does LW actually do? What do LWers get out of it? What about value vs. time spent with LW? E.g., if you got here in 2011, was most of the value concentrated in 2011? Has it trickled out over time?

Do we accomplish things? Are we some kinda networking platform for pockets of smart people spread out across the globe? Do we contribute to the world in any way other than encouraging people to donate money responsibly?

This is not... (read more)

Do we contribute to the world in any way other than encouraging people to donate money responsibly?

You say that like it isn't a big contribution.

Do we accomplish things? Are we some kinda networking platform for pockets of smart people spread out across the globe? Do we contribute to the world in any way other than encouraging people to donate money responsibly?

Have you read the monthly bragging threads?

8Viliam_Bur
Different people may get different things. For example, I am very picky about people I spend my time with (always was, even before I found LW), and organizing local meetups helped me meet a few interesting people. As a side effect of LW, I stopped debating politics, which saves a lot of time and negative emotions; now I can spend the time on getting some free internet education, which so far hasn't brought me further benefits, but at least feels better. But I imagine other people can get different things, and what I wrote here may be irrelevant for them.
7Algernoq
I get a feeling that I am smart and special. I also get interesting discussions/ideas. I also get distracted for hours.

"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

Science follows objective evidence. You're not allowed to publish a paper where you conclude something based on a hunch, because anyone can claim they have a hunch. You can only do science with evidence that is undeniable. Not undeniably strong. You only need p = 0.05. But it has to be unquestionable that there really are those 4.3 bits of evidence.

Rationality follows subjective evidence. There often simply isn't enough... (read more)
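(For anyone wondering where the 4.3 comes from: presumably it's the p-value converted into bits, log2(1/0.05) ≈ 4.32, i.e. a result significant at p = 0.05 carries roughly 4.3 bits of evidence against the null.)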

-4Algernoq
I would equate rationality with logic. Thus, the (subjective) priors are an input to rationality. LW Rationality appears to mix in a few subjective priors with the rationality.
4philh
That's not what the word usually means on this site. You seem to be simultaneously objecting that (a) your idea of rationality is not optimal, and (b) LW rationality doesn't perfectly follow your idea of rationality.
2DanielLC
I wasn't talking about priors. If you have a hunch because something is simpler, then that would be priors, but if you have a hunch because you've been subconsciously collecting evidence too vague to be put into words, then reality is causing the hunch, so it's just evidence.

"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

Ockham's razor is inherently an aesthetic principle. Between two explanations that both explain the data you have equally well, you prefer one explanation over the other. Aesthetics matters in theoretical physics as a guiding principle.

A skill such as noticing confusion is also not directly about objective evidence.

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally.

Out of curiosity, which meetup group was it, and what was that meetup like?

[-]V_V10

I read LessWrong primarily for entertainment value, but I share your concerns about some aspects of the surrounding culture, although in fairness it seems to have got better in recent years (at least as far as is apparent from the online forum; I don't know about live events).
Specifically my points of concern are:

  • The "rationalist" identity: It creates the illusion that by identifying as a "rationalist" and displaying the correct tribal insignia you are automatically more rational, or at least "less wrong" than the outside

... (read more)

In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer-reviewed journal or conference.

http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/ :

We’ve released a new paper recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al.

http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/ :

We’ve released a new working paper by Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence.” [...]

Update 05/14/14: This paper has been accepted to AGI-14.

8V_V
Didn't know about that. Thanks for the update.
7[anonymous]
We only really agree on the first point. I'm skeptical of CFAR and the ritual crew but don't find these supposed comparisons to be particularly apt. I've watched MIRI improve their research program dramatically over the past four years, and expect it to improve. Yes, obviously they had some growing pains in learning how to publish, but everyone who tries to do publishable work goes through that phase (myself included). I'm not on board with the fifth point: Well, 27.5% have a favorable opinion. The prior for it actually working seems optimistic but not overly so ("P(Cryonics): 22.8 + 28 (2, 10, 33) [n = 1500]"). At the least I'd say it's a controversial topic here, for all the usual reasons. (No, I'm not signed up for cryonics. No, I don't think it's very likely to work.) Most of the comments on What is the evidence in favor of paleo? are skeptical. The comment with highest karma is very skeptical. Lukeprog said he's skeptical and EY said it didn't work for him. Not really sure what you're referring to. Surprised you didn't bring up MWI; that's the usual hobby horse for this kind of criticism.
0V_V
Ok. I agree that it improved dramatically, but only because the starting point was so low. In recent years they released some very technical results. I think that some are probably wrong or trivial while others are probably correct and interesting, but I don't have the expertise to properly evaluate them, and this probably applies to most other people as well, which is why I think MIRI should seek peer-review by independent experts. As I said, these beliefs aren't necessarily held by a majority of lesswrongers, but are unusually common. MWI isn't pseudo-scientific per se. However, the claim that MWI is obviously true and whoever thinks otherwise must be ignorant or irrational is.
[-][anonymous]150

I agree that it improved dramatically, but only because the starting point was so low.

The starting point is always low. Your criticism applies to me, a mainstream applied-mathematics graduate student.

  • I started research in my area around 2009.
  • I have two accepted papers, both of which are relatively technical but otherwise minor results.

I also wasn't working on two massive popularization projects, obtaining funding, courting researchers (well, I flirted a little bit) and so on.

Applied math is widely regarded as having a low barrier to publication, with acceptable peer-review times in the six to eighteen month range. (Anecdote: My first paper took nine months from draft to publication; my second took seven months so far and isn't in print yet. My academic brother's main publication took twenty months.) I think it's reasonable to consider this a lower bound on publications in game theory, decision theory, and mathematical logic.

Considering this, even if MIRI had sought to publish some of their technical writings in independent journals, we probably wouldn't know if most of them had been either accepted or rejected by now. If things don't change in five years, then I'll concede that their research program hasn't been particularly effective.

2XiXiDu
I use the term "new rationalism".
2David_Gerard
I'd still really love a better term than that. One that doesn't use the R-word at all, if possible. ("Neorationalism" is tempting but similarly well below ideal.)
2Richard_Kennaway
"Pseudo-rationalism." Since that is exactly what is being claimed about it, one might as well put it in the name. It does use the R-word, but only to negate it, which is the point. "New rationalism" suggests there is something wrong with actually being rational, which I hope isn't anyone's intention in this thread.
2David_Gerard
Trouble is that it echoes "pseudoskeptic", which is a term that should be useful but is overwhelmingly used only by those upset at their personal toe being stepped on ("critiquing me? You're doing skepticism wrong!"), to the point where it's a pretty useful crank detector.
5Richard_Kennaway
That is not a problem with the word but the thing. It does not matter what opposition to bad skepticism is called. If it exists as a definite idea, it will acquire a name, and whatever name it is called by will be used in that way. "New rationalism" is even worse: the name suggests not that there is such a thing as bad reasoning, but that reasoning is bad. Perhaps a better idea would be to not call it anything, nor make of it a thing. Instead, someone dissatisfied with how it is being done on LW might more fruitfully devote their energies to demonstrating how to do it better.
-1[anonymous]
Well, isn't that a self-evidently dangerous heuristic. ("Critiquing me? You're just doing the calling-me-a-pseudoskeptic crank behavior!")
0ChristianKl
I don't think that either armchair evopsych or the paleo movement is characterised by meta reasoning. Most individuals who believe in those things aren't on LW.
0ChristianKl
What exactly do you mean by "buying into it"? I think there are places on the internet with a lot more armchair evopsych than LW. Could you provide a link? I'm not aware of that ritual on LW, if you mean something more than encouraging people to admit when they are wrong.
2V_V
Sure, but I'd expect that a community devoted to "refining the art of human rationality" would be more skeptical of that type of claim. Anyway, I'm not saying that LessWrong is a terribly diseased community. If I thought it was, I wouldn't be hanging around here. I was just expressing my concerns about some aspects of the local culture. https://www.google.com/search?q=less+wrong+ritual&ie=utf-8&oe=utf-8#channel=fs&q=ritual+report+site:lesswrong.com http://lesswrong.com/lw/9aw/designing_ritual/ And in particular the "Schelling Day", which bothers me the most: http://lesswrong.com/lw/h2t/schelling_day_a_rationalist_holiday/
2ChristianKl
In that case I think you overrate the amount of energy the average person in the community invests in it. LW is very diverse as far as opinions go. I myself dislike certain talk about signaling where armchair evopsych sometimes appears, but the idea of signaling is rooted in game theory. There are also people on LW who do read real evopsych and make arguments on that basis. I wasn't aware of Schelling Day.

Rationality doesn't guarantee correctness.

I think this point kind of corrupts what LW would generally call rationality. The rational path is the path that wins, and this is mentioned constantly on LW.

Overall though, I think this is a decent critique.

ETA: I want to expand on my point. In your example about planning a car trip, spending 25% of your time to shave 5% off your driving time is not what LW would call rationality.

You say "Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't". ... (read more)

6Sophronius
To be fair, Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless. And then all the connotations of the word still slip in, of course. It's a cheap tactic also used in the social justice movement, one which Yvain recently criticized on his blog (motte and bailey, I think it was called).

To clarify what I mean, take the following imaginary conversation:

Less Wronger: Hey! You seem smart. You should consider joining the Less Wrong community and learn to become more rational like us!
Normal: (using definition: Rationality means using cold logic and abstract reasoning to solve problems) I don't know, rationality seems overrated to me. I mean, all the people I know who are best at using cold logic and abstract reasoning to solve problems tend to be nerdy guys who never accomplish much in life.
Less Wronger: Actually, we've defined rationality to mean "winning", or "winning on purpose", so more rationality is always good. You don't want to be like those crazy normals who lose on purpose, do you?
Normal: No, of course I want to succeed at the things I do.
Less Wronger: Great! Then since you agree that more rationality is always good, you should join our community of nerdy guys who obsessively use cold logic and abstract reasoning in an attempt to solve their problems.

As usual with the motte and bailey, only the desired definition is used explicitly. However, the connotations of the second, mundane use of the word slip in.

To be fair, Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless.

In my experience, the problem is not with disagreeing, but rather that most people won't even consider the LW definition of rationality. They will use the nearest cliche instead, explain why the cliche is problematic, and that's the end of rationality discourse.

So, for me the main message of LW is this: A better definition of rationality is possible.

3DanielLC
It's not a different definition of rationality. It's a different word for winning. If they're not willing to use "rationality" that way, then just abandon the word.
1savageorange
We don't just use 'winning' because, well... 'winning' can easily work out to 'losing' in real-world terms. (Think of a person who alienates everyone they meet through their extreme competitiveness. They are focused on winning, to the point that they sacrifice good relations with people. But this is both a) not what is meant by 'rationalists win' and b) a highly accessible definition of winning - the naive "Competition X exists. Agent A wins, Agent B loses".) That naive sense is VASTLY more accessible than 'achieving what actually improves your life, as opposed to what you merely want or are under pressure to achieve'. I'd like to use the word 'winning', but I think it conveys even less of the intended meaning than 'rationality' to the average person.
8ArisKatsaris
Yvain criticized switching definitions depending on whether you want to defend an easily defensible position or have others accept an untenable position. With LessWrong's definition of rationality (epistemic rationality being the ability to arrive at true beliefs, instrumental rationality the ability to know how to achieve your goals), how is that happening?
6Luke_A_Somers
So what's the bailey, here? You make it seem like having obviously true premises is a bad thing. Note, a progressive series of less firmly held claims is NOT Motte and Bailey, if you aren't vacillating on what each means.
2DanielLC
It's a problem if anyone ends up sneaking in connotations.
5Luke_A_Somers
Yes, that's what an example would look like. Can anyone provide any?
2Algernoq
To paraphrase someone else's example, the motte is that science/reason helps people be right, and the bailey is that the LW memeplex is all correct and the best use of one's time (the memeplex including maximum support of abstract research about "friendly" AI, frequent attendance of LW self-help events, cryonics, and evangelizing Rationalism).
0Luke_A_Somers
Here's the problem with your attempting to apply Motte and Bailey to that: If challenged on those other things, we do not reply that 'rationalism is just science/reason helps people be right, how could you possibly oppose it?' Well, except for the last, which really seems like it actually addresses the problem. So, it's just a perfectly ordinary (and acceptable) sequence of progressively more controversial claims, and not a Motte-and-Bailey system.
2Algernoq
Different members act as different parts of the motte and bailey: some argue for extreme things; others say those extreme things are not "real" Rationalism.
0Luke_A_Somers
That structure makes it not motte and bailey - the motte must be friendly to the bailey, not hostile to it!
3Dustin
What do you mean exactly by "specifically designed"? Anyway, I don't disagree with you exactly. My original point was not that the LW definition of rationality was a good or bad definition, but that the definition Algernoq was asserting as the LW consensus definition of rationality was probably not actually true. ETA: I'm also not sure that I agree with you about the definition being useless, as I think the LW definition seems designed specifically to counter thinking that leads to someone spending 25% of their time planning a car trip to save 5%. By explicitly stating that rationality is about winning, it helps to not get bogged down in the details and to remember what the point is. Whether or not the definition that has arisen is explicitly designed with that in mind, I can't say.
0Jiro
I don't understand this. You're saying that people spend 25% of their time planning the trip, and save 5% of their time on the trip? (Which is bad, but I doubt it's that common.) Or they spend 25% of their time on the trip, and they plan to save 5% of their time on something else? (Which I also doubt is that common.) Or that they spend 25% of their time on the trip, and they plan to save 5% of something else, like money? (Which may or may not be bad, depending on how time translates to money.) This does sound a little bit like the complaint that people spend 25% of the price of something (rather than of the time) on a car trip to save 5% on the price, but I've argued that that's a form of precommitting: as long as you precommit to buy at the store with the lowest price even if it's far away, nearby stores have an incentive to keep prices low.
2A1987dM
But if you take into account both price and location when deciding where to shop, stores will have an incentive not only to keep prices low but also to be near where people are!
0Jiro
Stores can't move closer to where all the people are, however; at some point any incentives from moving close to some people would be countered by moving away from other people. There's also the problem that past a certain density, stores do better when farther away from other stores. Not to mention the transaction costs of moving in the first place. Prices don't have these problems.
0Algernoq
All I'm saying is it looks like many people are being Rational because it's fun, not because it's useful.
0Dustin
I'm not particularly saying anything as I was just referring to the concept introduced in the main post. You'll have to ask Algernoq as to what the specific intention was.
[-][anonymous]-30

Rationality doesn't guarantee correctness.

What does? If there's a better way, we'd love to hear it. That's not sarcasm. It's the only thing of interest around here.

Many LWers are not very rational.

Now that's just mean.

[This comment is no longer endorsed by its author]
3[anonymous]
Redacted my post. Doesn't add to the conversation. Do, however, try not to conflate LessWrong with "rationality." Rationality is a method of approaching cognitive algorithms. LessWrong is a community that happens to focus on these methods a lot. Conflating them is like conflating the Democratic party with socialism (to choose a flippant, possibly ill-advised example). It makes a caricature of the former and diminishes the latter.