I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go.
That's further than I go. Heck, what else is there, and why worry about whether you're going there or not?
I have also translated the Sequences, and organized a couple of meetups. :)
Here are some other things someone could do to go further:
Actually, PJ, I do consider your contributions to motivation and fighting akrasia very valuable. I wish they could someday become part of an official rationality training (the hypothetical kind of training that would produce visible awesome results, instead of endless debates about whether LW-style rationality actually changes anything).
- join a polyamorous community;
- start a local polyamorous community;
Seriously? What does that have to do with anything?
In my experience with the LW community, they see polyamory as an equally valid alternative to monogamy. Many practice, many don't, and poly people include those with children and those without.
Affirm. It touches on cognitive skills only insofar as mild levels of "resist conformity" and "notice what your emotions actually are" are required for naturally-poly people to notice this and act on it (or for naturally-mono or okay-with-either people to figure out what they are if it ever gets called into question), and mild levels of "calm discussion" are necessary to talk about it openly without people getting indignant at you. Poly and potential poly people have a standard common interest in some rationality skills, but figuring out whether you're poly and acting on it seems to me like a very bounded challenge -- like atheism, or making fun of homeopathy, it's not a cognitive challenge around which you could build a lasting path of personal growth.
While I've remained monogamous myself, it's purely for time and efficiency reasons.
Worst Valentine's Day card ever.
I feel like the more important question is: how specifically has LW succeeded in making this kind of impression on you? I mean, are we so bad at communicating our ideas? Because many things you wrote here seem to me like quite the opposite of LW. But there is a chance that we really are communicating things poorly, and somehow this is an impression people can get. So I am not really concerned about the things you wrote, but rather about the fact that someone could get this impression. Because...
Rationality doesn't guarantee correctness.
Which is why this site is called "Less Wrong" in the first place. (Instead of e.g. "Absolutely Correct".) In many places in the Sequences it is written that, unlike the hypothetical perfect Bayesian reasoner, humans are pretty lousy at processing available evidence, even when we try.
deciding what to do in the real world requires non-rational value judgments
Indeed, this is why a rational paperclip maximizer would create as many paperclips as possible. (The difference between irrational and rational paperclip maximizers is that the latter has a better model of the world, and thus probably succeeds in creating more paperclips on average...
WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out of their PhD program to go to LW "training camps" (which, by the way, don't take a few months).
When I visited MIRI one of the first conversations I had with someone was them trying to convince me not to pursue a PhD. Although I don't know anything about the training camp part (well, I've certainly been repeatedly encouraged to go to a CFAR camp, but that is only a weekend and given that I teach for SPARC it seems like a legitimate request).
Convincing someone not to pursue a PhD is rather different than convincing someone to drop out of a top-10 PhD program to attend LW training camps. The latter does indeed merit the response WTF.
Also, there are lots of people, many of them graduate students and PhDs themselves, who will try to convince you not to do a PhD. It's not an unusual position.
I mean, are we so bad at communicating our ideas?
I find this presumption (that the most likely cause for disagreement is that someone misunderstood you) to be somewhat abrasive, and certainly unproductive (sorry for picking on you in particular, my intent is to criticize a general attitude that I've seen across the rationalist community and this thread seems like an appropriate place). You should consider the possibility that Algernoq has a relatively good understanding of this community and that his criticisms are fundamentally valid or at least partially valid. Surely that is the stance that offers greater opportunity for learning, at the very least.
I certainly considered that possibility and then rejected it. (If there are more than 2 regular commenters here who think that rationality guarantees correctness and will solve all of their life problems, I will buy a hat and then eat it).
I have come across serious criticism of the PhD programs at major universities, here on LW (and on OB). This is not quite the same as a recommendation not to enroll in a PhD, and it most certainly is not the same as a recommendation to quit an ongoing PhD track, but I definitely interpreted such criticism as advice against taking such a PhD. Then again, I have also heard similar criticism from other sources, so it might well be a genuine problem with some PhD tracks.
For what it's worth, here are my personal experiences with the list of main points (not sure if this should be a separate post, but I think it is worth mentioning):
Rationality doesn't guarantee correctness.
Indeed, but as Villiam_Bur mentions, this is way too high a standard. I personally notice that, while not always correct, I am certainly correct more often thanks to the ideas and knowledge I found at LW!
In particular, AI risk is overstated
I am not sure but I was under the impression that your suggestion of 'just building some AI, it doesn't have to be perfect right away' is the thought that researchers got stuck on last century (the problem being that even making a dumb prototype was insanely complicated), when peopl...
I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut'n'pasted the leader's opinions into their heads. And that's definitely something that happens around LW.
The failure mode might be that it's not obvious that an autodidact who spent a decade absorbing relevant academic literature will have a very different expressive range than another autodidact who spent a couple months reading the writings of the first autodidact. It's not hard to get into the social slot of a clever outsider because the threshold for cleverness for outsiders isn't very high.
The business of getting a real PhD is pretty good at making it clear to most people that becoming an expert takes dedication and work. Internet forums have no formal accreditation, so there's no easy way to distinguish between "could probably write a passable freshman term paper" knowledgeable and "could take some months off and write a solid PhD thesis" knowledgeable, and it's too easy for people in the first category to be unaware how far they are from the second category.
The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event is something I heard secondhand from a friend. I will edit my post to avoid spreading rumors, but I trust the source.
If it did happen, then I want to know that it happened. It's just that this is the first time I even heard about a month-long LW event. (Which may be a statement about my ignorance -- EDIT: it was, indeed -- since till yesterday I didn't even know SPARC takes two weeks, so I thought one week was the maximum for an LW event.)
I heard a lot of "quit school, see how successful and rich Zuckerberg is" advice, but it was all from non-LW sources.
I can imagine people at some LW meetup giving this kind of advice, since there is nothing preventing people with opinions of this kind from visiting LW meetups and giving advice. It just seems unlikely, and it certainly is not the LW "crowd wisdom".
Here's the program he went to, which did happen exactly once. It was a precursor to the much shorter CFAR workshops: http://lesswrong.com/lw/4wm/rationality_boot_camp/
That said, as his friend I think the situation is a lot less sinister than it's been made out to sound here. He didn't quit to go to the program; he quit a year or so afterwards to found a startup. He wasn't all that excited about his PhD program and he was really excited about startups, so he quit and founded a startup with some friends.
LW has a cult-like social structure. ...
Where the evidence for this is:
Appealing to people based on shared interests and values. Sharing specialized knowledge and associated jargon. Exhibiting a preference for like minded people. More likely to appeal to people actively looking to expand their social circle.
Seems a rather gigantic net to cast for "cults".
Well, there's this:
However, involvement in LW pulls people away from non-LWers.
But that is similarly gigantic -- on this front, in my experience LW isn't any worse than, say, joining a martial arts club. The hallmark of cultishness is that membership is contingent on actively cutting off contact with non-cult members.
Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.
Art in the other sense of the word. Think more along the lines of skills and practices.
I think "art" here is mainly intended to call attention to the fact that practical rationality's not a collection of facts or techniques but something that has to be drilled in through deliberate long-term practice: otherwise we'd end up with a lot of people that can quote the definitions of every cognitive bias in the literature and some we invented, but can't actually recognize when they show up in their lives. (YMMV on whether or not we've succeeded in that respect.)
Some of the early posts during the Overcoming Bias era talk about rationality using a martial arts metaphor. There's an old saying in that field that the art is 80% conditioning and 20% technique; I think something similar applies here. Or at least should.
(As an aside, I think most people who aren't artists -- martial or otherwise -- greatly overstate the role of talent and aesthetic invention in them, and greatly underestimate the role of practice. Even things like painting aren't anywhere close to pure aesthetics.)
Would it be fair to characterize most of your complaints as roughly "Less Wrong focuses too much on truth seeking and too little on instrumental rationality - actually achieving material success"?
In that case, I'm afraid your goals and the goals of many people here may simply be different. The common definition of rationality here is "systematic winning". However, this definition is very fuzzy because winning is goal dependent. Whether you are "winning" is dependent on what your goals and values are.
Can't speak for anyone else, but the reason I am here is that I like polite but vigorous discussion. It's nice to be able to discuss topics with people on the internet in a way that does not drive me crazy. People here are usually open to new ideas, respectful, yet also uncompromising in the force of their arguments. Such an environment is much more helpful to me in learning about the world than the adversarial nature of most forum discussions. My goal in reading LessWrong is mostly finding like-minded people who I can talk to, share ideas with, learn from, and disagree with, all without any bad feelings. That is a rare thing.
If your goal is achieving material success there are certainly very general tools and skills you can learn like getting over procrastination, managing your emotional state, or changing your value system to achieve your goals. CFAR...
Note that opinions differ on this topic, e.g. someone recently referred to LW as a "signaling and self-help cesspit" and got upvoted. Personally, I like seeing self-help stuff and I would encourage you to be the change you want to see :)
[I]nvolvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. [...] LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.
I think you've got the causation going the wrong way here. LW does target a lot of socially awkward intellectuals. And a lot of LWers do harbor some contempt for their "less rational" peers. I submit, however, that this is not because they're LWers but rather because they're socially awkward intellectuals.
American geek culture has a strong exclusionist streak: "where were you when I was getting beaten up in high school?" Your average geek sees himself (using male pronouns here because I'm more familiar with the male side of the culture) as smarter and morally purer than Joe and Jane Sixpack -- who by comparison are cast as lunkish, thoughtless, cruel, but attractive and socially successful -- and as having suffered for that, which in turn justifies treating th...
involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals.
Alternative hypothesis: Once a certain kind of person realizes that something like the LW community is possible and even available, they will gravitate towards it - not because LW is cultish, but because the people, social norms, and ideas appeal to them, and once that kind of interaction is available, it's a preferred substitute for some previously engaged-in interaction. From the outside, this may look like contempt for Normals. But from personal experience, I can say that from the inside it feels like you've been eating gruel all your life, and that's what you were used to, but then you discovered actual delicious food and don't need to eat gruel anymore.
Yes, it's rather odd to call a group of like-minded people a cult because they enjoy and prefer each other's company.
In grad school I used to be in a couple of email lists that I enjoyed because of the quality of the intellectual interaction and the topics discussed, one being Extropians in the 90s. I'd given that stuff up for a long time.
Got back into it a little a few years ago. I had been spending time at a forum or two, but was getting bored with them primarily because of the low quality of discussion. I don't know how I happened on HPMOR, but I loved it, and so naturally came to the site to take a look. Seeing Jaynes, Pearl, and The Map is not the Territory served as good signaling to me of some intellectual taste around here.
I didn't come here and get indoctrinated - I saw evidence of good intellectual taste and that gave me the motivation to give LW a serious look.
This is one suggestion I'd have for recruiting. Play up canonical authors more. Jaynes, Kahneman, and Pearl convey so much more information than "Bayesian analysis", "cognitive biases", and "causal analysis". None of those guys are the be-all and end-all of their respective fields, but identifying them plants a flag where we see value that can attract similarly minded people.
I've debated myself about writing a detailed reply, since I don't want to come across as some brainwashed LW fanboi. Then I realized this was a stupid reason for not making a post. Just to clarify where I'm coming from.
I'm in more-or-less the same position as you are. The main difference being that I've read pretty much all of the Sequences (and am slowly rereading them) and I haven't signed up for cryonics. Maybe those even out. I think we can say that our positions on the LW - Non-LW scale are pretty similar.
And yet my experience has been almost completely opposite of yours. I don't like the point-by-point response on this sort of thing, but to properly respond and lay out my experiences, I'm going to have to do it.
Rationality doesn't guarantee correctness.
I'm not going to spend much time on this one, seeing as how pretty much everyone else commented on this part of your post.
Some short points, though:
...Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. This is in a part of the Sequences you've probably hav...
Hi Algernoq,
Thanks for writing this. This sentence particularly resonated:
LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills).
I was definitely explicitly discouraged from pursuing a PhD by certain rationalists and I think listening to their advice would have been one of the biggest mistakes of my life. Unfortunately I see this attitude continuing to be propagated so I am glad that you are speaking out against it.
EDIT: Although, it looks like you've changed my favorite part! The text that I quoted above was not the original text (which talked more about dropping out of a PhD and starting a start-up).
I interpreted "the best (funded) PhD program you got into" to mean 'the best PhD program that offered you a funded place', rather than 'the best-funded PhD program that offered you a place'. So Algernoq's advice need not conflict with yours, unless he did mean 'best' in a very narrow sense.
Thanks for being bold enough to share your dissenting views. I'm voting you up just for that, given the reasoning I outline here.
I think you are doing a good job of detaching the ideas of LW that you think are valuable, adopting them, and ditching the others. Kudos. Overall, I'm not sure about the usefulness of debating the goodness or badness of "LW" as a single construct. It seems more useful to discuss specific ideas and make specific criticisms. For example, I think lukeprog offered a good specific criticism of LW thinking/social norms here. In general, if people take the time to really think clearly and articulate their criticisms, I consider that extremely valuable. On the opposite end of the spectrum, if someone says something like "LW seems weird, and weird things make me feel uncomfortable", that is not as valuable.
I'll offer a specific criticism: I think we should de-emphasize the sequences in the LW introductory material (FAQ, homepage, about page). (Yes, I was the one who wrote most of the LW introductory material, but I was trying to capture the consensus of LW at the time I wrote it, and I don't want to change it without the change being a consensus ...
Where available, I would emphasize the original source material over the sequence rehash of them.
This would greatly lower the Phyg Phactor, limit in-group jargon, better signal to outsiders who also value that source material, and possibly create ties to other existing communities.
Where available, I would emphasize the original source material over the sequence rehash of them.
Needed: LW wiki translations of LW jargon into the proper term in philosophy. (Probably on the existing jargon page.)
I strongly disagree with this. I don't care about cult factor: The sequences are vastly more readable than the original sources. Almost every time I've tried to read stuff a sequence post is based on I've found it boring and given up. The original sources already exist and aren't attracting communities of new leaders who want to talk about and do stuff based on them! We don't need to add to that niche. We are in a different niche.
I read LW for entertainment, and I've gotten some useful phrases and heuristics from it, but the culture bothers me (more what I've seen from LWers in person than on the site). I avoid "rationalists" in meatspace because there's pressure to justify my preferences in terms of a higher-level explicit utility function before they can be considered valid. People of similar intelligence who don't consider themselves rationalists are much nicer when you tell them "I'm not sure why, but I don't feel like doing xyz right now." (To be fair, my sample is not large. And I hope it stays that way.)
FWIW, I have the opposite experience with online versus offline.
I avoid "rationalists" in meatspace because there's pressure to justify my preferences in terms of a higher-level explicit utility function before they can be considered valid.
It wouldn't surprise me at all to see this on the website, but I wouldn't expect it to happen in meatspace.
(Obviously meetups vary, but I help organize the London meetup, I went to the European megameetup, I went to CFAR, and I've spent a small amount of time with the SF/Berkeley crowd.)
This is going to sound like a stupid excuse... okay, instead of the originally planned excuse, let me just give you an example of what happened to me a week or two ago...
I wrote an introductory article about LW-style rationality in Slovak on a website, where it quickly got 5000 visitors. (link) About 30 of them wrote something in a discussion below the article, some of them sent me private messages about how they like what I wrote, and some of them "friended" me on Facebook.
The article was mostly about how reality exists, the map is not the territory, and politics is the mindkiller. With specific examples of how politics is the mindkiller, and a mention of the research showing that political opinions reduced subjects' math abilities.
One guy who "friended" me because of this article... when I looked at his page, it was full of political conspiracy theories. He published a link to some political conspiracy theory article every few hours. (Judging from the context, he meant it seriously.) When I had him briefly in the friend list (because I clicked "okay" without checking his page first), my Facebook homepage turned mostly to a list of consp...
So, specifically with respect to the "cult" and "elitist" observations I see, in general, I would like to offer a single observation:
"Tsuyoku naritai" isn't the motto of someone trying to conform to some sort of weird group norm. It's not the motto of someone who hates people who have put in less time or effort than himself. It's the recognition that it is possible to improve, and the estimation that improving is a worthwhile investment.
If your motivation for putting intellectual horsepower into this site isn't that, I'd love to hear ...
Your criticism of rationality for not guaranteeing correctness is unfair, because nothing can do that. Your criticism that rationality still requires action is equivalent to saying that a driver's license does not replace driving, though many Less Wrongers do overvalue rationality, so I guess I agree with that bit. You do, however, seem to make a big mistake in buying into the whole fact-value dichotomy, which is a fallacy since at the fundamental level only objective reality exists. Everything is objectively true or false, and the fact that rationality canno...
I feel like everyone in this community has ridiculous standards for what the community should look like in order to be considered a success. Considering the demographics Less Wrong pulls from, I consider LW to be the experimental group where r/atheism is the control group.
I basically agree with this post, with some exceptions like:
My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials
But for the moment I will keep reading LessWrong sometimes. This is because of useful guides like "Lifestyle interventions to increase longevity" and "Political Skills which Increase Income", and because the advice I've gotten has often been better than on Quora. And I do like the high-quality, evidence-based discussion of charitable/social interventions.
Rationality doesn't guarantee correctness
That's a strawman. I don't think a majority of LW thinks that's true.
In particular, AI risk is overstated
The LW consensus on the matter of AI risk isn't that it's the biggest X-risk. If you look at the census, you will find that different community members think different X-risks are the biggest, and more people fear bioengineered pandemics than a UFAI event.
...LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to drop out of their PhD progr...
Yeah, this is all true. In any helpful community, there will be some drawbacks and red flags. The question is always if engaging in the community is the highest expected value you can get. For most people, I think the answer is obviously no.
Less Wrong should really be viewed as an amusing diversion, which can be useful in certain situations (this weekend I did calibration training; it would have been hard to find people who wanted to join without LW). I think people for the most part aren't on here because they think this is the absolute best use of their time, or that it's a perfect community that has no drawbacks or flaws.
To be fair, most online communities aren't an especially good use of your time if you're an ambitious, driven person.
Replacing my original comment with this question:
What has Lesswrong done for you?
We talk about strengthening the community, etc. But what does LW actually do? What do LWers get out of it? What about value vs. time spent with LW? E.g., if you got here in 2011, was most of the value concentrated in 2011? Has it trickled out over time?
Do we accomplish things? Are we some kinda networking platform for pockets of smart people spread out across the globe? Do we contribute to the world in any way other than encouraging people to donate money responsibly?
This is not...
Do we contribute to the world in any way other than encouraging people to donate money responsibly?
You say that like it isn't a big contribution.
Do we accomplish things? Are we some kinda networking platform for pockets of smart people spread out across the globe? Do we contribute to the world in any way other than encouraging people to donate money responsibly?
Have you read the monthly bragging threads?
"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.
Science follows objective evidence. You're not allowed to publish a paper where you conclude something based on a hunch, because anyone can claim they have a hunch. You can only do science with evidence that is undeniable. Not undeniably strong. You only need p = 0.05. But it has to be unquestionable that there really are those 4.3 bits of evidence.
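(To unpack the arithmetic behind that "4.3 bits" figure, here is a rough sketch that reads the p = 0.05 threshold as a likelihood ratio of about 20:1 -- a loose reading, not a strict equivalence: -log2(0.05) = log2(20) ≈ 4.32 bits.)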
Rationality follows subjective evidence. There often simply isn't enough...
"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.
Ockham's razor is inherently an aesthetic principle. Between two explanations that both explain the data you have equally well, you prefer one explanation over the other. Aesthetics matters in theoretical physics as a guiding principle.
A skill such as noticing confusion is also not directly about objective evidence.
I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally.
Out of curiosity, which meetup group was it, and what was that meetup like?
I read LessWrong primarily for entertainment value, but I share your concerns about some aspects of the surrounding culture, although in fairness it seems to have got better in recent years (at least as far as is apparent from the online forum; I don't know about live events).
Specifically my points of concern are:
The "rationalist" identity: It creates the illusion that by identifying as a "rationalist" and displaying the correct tribal insignia you are automatically more rational, or at least "less wrong" than the outside
In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer reviewed journal or conference.
http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/ :
We’ve released a new paper recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al.
We’ve released a new working paper by Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence.” [...]
Update 05/14/14: This paper has been accepted to AGI-14.
I agree that it improved dramatically, but only because the starting point was so low.
The starting point is always low. Your criticism applies to me, a mainstream, applied mathematics graduate student.
I also wasn't working on two massive popularization projects, obtaining funding, courting researchers (well, I flirted a little bit) and so on.
Applied math is widely regarded as having a low barrier to publication, with acceptable peer-review times in the six to eighteen month range. (Anecdote: My first paper took nine months from draft to publication; my second took seven months so far and isn't in print yet. My academic brother's main publication took twenty months.) I think it's reasonable to consider this a lower bound on publications in game theory, decision theory, and mathematical logic.
Considering this, even if MIRI had sought to publish some of their technical writings in independent journals, we probably wouldn't know if most of them had been either accepted or rejected by now. If things don't change in five years, then I'll concede that their research program hasn't been particularly effective.
Rationality doesn't guarantee correctness.
I think this point kind of corrupts what LW would generally call rationality. The rational path is the path that wins and this is mentioned constantly on LW.
Overall though, I think this is a decent critique.
ETA: I want to expand on my point. In your example about planning a car trip, spending 25% of your time planning to shave 5% off your driving time is not what LW would call rationality.
You say "Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't". ...
To clarify what I mean, take the following imaginary conversation:
Less Wronger: Hey! You seem smart. You should consider joining the Less Wrong community and learn to become more rational like us!
Normal: (using definition: Rationality means using cold logic and abstract reasoning to solve problems) I don't know, rationality seems overrated to me. I mean, all the people I know who are best at using cold logic and abstract reasoning to solve problems tend to be nerdy guys who never accomplish much in life.
Less Wronger: Actually, we've defined rationality to mean "winning", or "winning on purpose", so more rationality is always good. You don't want to be like those crazy normals who lose on purpose, do you?
Normal: No, of course I want to succeed at the things I do.
Less Wronger: Great! Then since you agree that more rationality is always good you should join our community of nerdy guys who obsessively use cold logic and abstract reasoning in an attempt to solve their problems.
As usual with the motte and bailey, only the desired definition is used explicitly. However, the connotations with the second mundane use of the word slip in.
To be fair Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless.
In my experience, the problem is not with disagreeing, but rather that most people won't even consider the LW definition of rationality. They will use the nearest cliche instead, explain why the cliche is problematic, and that's the end of rationality discourse.
So, for me the main message of LW is this: A better definition of rationality is possible.
Rationality doesn't guarantee correctness.
What does? If there's a better way, we'd love to hear it. That's not sarcasm. It's the only thing of interest around here.
Many LWers are not very rational.
Now that's just mean.
A common question here is how the LW community can grow more rapidly. Another is why seemingly rational people choose not to participate.
I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go. In this post, I try to clearly explain why I don't participate more and why some of my friends don't participate at all and have warned me not to participate further.
Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.
In particular, AI risk is overstated. There are a bunch of existential threats (asteroids, nukes, pollution, unknown unknowns, etc.). It's not at all clear if general AI is a significant threat. It's also highly doubtful that the best way to address this threat is writing speculative research papers, because I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and it's necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.
LW has a cult-like social structure. The LW meetups (or, the ones I experienced) are very open to new people. Learning the keywords and some of the cached thoughts for the LW community results in a bunch of new friends and activities to do. However, involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. I imagine the rationality "training camps" do this to an even greater extent. LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.
Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on (X), and sell a long plan that never quite achieves (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should"). Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.
LW membership would make me worse off. Though LW membership is an OK choice for many people needing a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I'm struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I "should" do. LW meetup attendance would work against me in all of these areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills). Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.
"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.
I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don't mind Harry's narcissism) and LW is fun to read, but that's as far as I want to get involved. Unless, that is, there's someone here who has experience programming vision-guided assembly-line robots who is looking for a side project with world-optimization potential.