I don't know much about the community beyond what's evident on LessWrong, but I've often felt like there's an undercurrent here of people tending towards a certain degree of selfishness (moral irrealism plus consequentialism plus "rationality is about winning" together give a somewhat Machiavellian personality lots of nice excuses), as well as messiah complexes which are not only somewhat destructive to the mental health of those who have them but also feed into that ego pattern even more (we're saving the world! only smart rationalists can understand! no point in trying to talk about alignment with normies because they're useless and can't help! the entire burden of saving the world is on my shoulders!!).
In general... this may be a place to go for good reasoning, but not for sanity in a more absolute sense. The emotional and social intelligence here, and indeed to some extent the "moral intelligence" is... not always adequate.
I've also noticed those tendencies, not in the community but in myself.
Selfishness. Classification of people as "normies." Mental health instability. Machiavellianism.
But...
They get stronger as I look at the world like a rationalist. You read books like The Elephant in the Brain and find yourself staring at a truth you don't want to see. I wish God were real. I wish I were still a Christian with those guardrails erected to prevent me from seeing the true nature of the world.
But the more I look, the more it looks like a non-moral, brutally unfair, unforgiv...
good news on the moral front: prosocial moral intuitions are in fact a winning strategy long term. we're in a bit of a mess short term. but, solidarity and co-protection are good strategies; radical transparency can be an extremely effective move; mutual aid has always been a factor of evolution; the best real life game theory strategies tend to look like reputational generous tit for tat with semirandom forgiveness, eg in evolutionary game theory simulations; etc. Moral realism is probably true but extremely hard to compute. If we had a successful co-protective natural language program, it would likely be verifiably true and look like well known moral advice structured in a clear and readable presentation with its mathematical consequences visualized for all to understand.
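to make that concrete, here's a minimal sketch (my own toy illustration, not a canonical model) of generous tit-for-tat in an iterated prisoner's dilemma; the payoff matrix and the 10% forgiveness rate are just illustrative assumptions:

```python
import random

# Minimal iterated prisoner's dilemma: generous tit-for-tat vs. always-defect.
# Standard payoffs (T=5, R=3, P=1, S=0); the forgiveness probability is an
# illustrative choice, not a canonical one.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def generous_tit_for_tat(opponent_history, forgiveness=0.1):
    """Cooperate first; copy the opponent's last move, but sometimes forgive a defection."""
    if not opponent_history:
        return "C"
    if opponent_history[-1] == "D" and random.random() > forgiveness:
        return "D"
    return "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print("GTFT vs GTFT:", play(generous_tit_for_tat, generous_tit_for_tat))
    print("GTFT vs AllD:", play(generous_tit_for_tat, always_defect))
```

paired with itself, the generous strategy collects the mutual-cooperation payoff (~600 over 200 rounds), while unconditional defectors cap out far lower against anything that retaliates; in evolutionary versions where strategies reproduce in proportion to payoff, that gap is roughly why reputational, forgiving reciprocity tends to take over.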
I really like https://microsolidarity.cc as an everyday life intro to this, and I tossed this comment into metaphor.systems up to the opening bracket of this link. here are some very interesting results, various middle to high quality manifestos and quick overviews of ethics:
To be blunt... our founder's entire personality. (And current extreme burnout and evident depression.)
Also, I will not name names, but I know of at least one person who over DMs mentioned rendering their meat eating consistent with their other moral views by deciding that any entities without the cognition of a fully self-aware human have no moral rights, and was strongly considering whether it would be ethically acceptable to eat children and mentally disabled people. I found this disturbing enough to block them.
That's not quite an example of the specific things I mentioned, but it is an example of the rationality subculture tending to veer away from what I suppose has to be called "common sense" or "consensus reality". (Acausal reasoning and anthropics both also seem like examples of this. However "rational" they are, they are dangerous ideas that imo pose a cognitohazard.)
Actually, in the interests of full honesty, I have to give myself as an example. I didn't know about the rationalist community until I was like 20, but throughout my teens I basically was a rationalist without knowing it - and also mentally ill and coping with emotional disturbances using a lot of narcissistic ...
I call it the moderately-above-average syndrome.
Someone with Einstein smarts or Napoleon-level wiles, and with delusions of grandeur, seems to get along fine, at least judging by history.
But folks that are only 2 or 3 standard deviations above average, and who maintain similar pretences, inevitably come out a bit unbalanced.
There's also a similar concept in sociology with the anxious upper-middle classes.
Low scholarship (not mainly the academic kind) due to lack of slack from prioritizing the wrong winning metrics (money and status over time). In general, an optimization frame often falls into the trap of fine tuning existing considerations instead of seeking new considerations.
Let me attempt to paraphrase.
There are two problems: 1) not spending enough time on scholarship and 2) not having enough slack. These two problems are separate in the sense that 2 would be a problem even if 1 was solved and vice versa, but related in the sense that 2 is a big reason why 1 is a problem in the first place. And maybe 3) is another problem: that we spend too much time on existing considerations instead of seeking new considerations (exploiting instead of exploring).
Does that sound accurate?
If so, not that this adds much to the conversation, bu...
What sorts of queries on which knowledge retrievers would you suggest for learning more about this from the perspective you're seeing as lacking? if it's useful for answering this, my favorite search engines are arxivxplorer, semanticscholar's recommender, metaphor, and I also sometimes ask claude or chatgpt to describe a concept to help me identify search terms. using that set of tools, what would you suggest looking up to find links I can provide to others as an intro to scholarship? I have plenty of my own ideas for what to look up, to be clear.
What are you comparing to? Is it only compared to what you would want rationalist culture to be like, or do you have examples of other cultures (besides academia) that do better in this regard?
The only thing chess club members have to do to participate is to organize or play in chess matches. The only thing computer security club members have to do to participate is (usually) to help organize or play computer hacking challenges. The only thing you have to do to get involved in the Christian Community is to go to church and maybe attend a few church functions.
AFAICT, the only obvious way to participate in and contribute to rationalist culture is to write insightful posts on LessWrong, in the same way that the only way to get involved with the SCPWiki is to write SCPs. But the bar for doing that in a prosocial and truthful way is now pretty high, and was always going to produce a power law, with a few very intelligent founding members contributing most of the canon. It's not that they're doing anything wrong (I love their content), it's just naturally what happens.
Most of the problems I see on LessWrong lie downstream of this. Regular non-Google, non-finance software engineers face this dilemma of either staying silent and never getting to interact with the community, saying something that's been said before, indulging in one of their biases, or unfairly criticizing existing works and members. For very unconscientious people this means completely throwing away existing guardrails and deontology, because that's the only way they can think of to differentiate themselves from Eliezer and carve out a niche.
I was able to get involved in rationality by going to in-person meetups. I suggest, if you're feeling left-out, you do the same (or create in-person meetups yourself!).
Edit: There also exist various rationalist discords you could join. They're usually fun, and don't require you to make a post.
Regular non-Google, non-finance software engineers face this dilemma of either staying silent and never getting to interact with the community, saying something that's been said before, indulging in one of their biases, or unfairly criticizing existing works and members.
I'm glad you point this out. I think it is both real and important. However, I don't think it has to be that way! It's always been sort of a pet peeve of mine. "Normal" people can participate in so many ways. Here is what comes to my mind right now but definitely isn't exhaustive:
...I think it's wrong to think there is a "rationalist culture". There are rationalist influences and tropes that are part of a number of distinct groups' habits and norms, but that doesn't make those groups similar enough to be called a cohesive single culture.
Disagreed, but curious.
My sense is that the differences are relatively minor and that there are a lot of really strong things that tie all the groups together: various things discussed in The Sequences like Bayesian thinking and cognitive science. What are the large differences you see with various groups?
I'm not sure I know what rationalist culture refers to anymore. Several candidate referents have become blurred and new candidates have been introduced. Could be, lesswrong.com culture; humanity's rationalist cultures of various stripes; the rationalist cultures descended from lesswrong (but those are many at this point); the sequences view; the friend networks I have (which mostly don't have the problems I'd complain about, since I filter my friends for people I want to be friends with!); the agi safety research field (which seems to be mostly not people who think of themselves as "rationalists" anymore); berkeley rat crowd; "rationalist-adjacent" people on twitter; the thing postrats say is rationalist; a particular set of discords; some other particular set of discords; scott alexander fans; some vague combination of things I've mentioned; people who like secular solstice...
straw vulcan is more accurate than people give it credit for. a lot of people around these parts undervalue academia's output and independent scholarship and reinvent a lot of stuff. folks tend to have an overly reductive view of politics, either overly "only individuals exist and cannot be aggregated" or "only the greater good exists, individual needs not shared by others don't exist" - you know, uh, one of the main dimensions of variation that people in general are confused on. I dunno, it seems like the main thing wrong with rationalist culture is that it thinks of itself as rationalist, when in fact it's "just" another science-focused culture. shrug.
a lot of people around these parts undervalue academia's output and independent scholarship and reinvent a lot of stuff.
That's certainly my impression. I've been peeking in here off and on for several years, but became more active last June when I started (cross-)posting here and commenting a bit.
I have a PhD that's traditional in the sense that I learned to search, read, value, and cite the existing literature on a topic I'm working on. That seems to be missing here, leading, yes, to unnecessary reinvention. I recall reading a post several months ago that...
I mentioned it in the post https://www.lesswrong.com/posts/p2Qq4WWQnEokgjimy/respect-chesterton-schelling-fences : people are too eager to reject the Chesterton-Schelling fences because they feel enlightened and above the mundane guardrails that are for "normies".
I wrote about this here:
[T]his error strikes me as … emblematic of a certain common failure mode within the rationalist community (of which I count myself a part). This common failure mode is to over-value our own intelligence and under-value institutional knowledge (whether from the scientific community or the Amazon marketplace), and thus not feel the need to tread carefully when the two come into conflict.
In that comment and the resulting thread, we discuss the implications of that with respect to the rationalist community’s understanding of Alzheimer’s disease, a disease I’ve studied in great depth. I’ve mostly found the community to have very strong opinions on that subject and disdain for the scientific community studying it, but very superficial engagement with the relevant scientific literature. Every single time I’ve debated the matter in detail with someone (maybe 5–10 times total), I’ve persuaded them that 1) the scientific community has a much better understanding of the disease than they realized and 2) that the amyloid hypothesis is compelling as a causal explanation. However, people in the rationalist community often have strongly-held, wrong opinions before (or in lieu of) these debates with me.
Ironically, the same thing happened in that thread: my interlocutor, John Wentworth, appreciated my corrections. However, I ultimately found the discussion a bit unsatisfying, because I don’t know that he made any meta-updates from it about the level of confidence he had started with despite not having seriously engaged with the literature.
Basically, this is essentially reframing the issue as overuse of the inside view and underuse of the outside view, and I think that framing struck closer to my objection than my own answer did.
And yeah, John Wentworth ignored the literature and was wrong, and since he admitted it was cherry-picked, this is non-trivial evidence against the thesis that Goodhart is a serious problem for AI or humans.
Though it also calls into question how well John Wentworth's epistemic processes are working.
Limited number of groups and community events outside of the US/London (I'm from CEE; there are some groups, but not that many). It limits the possibility of in-person interaction. So, in the long term, LW can only be my "online" community, not a "real life group of friends". Currently I regard EA events as the best way to meet rationalists, and, to be frank, it would be cool to also have another option and separate those two.
Wanted to write the same thing. In my country, if you organize a meetup for all rationalists and all ACX readers and all effective altruists together... the total number of participants may come close to ten, if you are lucky!
I'm going to focus on the overuse of the inside view, and the relative disuse of base rates and outside view. And it's why I think Eliezer's views on AI doom are probably not rational, and instead the product of a depression spiral, to quote John Maxwell.
On base rates of predictions of extinction, the obvious answer is that no extinction events happened out of 172 predicted ones, and while that's not enough of a sample to draw strong conclusions, it does imply that very high confidence in doom by a specific date is not very rational, unless you believe that you have something special that changes this factor.
Link is below:
https://en.m.wikipedia.org/wiki/List_of_dates_predicted_for_apocalyptic_events
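As a rough sketch of the arithmetic being gestured at here (and assuming, as the replies below dispute, that every entry on that list counts as an independent trial), a Laplace rule-of-succession estimate looks like this:

```python
# Rough sketch of the base-rate arithmetic above, assuming (as replies
# below dispute) that all 172 failed predictions count as independent trials.
failed_predictions = 172
successes = 0

# Laplace's rule of succession: (successes + 1) / (trials + 2)
laplace_estimate = (successes + 1) / (failed_predictions + 2)
print(f"Chance the next dated doom prediction comes true: {laplace_estimate:.3%}")
# prints roughly 0.575%
```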
The issue is that LWers generally assume that certain things are entirely new every time and that everything is special, and I think this assumption is overused in both LW and the broader world, which probably leads to the problem of overvaluing your own special inside view compared to others outside views.
This is not sound reasoning because of selection bias. If any of those predictions had been correct, you would not be here to see it. Thus, you cannot use their failure as evidence.
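A toy way to see the point (my own illustration, with made-up per-prediction risk numbers): simulate many worlds, and note that only the undoomed ones contain anyone to tally the record, so every observer sees the same perfect history regardless of the true risk.

```python
import random

# Toy illustration of the observer-selection point: only worlds where no
# doom prediction came true contain observers, so every observer tallies
# the same perfect 0-for-172 record, whatever the true risk is.
# The per-prediction risk values below are made up for illustration.
def surviving_worlds(true_prob, n_worlds=20_000, predictions=172):
    survivors = 0
    for _ in range(n_worlds):
        doomed = any(random.random() < true_prob for _ in range(predictions))
        if not doomed:
            survivors += 1  # only these worlds have anyone left to check the record
    return survivors

for p in (0.0001, 0.001, 0.01):
    n = surviving_worlds(p)
    print(f"true per-prediction risk {p}: {n}/20000 worlds survive; each sees 0 hits out of 172")
```

The share of surviving worlds varies with the true risk, but the record each survivor sees doesn't, which is the sense in which this comment argues the 0-for-172 tally can't straightforwardly be read as evidence about that risk.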
Off topic, but I'd just like to say this "good/bad comment" vs "I agree/disagree" voting distinction is amazing.
It allows us to separate our feeling on the content of the comment from our feeling on the appropriateness of the comment in the discussion. We can vote to disagree with a post without insulting the user for posting it. On reddit, this is sorely lacking, and it's one (of many) reasons every sub is an unproductive circle jerk.
I upvoted both of your comments, while also voting to disagree. Thanks for posting them. What a great innovation to stimulate discussion.
I think it's appropriate to draw some better lines through concept space for apocalyptic predictions, when determining a base rate, than just "here's an apocalyptic prediction and a date." They aren't all created equal.
Herbert W Armstrong is on this list 4 times... each time with a new incorrect prediction. So you're counting this guy who took 4 guesses, all wrong, as 4 independent samples on which we should form a base rate.
And by using this guy in the base rate, you're implying Eliezer's prediction is in the same general class as Armstrong's, which is a ...
In my paradigm, human minds are made of something I call "microcognitive elements", which are the "worker ants" or "worker bees" of the mind.
They are "primed"/tasked with certain high-level ideas and concepts, and try to "massage"/lubricate the mental gears into both using these concepts effectively (action/cognition) and to interpret things in terms of these concepts (perception)
The "differential" that is applied by microcognitive elements to make your models work, is not necessarily related to those models and may in fact be opposed to them (compensating for, or ignoring, the ways these models don't fit with the world)
Rationality is not necessarily about truth. Rationality is a "cognitive program" for the microcognitive elements. Some parts of the program may be "functionally"/"strategically"/"deliberately" framing things in deceptive ways, in order to have the program work better (for the kind of people it works for).
The specific disagreements I have with the "rationalist" culture:
All of these things have computational reasons, and are a part of the cognitive trade-offs the LW memeplex/hive-mind makes due to its "cognitive specialization". Nevertheless, I believe they are "wrong", in the sense that they lead to you having an incorrect map/model of reality, while strategically deceiving yourself into believing that you do have a correct model of reality. I also believe they are part of the reason we are currently losing - you are being rational, but you are not being rational enough.
Our current trajectory does not result in a winning outcome.
Since reading the sequences, I've made much more accurate predictions about the world.
Both the guiding principle of making beliefs pay rent in anticipated experience, as well as the tools by which to acquire those accurate beliefs, have worked for me.
So at an object level, I disagree with your claim. Also, if you're going to introduce topics like "meta-rationality" and "nebulosity" as part of your disagreement, you kind of have to defend them. You can't just link a word salad and expect people to engage. The first thing I'm looking for is a quick, one or two paragraph summary of the idea so I can decide whether it's worth it to pursue further.
(fyi, downvoted because while I think there's a good version of this question, the current one feels too vague to be about anything particularly good, and most versions of this discussion seem more likely to be vaguely self-flagellating or reverse-circle-jerky rather than useful)
I'm not sure that vagueness is a problem here. It could be useful to hear from people with various takes on what exactly the question is asking.
I do worry a little about the framing leading to contentiousness though and think the question would be improved by somehow trying to mitigate that.
Meta-note related to the question: asking this question here, now, means your answer will be filtered for people who stuck around with capital-R Rationality and the current LessWrong denizens, not the historical ones who have left the community. But I think that most of the interesting answers you'd get are from people who aren't here at all or rarely engage with the site due to the cultural changes over the last decade.
Yeah, I've been reading a lot of critiques by Benjamin Hoffman and thinking about some of the prior critiques by Jessica Taylor, and that's sort of what prompted me to ask this question. It would probably also be interesting to look at others who left; they're just harder to get hold of.