- Placing too much value on, and getting too much positive feedback for, legibility: replacing smart but illegible computations with dumb but legible ones
- Failing to develop actual rationality, focusing instead on cultivating the rationalist memeplex or rationalist culture
- Not understanding the problems with the theoretical foundations the Sequences are based on (a confused formal understanding of humans -> confused advice)
Curious to see you elaborate on the last point, or just pointers to further reading. I think I agree in a betting sense (i.e. is Jan's claim true or false?) but don't really have a gears-level understanding.
I think the more general form of the emotions thing is reductionism plus "I can't understand it consciously, therefore it's not rational."
The counter is deep respect for Chesterton's Fence.
This is also how many people get into woo.
Although dual process theory has its issues, folks have talked about the failure mode of prioritizing System 2 over System 1. The type of person who's likely to become a rationalist is already predisposed to do this, and rationality writing gives them lots of advice to prioritize S2 over S1 even further. And while S2 is extremely valuable, especially for the art of rationality, it can't function well unless it's integrated with S1 and used to run feedback loops that train S1 toward rationality; without S1 on board, S2 will always be disembodied.
The archetypal example is in the category of what folks might call the Reddit Nerd: someone who lives on the computer, seems really smart, but has little to no success in life. They don't actually get the things they want because they live in their head and don't know how to take effective action, so they retreat to online forums and games (board games, MMOs, etc.) where they can achieve some measure of success without having to deal with S1.
Religiosity: only talking to other rationalists, only reading rationalist-approved material, treating senior rationalists as authority figures, rejecting critiques of rationalist thought out of hand.
Thinking too much about what your priors should be at the expense of actually learning how the world is. Reasoning your way to better priors is tempting, but given enough evidence, most starting priors quickly get updated to posteriors that are no different from each other.
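A minimal sketch of that last point, assuming a toy Beta-Bernoulli coin-flip setup (the agents, priors, and numbers here are illustrative, not from the comment above):

```python
import random

# Two agents with very different Beta priors over a coin's bias p(heads).
# Agent A: Beta(1, 1), uniform; Agent B: Beta(10, 2), strongly expects heads.
priors = {"A": (1.0, 1.0), "B": (10.0, 2.0)}

# Simulated evidence: 1000 flips of a coin whose true bias is 0.3.
random.seed(0)
flips = [random.random() < 0.3 for _ in range(1000)]
heads = sum(flips)
tails = len(flips) - heads

for name, (a, b) in priors.items():
    # Conjugate Beta-Bernoulli update: add observed counts to prior pseudo-counts.
    post_a, post_b = a + heads, b + tails
    print(f"Agent {name}: posterior mean = {post_a / (post_a + post_b):.3f}")

# Both posterior means land near the true 0.3, within about 0.01 of each
# other: with enough evidence, the starting prior mostly washes out.
```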
Cultivating epistemic rationality at the expense of instrumental rationality. They're both very important, but I think LessWrong has focused too much on the former. The explore-exploit tradeoff also applies to humans, not just to machine learning. Rationalists should be more agentic, applying what they've learned to the real world more than most seem to. Instead, cultivating too much doubt has broken our resolve to act.
I'm not sure your last sentence is true, mainly because of selection bias: a fair proportion of the more instrumental folks are too busy actually doing work IRL to post frequently here anymore (e.g. Luke Muehlhauser, who I still sometimes think of as the author of posts like How to Beat Procrastination instead of his current role).
The two failure modes I observe most often are not exclusive to rationality, but might still be helpful to consider.
How do people fail to improve their rationality? How do they accidentally harm themselves in the process? I'm thinking of writing a post "How not to improve your rationality" or "A nuanced guide to reading the sequences" that preempts common mistakes, and I'd appreciate hearing people's experiences. Some examples: