I lurk LessWrong and am grappling with a perceived misalignment between its stated goals—improving reasoning and decision-making—and the type of content often shared. I am not referring to content that I disagree with, or content that I think is poorly written, nor am I asking people to show me their hero license. I'm referring to a style of writing that is common in the rationalist blogosphere: it often has a surprising conclusion and draws from multiple domains to answer questions. Popular examples of people who write posts in this way include Scott Alexander, Robin Hanson, johnswentworth, gwern, etc.[1] While this style of writing is fascinating and often enlightening, I wonder how much it genuinely improves reasoning or helps one be less wrong about the world. The primary goal of these kinds of posts does not seem to be to help you achieve these aims, or at the very least, they seem less efficient than other methods. Is there an implicit divide between "fun" posts on LessWrong and more productive ones?
I suspect there's a broader discourse that I may have missed despite my efforts to answer my own question before asking. If this post is repetitive or misaligned with community norms, I apologize. Thank you for the sanity check to those who respond.
[1] This small sample of authors obviously has very different styles and interests, not to mention that many of their posts could be placed in a completely different category than "rationalist blogosphere." My grouping of this kind of writing and philosophy is based on vibes; take that how you will.
Thank you for your response. On reflection, I realize my original question was unclear. At its core is an intuition about the limits of critical thinking for the average person. If this intuition is valid, I believe some members of the community should, rationally, behave differently. While this kind of perspective doesn't seem uncommon, I feel its implications may not be fully considered. I also didn't realize how much this intuition influenced my thinking when writing the question. My thoughts on this are still unclear, and I remain uncertain about some of the underlying assumptions, so I won't argue for it here.
Apologies for the confusion. I no longer endorse my question.