
Evan_Gaensbauer comments on On the importance of Less Wrong, or another single conversational locus - Less Wrong

Post author: AnnaSalamon, 27 November 2016 05:13PM (84 points)




Comment author: Evan_Gaensbauer, 28 November 2016 05:23:53AM (6 points)

I think this inspired Patri's Self-Improvement or Shiny Distraction post. Like video games, Less Wrong can be distracting... so if video games are a distracting waste of time, Less Wrong must be one too, right?

I've been thinking about Patri's post for a long time, because I've found the question puzzling. The friends of mine who feel similarly to Patri are the ones who look to rationality as a tool for effective egoism/self-care, entrepreneurship insights, and lifehacks. They're focused on individual rationality, and on better heuristics for quickly improving their own lives. Doing things by yourself allows for quicker decision-making and tighter feedback loops: it's easier to tell sooner whether what you're doing works.

That's often referred to as instrumental rationality, while the Sequences tended to focus more on epistemic rationality. But I think a lot of what Eliezer wrote about how to create a rational community, one which can go on to form project teams and build intellectual movements, was instrumental rationality too. It's just taken longer to tell whether that's succeeded.

Patri's post was written in 2010. A lot has changed since then. The Future of Life Institute (FLI) is, along with Superintelligence, responsible for bringing AI safety into the mainstream. FLI was founded by community members who first met through LessWrong, so that's value added to advancing AI safety that wouldn't have existed if LW had never started. CFAR didn't exist in 2010. Effective altruism (EA) has blown up, and I think LW doesn't get enough credit for generating the meme pool which spawned it. Whatever one thinks of EA, it has achieved measurable progress on its own goals, such as how much money is moved not only through GiveWell, but by a foundation with an endowment of over $9 billion.

What I've read is the LW community aspiring to do science better than it's currently done, to apply rationality to new domains, and to make headway on its goals in new ways. Impressive progress has been made on many of those community goals.