I recently heard about SIAI's Rationality Minicamp and thought it sounded cool, but for logistical/expense reasons I won't be going to one.
There are probably lots of people who are interested in improving their instrumental rationality and who know about and like LessWrong, but who haven't read the vast majority of its content, because there is just so much material and the practical payoff of any given piece is uncertain.
It would be cool if it were much easier for people to find the highest-ROI material on LessWrong.
My rough idea for how this new instrumental rationality tool might work:
- It starts off as a simple wiki focused on instrumental rationality. People only add things to the wiki (often just links to existing LessWrong articles) if they have tried them and found them very useful for achieving their goals.
- People are encouraged to add "exercises" that help you develop the skill represented by the article, of the type that are presumably done at the Rationality Minicamps.
- Only people who have tried the specific thing in question should add comments about their experiences with it.
- Long-term goal: every LessWrong user can define their own private stack rank of the most important concepts/techniques/habits for instrumental rationality. LessWrong software merges these individual stack ranks into a single global stack rank of the highest-ROI ideas/behaviors/techniques, as judged by the community at any given time. People looking to improve their instrumental rationality can then just visit this global stack rank, pick the highest item they haven't tried yet to experiment with, and work backwards from there through any prerequisites. (A sketch of one possible merging scheme follows this list.)
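To make the merging idea concrete, here is a minimal sketch in Python. It uses Borda count, one standard rank-aggregation method; the function name, the technique names, and the example data are all made up for illustration, and none of this reflects actual LessWrong software.

```python
from collections import defaultdict

def merge_stack_ranks(user_ranks):
    """Merge per-user stack ranks into one global ranking via Borda count.

    Each user's list is ordered best-first; an item in position i of a
    list of length n earns (n - i) points. Items a user never ranked
    simply earn no points from that user.
    """
    scores = defaultdict(int)
    for ranking in user_ranks:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position
    # Highest total score first; ties broken alphabetically for stability.
    return sorted(scores, key=lambda item: (-scores[item], item))

# Hypothetical example: three users' private stack ranks.
user_ranks = [
    ["trigger-action plans", "goal factoring", "pomodoros"],
    ["goal factoring", "trigger-action plans", "calibration training"],
    ["pomodoros", "goal factoring", "calibration training"],
]
print(merge_stack_ranks(user_ranks))
# -> ['goal factoring', 'trigger-action plans', 'pomodoros', 'calibration training']
```

One nice property of Borda-style scoring is that a user who only ranks a few items still contributes signal without being forced to rank everything on the site.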
Do you think others would find this useful? Does anyone have suggestions for improvements?
The usefulness of some techniques may depend on the domain where you want to use them. For example, a technique like "if you don't know something, google it and follow the three highest links" depends on whether your problem is described on the internet, and on how trustworthy the answers are. For "how do I join two strings in Python?" this technique works great. For financial questions, not so great, because website owners have a huge incentive to promote wrong answers.
Also, the same technique may have different results for different kinds of people, because of their environment, prior knowledge, personality, gender, social class, financial situation, or whatever. If you omit those details, you only get the average result across the general population, which is not bad, but it does not lead to the optimal choice for a specific person.
Measuring the impact of a technique is also difficult. How sure are you that it was this technique that helped, and not something else? Maybe it was a placebo effect or just a coincidence. If we had hundreds of data points, the coincidences would average out, but we probably won't have that much data.
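To illustrate with made-up numbers: here is a minimal simulation of a technique with a small true benefit buried in everyday noise. TRUE_EFFECT and NOISE_SD are arbitrary assumptions, not measurements of anything.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.2   # hypothetical small real benefit of the technique
NOISE_SD = 1.0      # everything else going on in a person's life

def observed_benefit(n_reports):
    """Average self-reported benefit over n noisy data points."""
    return sum(random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n_reports)) / n_reports

for n in (5, 30, 300):
    estimates = [observed_benefit(n) for _ in range(1000)]
    # Fraction of trials where the technique *looks* harmful despite a real benefit.
    wrong_sign = sum(e < 0 for e in estimates) / len(estimates)
    print(f"n={n:4d}: wrong-sign rate ~ {wrong_sign:.0%}")
```

Under these assumptions, a handful of reports points in the wrong direction a sizable fraction of the time, and only somewhere around hundreds of data points does the noise reliably wash out, which is exactly the worry above.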