Motivation for this page, and intended use:
This is a bad first draft, and may be drastically revised. But, as a starting vision, I'm hoping these pages will do three things:
- Provide a good landing page for newcomers: let them see what we’re up to, why a person might care about it, and which past posts can help with which of their own questions.
- Provide a good review, or summary, of what has been discussed or figured out where, for current community members who’d like a better bird's-eye picture of our project.
- Provide a list of "open problems" and "articles someone should write", together with a picture of how progress on those problems would contribute to LW's project, and an index of progress to date. That way, would-be authors can see useful avenues to contribute, and some portion of LW posts can visibly contribute to a useful, cumulative project, instead of being a succession of randomly entertaining newspaper articles that no one much cares about afterwards. (The idea is not that all LW posts should do this, but that it would be nice if some portion did, and if LW readers ended up building some cumulative competencies over time.)
Actual list of questions:
- How accurate are people’s current beliefs (e.g. about themselves, about their immediate social environment or careers, and about the larger world)?
- a. What specific biases or error patterns interfere?
- b. To what extent do people, and people in various sub-populations, aim for accuracy in their beliefs?
- c. How accurate are most people’s beliefs, overall? How accurate are the beliefs of relevant subpopulations, e.g., scientists, or avid LW-users, or people who believe they "actually try"?
- How can we measure our own rationality (our own tendency to form accurate beliefs across varied domains, controlling for domain knowledge and intelligence), and the rationality of other individuals and groups? (One concrete starting point, scoring rules for probabilistic predictions, is sketched after this list.)
- What practical techniques can improve the accuracy of individuals’ beliefs?
- What practical techniques can improve the accuracy of groups’ beliefs?
- Disagreements on Less Wrong.
- Foundational questions: How is it that people can form accurate beliefs at all? How would an ideal accurate belief-former form its beliefs? Would such a belief-former use probabilities? What are probabilities? Where do priors come from?
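Two of these questions can at least be made concrete with standard tools: Bayes' rule gives one well-known model of how an "ideal accurate belief-former" would update on evidence, and proper scoring rules such as the Brier score give one way to measure how accurate a set of probabilistic beliefs turned out to be. The sketch below is purely illustrative: the function names and all the numbers are invented for this page, and the Brier score is just one of several proper scoring rules we might use.

```python
# Illustrative sketch: update a belief with Bayes' rule, then score a
# track record of probabilistic predictions with the Brier score.
# All functions and numbers here are invented for this page.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) from a prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

def brier_score(predictions: list[float], outcomes: list[int]) -> float:
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Prior of 0.3 that a claim is true; the observed evidence is 4x as
# likely if the claim is true as if it is false.
print(f"posterior: {bayes_update(0.3, 0.8, 0.2):.3f}")  # -> 0.632

# A small, made-up track record: stated probabilities vs. what happened.
print(f"Brier score: {brier_score([0.9, 0.7, 0.2, 0.6], [1, 1, 0, 0]):.3f}")  # -> 0.125
```

Note the connection to the measurement question above: because the Brier score rewards well-calibrated probabilities rather than raw knowledge, tracking it over many predictions is one crude, partial way to measure a person's or group's epistemic accuracy.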
Questions about achieving one's goals:
- To what extent does forming more accurate beliefs tend to help people achieve happiness, positive social relationships, income, longevity, actually useful philanthropy, or other goals?
- What techniques, other than improving the accuracy of our beliefs, can help us achieve important goals?
- Foundational questions: What do humans really care about, and what formalisms can help us descriptively or normatively model human concerns? Does it make sense to discuss value as distinct from human preferences? How would an ideal goal-maximizer think? (A toy sketch of one standard formalism follows this list.) What other foundations do we need, to think non-confusedly about the grand unified problem of What To Do?
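One standard formalism behind "how would an ideal goal-maximizer think" is expected-utility maximization: given a utility function over outcomes and probabilistic beliefs about which outcomes each action produces, choose the action with the highest expected utility. The toy sketch below uses invented actions, probabilities, and utilities; it illustrates the formalism, not a claim about what humans actually value.

```python
# Toy expected-utility maximizer. The actions, outcome probabilities,
# and utilities are all invented for illustration.

# For each action, a list of (probability, utility) pairs over outcomes.
ACTIONS = {
    "take the safe job": [(1.0, 50.0)],
    "start the company": [(0.1, 500.0), (0.9, -20.0)],
    "do nothing":        [(1.0, 0.0)],
}

def expected_utility(outcomes):
    """Probability-weighted average utility of an action's outcomes."""
    return sum(p * u for p, u in outcomes)

for action, outcomes in ACTIONS.items():
    print(f"{action}: EU = {expected_utility(outcomes):.1f}")

best = max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a]))
print(f"best action: {best}")  # -> "take the safe job" (EU 50 vs. 32 vs. 0)
```

Whether this formalism captures, or should capture, what humans really care about is exactly the open foundational question above; the sketch only shows how the machinery runs once a utility function and probabilities are assumed.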
See also: LessWrong Wiki:Community Portal, Wiki Projects, Wiki Priorities