My deconversion from Christianity had a large positive impact on my life. I suspect it had a small positive impact on the world, too. (For example, I no longer condemn gays or waste time and money on a relationship with an imaginary friend.) And my deconversion did not happen because I came to understand the Bayesian concept of evidence or Kolmogorov complexity or Solomonoff induction. I deconverted because I encountered some very basic arguments for non-belief, for example those in Dan Barker's Losing Faith in Faith.
Less Wrong has at least two goals. One goal is to raise the sanity waterline. If most people understood just the basics (Occam's razor, what constitutes evidence and why, general trends of science, reductionism, and cognitive biases), the world would be greatly improved. Yudkowsky's upcoming books are aimed at this first goal of raising the sanity waterline. So are most of the sequences. So are learning-friendly posts like References & Resources for LessWrong.
A second goal is to attract some of the best human brains on the planet and make progress on issues related to the Friendly AI problem, the problem with the greatest leverage in the universe. I have suggested that Less Wrong would make faster progress toward this goal if it worked more directly with the community of scholars already tackling the exact same problems. I don't personally work toward this goal because I'm not mathematically sophisticated enough to do so, but I'm glad others are!
Still, I think the first goal could be more explicitly pursued. There are many people like myself and jwhendy who can be massively impacted for the better not by coming to a realization about algorithmic learning theory, but by coming to understand the basics of rationality, such as probability, the proper role of belief, and reductionism.
Reasons for Less Wrong to devote more energy to the basics
- Such efforts to spread the basics will have a short-term impact on more people than will efforts toward Friendly AI, and these impacted people will in turn impact others, hopefully for the better.
- Some LWers may feel they have little to contribute because they aren't masters of Solomonoff induction or algorithmic learning theory. But they will be able to contribute to raising the sanity waterline by spreading the basics of rationality.
- Providing more basic resources will attract a wider base of readers to Less Wrong, leading to (1) more new rationalists and (2) more donations to SIAI, for solving the Friendly AI problem.
- Even for experienced rationalists, it can be easy to forget the basics at times. Humans are not naturally rational, and revert to pre-rationality rather quickly without ongoing training and practice.
How to do it
Let me put some meat on this. What does more focus on the basics look like? Here are some ideas:
- The sequences are great, but some people are too busy or lazy to read even those. Some of the sequences could be summarized into single posts crafted so as to have no prerequisites. These posts could be linked widely, and entered in relevant blog carnivals.
- There is another huge community who will watch a 10-minute video, but will not read a short post. So the YouTube lectures are a great idea. But they could be improved. As of today, three of the videos show a presenter against a whiteboard. To make this work well requires lots of resources: (1) a good camera, (2) a shotgun or lavalier microphone, (3) a teleprompter, and (4) an experienced and enthusiastic presenter. That's hard to do! But videos in the familiar PowerPoint style or the Khan Academy style are easier to do well. All it requires is some free presentation software, free screen capture software, and a $70 high-quality USB mic like the Blue Snowball. This approach would also allow more people to participate in making videos on the basics of rationality.
- Sometimes, a basic concept of rationality will only "click" with somebody if presented in a certain way. Some will need a story that illustrates the concept. There are some of these on Less Wrong already, or in something like Harry Potter and the Methods of Rationality, but there could be more. For others, perhaps a Cartoon Guide to Bayes' Theorem or a Cartoon Guide to Reductionism would make it "click." Those who are more ambitious might attempt to create an animation explaining some core rationalist concept, à la this visualization of special relativity.
- Write "Introduction to X" or "How to Use X" posts.
- Keep developing the wiki, obviously.
- Develop a rationality workbook.
I understand this, but want to add some comments/questions. I'm newer and am not exactly sure what differentiates top-level posts from discussion-area-appropriate posts; the About page says only a little about the discussion area.
But the common understanding I find is that discussion = "meta" (perhaps as well as weaker/less-developed posts). Should the About section be clarified to reflect this? It seems that there are unofficially defined prescriptions floating around.
Would you clarify meta vs. non-meta? Is "meta" just concerned with suggestions about the LW site and its participants? If a post on raising the sanity waterline isn't meta, would this post, which suggests ways to do this, be considered meta? In other words, if Luke has presented some arguments for the "best rationally decided methods to help others become more rational at a basic level"... is that meta?
Lastly, since this topic implies action on the part of those capable of writing content here and elsewhere to help noobs, I would consider the more experienced users to be the target audience. In other words, the post may be viewed as looking for teacher-level individuals to propagate LW content into several other formats in order to make rationality more accessible.
Given this, will a post like this receive adequate feedback/response from the "teacher-level" members if it is posted in the discussion area? If the simple answer is that most of those able to contribute to such an effort read the discussion area regularly, then that is all the answer needed.
If that's not the case, however, could the discussion area be a black hole of sorts for a post like this?
To propose a possible solution for some of these points: define clear guidelines for the top-level and discussion areas such that this post would have fallen under the discussion-area definition. Then perhaps it could be moved to the top level if enough comments voiced support for doing so?
My understanding is that Discussion is simply an area that can house a larger set of materials than can the main area of Less Wrong. It is in no way limited to meta-level discussions, but meta-level discussions are welcome there.
There's been a general request to keep meta-level discussions in the main area to a minimum, though not to zero. This request seems sensible to me. It would be nice to keep the main site full of posts that can actually help readers improve their rationality, with a high signal-to-noise ratio. And the Discussion area allows us to have meta-level discussions anyway.