I am in the same boat as VirtualAdept. I've been following Luke's blog forever, which naturally led me to LW, and while I'm still very much in my rational infancy, I am hugely grateful for the gradual steps toward maturity that this site affords me, so I figured I was due to register. The problem is that it also makes me more and more frustrated at the lack of rationality surrounding me, especially as an ex-Christian with a very devout evangelical family and friend network. I mean, I'm no Eliezer, but some of these people....
To have content that is accessible, interesting, and easily shareable would be fantastic, because the people who need these lessons the most are the type who will give up at the slightest difficulty.
So my first step is to become more active on this site, then start 'evangelizing' the more accessible parts on mediums like Twitter and Facebook. After that... the world!
But we'll see....
Mega kudos to Lukeprog and www.lesswrong.com!!
My deconversion from Christianity had a large positive impact on my life. I suspect it had a small positive impact on the world, too. (For example, I no longer condemn gays or waste time and money on a relationship with an imaginary friend.) And my deconversion did not happen because I came to understand the Bayesian concept of evidence or Kolmogorov complexity or Solomonoff induction. I deconverted because I encountered some very basic arguments for non-belief, for example those in Dan Barker's Losing Faith in Faith.
Less Wrong has at least two goals. One goal is to raise the sanity waterline. If most people understood just the basics (Occam's razor, what constitutes evidence and why, general trends of science, reductionism, and cognitive biases), the world would be greatly improved. Yudkowsky's upcoming books are aimed at this first goal of raising the sanity waterline. So are most of the sequences. So are learning-friendly posts like References & Resources for LessWrong.
A second goal is to attract some of the best human brains on the planet and make progress on issues related to the Friendly AI problem, the problem with the greatest leverage in the universe. I have suggested that Less Wrong would make faster progress toward this goal if it worked more directly with the community of scholars already tackling the exact same problems. I don't personally work toward this goal because I'm not mathematically sophisticated enough to do so, but I'm glad others are!
Still, I think the first goal could be more explicitly pursued. There are many people like myself and jwhendy who can be massively impacted for the better not by coming to a realization about algorithmic learning theory, but by coming to understand the basics of rationality: probability, the proper role of belief, and reductionism.
Reasons for Less Wrong to devote more energy to the basics
How to do it
Let me put some meat on this. What does more focus on the basics look like? Here are some ideas: