I like how pragmatic you're being. I am new here, but one of the things that attracted me to this site was the fact that much of the material is simply above my head. That's hard to find in informal public online communities outside of academia, and I feel that the very challenge of trying to wrap my head around difficult material is an absolute necessity for keeping my math and statistics skills sharp. However, different people have different bars that they want to reach, and I do agree that more accessible material is a great idea. As for me, I have a voice for radio and a knack for stating difficult theories in an accessible way, so I think a good microphone will be my next purchase for my computer. Making a YouTube video or two on rationality would be a great way for me to contribute to this goal.
I know you can't see me because this is only a text comment, but I am right now really giving you a double thumbs up for this idea.
My deconversion from Christianity had a large positive impact on my life. I suspect it had a small positive impact on the world, too. (For example, I no longer condemn gays or waste time and money on a relationship with an imaginary friend.) And my deconversion did not happen because I came to understand the Bayesian concept of evidence or Kolmogorov complexity or Solomonoff induction. I deconverted because I encountered some very basic arguments for non-belief, for example those in Dan Barker's Losing Faith in Faith.
Less Wrong has at least two goals. One goal is to raise the sanity waterline. If most people understood just the basics: Occam's razor, what constitutes evidence and why, general trends of science, reductionism, and cognitive biases, the world would be greatly improved. Yudkowsky's upcoming books are aimed at this first goal of raising the sanity waterline. So are most of the sequences. So are learning-friendly posts like References & Resources for LessWrong.
A second goal is to attract some of the best human brains on the planet and make progress on issues related to the Friendly AI problem, the problem with the greatest leverage in the universe. I have suggested that Less Wrong would make faster progress toward this goal if it worked more directly with the community of scholars already tackling the exact same problems. I don't personally work toward this goal because I'm not mathematically sophisticated enough to do so, but I'm glad others are!
Still, I think the first goal could be pursued more explicitly. There are many people like myself and jwhendy who can be massively impacted for the better not by coming to a realization about algorithmic learning theory, but by coming to understand the basics of rationality: probability, the proper role of belief, and reductionism.
Reasons for Less Wrong to devote more energy to the basics
How to do it
Let me put some meat on this. What does more focus on the basics look like? Here are some ideas: