One issue that has been discussed here before is whether Less Wrong is causing readers and participants to behave more rationally or is primarily a time-sink. I recently encountered an example that seemed worth pointing out to the community, one that suggests mixed results. The entry for Less Wrong on RationalWiki says "In the outside world, the ugly manifests itself as LessWrong acolytes, minds freshly blown, metastasising to other sites, bringing the Good News for Modern Rationalists, without clearing their local jargon cache." RationalWiki has a variety of issues that I'm not going to discuss in detail here (such as a healthy dose of motivated cognition pervading the entire project, along with serious mind-killing problems), but this sentence should be a cause for concern. What they are essentially talking about is LWians not realizing (or not internalizing) that there's a serious problem of inferential distance between people who are familiar with many of the ideas here and people who are not. Since inferential distance has been discussed here a lot, this suggests that some people who have read a lot here are not applying the lessons even when they are consciously talking about material related to those lessons. Of course, there's no easy way to tell how representative a sample this is or how common the behavior is, and given RW's inclination to list every possible thing they don't like about something, no matter how small, this may not be a serious issue at all. But it seemed serious enough to point out here.
Those links are awesome. The outside perspective is interesting and the pages within LW that they link to as "what the place is like" are also interesting.
A while back there was a discussion thread about the size of the "target audience" for the Less Wrong sequences. Specifically, how many people on the planet should one realistically expect to be familiar with the mindset here after fully productive PR saturation had occurred, given that lots of people are missing at least one of the time, inclination, ability, or interest needed to give us more than twenty seconds' worth of eyeball time, if that?
I commented there (perhaps using too many references to an analogous model in physics) that the people who are likely candidates for LW's content are not likely to exist as independent atoms but in social networks where they're already connected to each other to some degree.
If you drew the social network with different colored nodes for people who were "not our audience", "ready", and "aware" then strict criteria imply that the ready and aware people probably form "islands of rationality in a sea of disinterest".
We should expect that some islands of people who are potentially interested in LW have already formed themselves into social clusters and built their own jargon and set of habits and such. In this case they may have less need for LW in the first place...
And yet...
I'm pretty sure that Aumann's agreement theorem (to the effect that individual rationalists should generally not agree to disagree, because they can take each other's beliefs as evidence) could be applied to groups as well, just by redrawing the boundaries of the "agents" appropriately. So I'm curious what it would look like for two rationalist groups to try to update on the evidence each has independently accumulated, until they are in agreement.
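To make that concrete, here is a minimal toy sketch (not the full common-knowledge back-and-forth from Aumann's paper, just the idealized endpoint) of what the pooled belief would be if two groups shared a prior, each updated on its own evidence, and that evidence were conditionally independent given the hypothesis. The group names and all the probabilities are made-up placeholders:

    from math import log, exp

    def log_odds(p):
        """Convert a probability to log-odds."""
        return log(p / (1 - p))

    def prob(lo):
        """Convert log-odds back to a probability."""
        return 1 / (1 + exp(-lo))

    # Shared prior for some hypothesis H (e.g. "claim X is true").
    prior = 0.5

    # Each group's posterior after updating on its own, independently
    # gathered evidence (toy numbers, purely illustrative).
    posterior_a = 0.80   # group A
    posterior_b = 0.30   # group B

    # Each group's evidence, expressed as the log-likelihood-ratio it
    # contributed on top of the shared prior.
    evidence_a = log_odds(posterior_a) - log_odds(prior)
    evidence_b = log_odds(posterior_b) - log_odds(prior)

    # If the two bodies of evidence are conditionally independent given H,
    # pooling is just adding both pieces of evidence to the prior in
    # log-odds space; this is the belief both groups should converge on.
    pooled = prob(log_odds(prior) + evidence_a + evidence_b)
    print(f"Pooled posterior: {pooled:.3f}")  # ~0.632 with these numbers

The point of the sketch is just that the agreed-upon answer generally lands somewhere neither group started, rather than one group simply capitulating to the other; real groups, of course, don't share priors or have conveniently independent evidence.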
When I wrote the earlier comment, I didn't even know that RationalWiki existed, but now that I know about them, and know that they know about LW (to the degree of having an article about LW, at least), I'm wondering what it would look like if LW consciously tried to Aumann-update as a group on the surprising content they have accumulated. Also, what would the reaction be from the editors of their wiki?