The core of my argument is: try to select as much as possible on what you care about (ability and desire to contribute to and learn from lesswrong 2.0) and as little as possible on stuff that's not so important (e.g. whether they get references to HPMOR). And do testing to work out how best to achieve this.
By intellectual community I didn't mean 'high status subculture'; I was trying to get across the idea of a community that selects on people's ability to make intellectual contributions, rather than on their fit with a culture. Science is somewhat like this, although as you say there is a culture of academic science which makes it more subculture-like. Stack Overflow might be a better example.
I'm not hoping that lesswrong 2.0 will accumulate money and prestige; I'm hoping that it will make the intellectual progress needed for solving the world's most important problems. But I think that aim would be better served if the site attracted a wide range of people who are both capable and aligned with it.
"From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen."
I think a good argument against my position is that projects need to focus quite narrowly, and it makes sense to focus on the existing community given that it's already produced good stuff.
Hopefully that's the justification the project leaders have in mind, and not that they're focusing on the current rationality community because they think there aren't many people outside it who could make valuable contributions.
"I think communities form because people discover they share a desire"
I agree with this, but would add that people can share a desire with a community and still not want to join it, because there are aspects of the community they don't like.
"Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?"
That's something I'd like to know. But I think it's important for the rationality community to try to serve these kinds of people, both because they matter for the community's goals and because they will probably have useful ideas to contribute. If the rationality community is made up largely of programmers, mathematicians, and philosophers, it will be difficult for it to solve some of the world's most important problems.
Perhaps we have different goals in mind for lesswrong 2.0. I'm thinking of it as a place to further thinking on rationality and existential risk, where the contributors are anyone who both cares about those goals and can make a good contribution. But you might have a more specific goal: a place to further thinking on rationality and existential risk, targeted specifically at the current rationality community so as to make better use of the capable people within it. If you had the second goal in mind, you'd care less about appealing to audiences outside the community.
You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building lesswrong 2.0 for the current rationality community, rather than thinking about what kinds of people you want contributing to it and learning from it, and building it for them. So it seems important to do some user interviews with people outside the community whom you'd like to join it.
On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning is the intellectual community: people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I'm concerned that lesswrong 2.0 will select for people who want to join the cultural community rather than people who want to join the intellectual community, but the intellectual community seems much more important. This distinction gives us two types of weirdness: weirdness that comes out of the community's intellectual content is important to keep (ideas such as existential risk fit here), while weirdness that comes out of the cultural community (such as references to HPMOR) seems unnecessary.
We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not on their desire to fit into a subculture. I'd like lesswrong 2.0 to be more like this: an intellectual community rather than a subculture.
Have you done user interviews and testing with people who would be valuable contributors but who are not currently in the rationalist community? I'm thinking of people who are important for existential risk and/or rationality, such as psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.
You should just test this empirically, but here are some vague ideas for how you could increase the site's credibility with these people:
What about asking your audience questions?
For example, you could ask questions aimed at:
* Seeking criticism, such as "I think section x is the weakest part, what are some alternative arguments?"
* Promoting understanding, such as "Can you think of 2 more examples of <concept I just introduced>?"
* Stimulating research, such as "I think this model can be applied to area y, does anyone have suggestions for how to do this?"
This might help get readers out of passive consumption mode and into thinking about something they could comment on. It would also make the writing more useful.