Socially helpful. For the rest, I'll claim you're making an error, although it's entirely my fault. I'm claiming there's a type of article you should be citing in your intros. Luke reads a summary of those articles and says, "Wow, do people think we're weird? We're so mainstream." I say, "Most of you think you're weird, and you probably did too until recently; it's on you to know where you're mainstream." You say, "Dummy, he's clearly interested in that since he already mentioned the first example of it."
Fair description? No, of course not, Luke has read more than a summary. He's read stuff. Mainstream stuff?
Anyway, I shouldn't claim to have good knowledge of the precise thing I mean by "mainstream" in your field, but I meant something pretty specific:
- Very recent
- Research article
- Not self or buddies
- Not too general
- Not just "classic"
- Similar methodology or aims (I mean extremely similar except in a few ways)
- High impact one way or another
What if you're too novel to come up with articles meeting all those criteria? There's an answer for that.
Probably I should be clearer.
"LessWrong should have a one-page answer for the question: What part of the established literature are you building upon and what are you doing that is novel?"
is not even close to the same as
"suggests mentioning the most popular example of an idea in an article's, um, 'history of an idea' section" — if you consider the "Oxford Handbook of Thinking and Reasoning" to be "the first example of it"
Anyway, probably different in philosophy, so I'll retract my claim. I've never seen any good introduction...
I've spent so much time in the cogsci literature that I know the LW approach to rationality is basically the mainstream cogsci approach to rationality (plus some extra stuff about, e.g., language), but... do other people not know this? Do people one step removed from LessWrong — say, in the 'atheist' and 'skeptic' communities — not know this? If this is causing credibility problems in our broader community, it'd be relatively easy to show people that Less Wrong is not, in fact, a "fringe" approach to rationality.
For example, here's Oaksford & Chater in the second chapter to the (excellent) new Oxford Handbook of Thinking and Reasoning, the one on normative systems of rationality:
Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.
From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.
They go on to clarify that by probability they mean Bayesian probability theory, and by rational choice theory they mean Bayesian decision theory. You'll get the same account in the textbooks on the cogsci of rationality, e.g. Thinking and Deciding or Rational Choice in an Uncertain World.