I've spent so much time in the cogsci literature that I know the LW approach to rationality is basically the mainstream cogsci approach to rationality (plus some extra stuff about, e.g., language), but... do other people not know this? Do people one step removed from LessWrong — say, in the 'atheist' and 'skeptic' communities — not know this? If this is causing credibility problems in our broader community, it'd be relatively easy to show people that Less Wrong is not, in fact, a "fringe" approach to rationality.
For example, here's Oaksford & Chater in the second chapter of the (excellent) new Oxford Handbook of Thinking and Reasoning, the one on normative systems of rationality:
Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.
From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.
They go on to clarify that by probability they mean Bayesian probability theory, and by rational choice theory they mean Bayesian decision theory. You'll get the same account in textbooks on the cogsci of rationality, e.g. Thinking and Deciding or Rational Choice in an Uncertain World.
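To make the "consistency" framing concrete, here's a minimal sketch (my own illustration, not anything from the handbook) of what these two normative checks amount to: probability theory asks whether a set of degrees of belief is coherent, and Bayesian decision theory asks whether a choice is consistent with those beliefs and one's utilities. The specific numbers and function names are just made up for the example.

```python
def is_coherent(p_a: float, p_not_a: float, p_a_and_b: float) -> bool:
    """Check a few probabilistic coherence constraints on degrees of belief."""
    eps = 1e-9
    return (
        abs(p_a + p_not_a - 1.0) < eps   # belief in A and in not-A must sum to 1
        and 0.0 <= p_a <= 1.0
        and p_a_and_b <= p_a + eps       # a conjunction can't be more probable than a conjunct
    )

def best_action(beliefs: dict, utilities: dict) -> str:
    """Return the action that maximizes expected utility given degrees of belief.

    beliefs:   {state: probability}
    utilities: {action: {state: utility}}
    """
    def expected_utility(action: str) -> float:
        return sum(beliefs[s] * utilities[action][s] for s in beliefs)
    return max(utilities, key=expected_utility)

# Neither function says what you *should* believe or value; they only flag
# incompatibilities between beliefs, values, and choices.
beliefs = {"rain": 0.3, "no_rain": 0.7}
utilities = {
    "take_umbrella":  {"rain": 0.0,   "no_rain": -1.0},
    "leave_umbrella": {"rain": -10.0, "no_rain": 0.0},
}
print(is_coherent(0.3, 0.7, 0.1))        # True: these degrees of belief are coherent
print(best_action(beliefs, utilities))   # "take_umbrella" has the higher expected utility
```

On Oaksford & Chater's view, an agent with these beliefs isn't irrational for disliking umbrellas; she's only irrational if, say, she assigns P(rain) = 0.3 and also acts as if rain were nearly certain.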
That depends a lot on what "Less Wrong rationality" is understood to denote.
There's a lot of stuff here I recognized as mainstream cogsci when I read it.
There's other stuff that I don't consider mainstream cogsci (e.g., cryonics advocacy, MWI advocacy, confident predictions of FOOMing AI). And there's stuff that drifts in between (e.g., the meta-ethics is embedded in a fairly conventional framework but comes to conclusions that are not clearly conventional... though at times this seems more a fact about presentation than content).
I can accept the idea that some of that stuff is central to "LW rationality" and some of it isn't, but it's not at all obvious where one would draw the line.
The material on local rational behaviour looks and feels fine and can be considered "LW rationality" in the meta sense; the comparison to cogsci seems to cover exactly these parts.
I am not sure cogsci ever says anything about large-scale aggregate consequentialism (to beat the undead horse: I doubt any science can do much with dust specks and torture...).
And some applications of rationality (like trying to describe FOOM) seem to be too prior-dependent.
So no, it's not the methodology of rational decision-making that is the problem.