Your title, "LessWrong usually assumes normativism about rationality; Elqayam & Evans (2011) argue against it" makes it sound like the authors disagree with LW. I don't think they do. They're pointing out some methodological problems with psychological research that involves measuring people's actual cognitive processes against norms of rationality, which have little to do with our use of normativism (i.e., using normative rationality to improve people's reasoning and decision making).
In the conclusion they specifically disclaim that they're arguing against our kind of normativism:
It is not our purpose to exclude normativism entirely from scientific endeavor. There is a need for research in education, planning, policy development and so on, in all of which norms play a crucial role. The Meliorist position is a strong case in point, both the version advocated so powerfully by the individual differences research program of Stanovich and West (2000; Stanovich, 1999; 2004; 2009b), and the version put forward by Baron (e.g., 2008). Such authors wish to find ways to improve people’s reasoning and decision-making and therefore require some standard definition of what it means to be rational.
I think they are also not saying that human thinking and decisions can't be measured against normative models. My understanding is that they are suggesting that doing so makes it easy for several fallacies and biases to sneak into one's research, so it's a bad idea in practice for someone trying to find out how humans actually think.
From this description, they are cautioning against treating the is-brain as the should-brain plus a diff.
The paper was better than I expected. Part of that is that I misunderstood what was meant by "normativism" - they actually excluded instrumental rationality, defined as "Behaving in such a way as to achieve one’s personal goals."
If we pull the now-possibly-standard LW trick of defining "ought" as "me::ought," suddenly we're all talking about instrumental rationality. There is some trouble because the procedure for extracting preferences from human brains is underdetermined, but that at least moves the normativism up to a more "meta" level.
I really like the distinction they draw between 'empirical logicism' (believing that thinking reflects some internalised form of classical logic) and 'prescriptive logicism' (believing that thinking should be measured against logic and evaluated on how closely it conforms). That's not to say whether they have a point (I haven't read far enough to decide), but that distinction is going to be really useful in explaining parts of rationality - "I don't think human brains work this way; I think they should, though".
I can't finish this paper since it seems fairly confused. I'd just point out that the paper's arguments, being motivated by "what's necessary for research", are irrelevant. It doesn't particularly matter that researchers have a normative system, so long as they don't have a preconception about how closely people adhere to it. For example, if my normative system damns witches and harlots, I don't really have a problem doing research: I can know how many witches and harlots there are while also thinking they're bad people. In fact, I might think society is very much in trouble because of their number; so long as I don't engage in wishful thinking about social composition, this fact changes nothing.
Secondly, the argument that we can't arbitrate between normative standards is silly. Since all of them have to be implemented physically and have calculable products, we can always guarantee that rationality is at least a meta-standard via a simulation argument.
Frontiers in Psychology: Cognitive Science has a special issue which extends some of these arguments: http://journal.frontiersin.org/ResearchTopic/1185
A forthcoming edition of Behavioral and Brain Sciences will be devoted to Elqayam & Evans' (2011) critique of normativism about rationality and brief responses to it.
Abstract: