treat moral judgments as beliefs, make is-ought mistakes, argue against non-consequentialism
A few examples could help me understand what you mean, because right now I don't have a clue.
expect morality to be describable in terms of a coherent and consistent set of rules instead of an ugly mess of evolved heuristics
I guess the goal is to simplify the mess as much as possible, but no more: to find the smallest set of rules that would generate a similar result.
Do you believe in an objective morality capable of being scientifically investigated (a la Sam Harris *or others*), or are you a moral nihilist/relativist? There seems to be some division on this point. I would have thought Less Wrong to be well in the former camp.
Edit: There seems to be some confusion. When I say "an objective morality capable of being scientifically investigated (a la Sam Harris *or others*)", I do NOT mean a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of the Good, or any such nonsense. I just mean something in reality that is mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent of and prior to any later knowledge or cognitive content - and can therefore be investigated scientifically. It is a definite "is" from which we can derive true "ought" statements relative to that "is". See drethelin's comment and my analysis of Clippy.