I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
Having read the Meta-Ethics sequence, this is my belief too. Indeed, Eliezer takes care to index the human evaluation algorithm and the Pebblesorter evaluation algorithm, calling the first "morality" and the second "pebblesorting", but he is careful to avoid talking about Eliezer-morality and MrMind-morality, or even Eliezer-yesterday-morality and Eliezer-today-morality.
Of course his aims were different, and compared to differently evolved aliens (or AIs) our morality is truly one of a kind.
But if we magnify our view of morality-space, I think it's impossible not to recognize that there are differences!
I think this state of affairs can be explained as follows: while there is a psychological unity of mankind, it concerns only very primitive aspects of our lives: the existence of joy and sadness, the importance of sex, etc.
But our innermost, basic evaluation algorithm doesn't cover every aspect of our lives, mainly because our culture poses problems too new for a genetic solution to have spread through the whole population.
Thus ad-hoc solutions, derived from culture and circumstance, step in: justice, fairness, laws, and so on. These solutions may very well vary across time and space, and, our brains being what they are, they sometimes overwrite what should have been the most primitive output.
When we talk about morality, we are usually already assuming the most primitive, basic facts about the human evaluation algorithm, and we try to argue about the finer points not covered by the genetic wiring of our brains, such as whether murder is always wrong.
In comparison with Pebblesorters or paperclipping AIs, humanity exhibits a very narrow way of evaluating reality, to the point that you can talk about a single human algorithm and call it "morality". But if you zoom in, it is clear that the bedrock of morality doesn't cover every problem that cultures naturally throw at people, and that's why you need to invent "patches" or "add-ons" to the original algorithm, in the form of moral concepts like justice, fairness, the sacredness of life, etc. Obviously, different groups of people will come up with different patches. But some add-ons were invented so long ago, and are now so widespread and ingrained in certain groups' education, that they feel as if they were part of the original primitive morality, while in fact they are not. There are also new problems that require the (sometimes urgent) invention of new patches (e.g. nuclear proliferation, genetic manipulation, birth control), and these are even more problematic and still in a state of transition today.
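To make the "algorithm + local patches" metaphor concrete, here is a toy Python sketch. It is entirely my own illustrative construction (the situation names, verdicts, and cultures are all hypothetical, not anything from the sequences): a shared base evaluator handles the primitive cases, and culture-specific patches are consulted first for situations the base doesn't cover, or even override it.

```python
# Toy illustration of "morality = shared base algorithm + cultural patches".
# All entries below are hypothetical examples, chosen only for illustration.

# Primitive judgments assumed to be (nearly) universal across humans.
BASE_ALGORITHM = {
    "gratuitous cruelty": "wrong",
    "protecting one's child": "right",
}

def evaluate(situation, cultural_patches):
    """Return a moral verdict: patches extend (and can override) the base."""
    if situation in cultural_patches:   # culture-specific add-on wins
        return cultural_patches[situation]
    return BASE_ALGORITHM.get(situation, "no verdict")  # uncovered case

# Two cultures share the base but patch a novel problem differently.
culture_a = {"genetic manipulation": "wrong"}
culture_b = {"genetic manipulation": "permissible"}

print(evaluate("gratuitous cruelty", culture_a))    # base verdict, shared
print(evaluate("genetic manipulation", culture_a))  # patched verdicts diverge
print(evaluate("genetic manipulation", culture_b))
```

The point of the sketch is just that agreement on primitive cases is compatible with divergence on patched ones, which is the state of affairs described above.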
Is this view unitary, or even realist? In my opinion, standard philosophical distinctions are too crude and simplistic to correctly categorize the view of morality as "algorithm + local patches". Maybe it needs a whole new category, something like "algorithmic theories of morality" (although the category of "synthetic ethical naturalism" comes close to capturing the concept).
My meta-ethics is basically that of Luke's Pluralistic Moral Reductionism. (UPDATE: Elaborated in my Meta-ethics FAQ.)
However, I was curious as to whether this "Pluralistic Moral Reductionism" counts as moral realism or anti-realism. Luke's essay says it depends on what I mean by "moral realism". I see moral realism as broken down into three separate axes:
There's success theory, the part that I accept, which states that moral statements like "murder is wrong" do successfully refer to something real (in this case, a particular moral standard, like utilitarianism -- "murder is wrong" refers to "murder does not maximize happiness").
There's unitary theory, which I reject, that states there is only one "true" moral standard rather than hundreds of possible ones.
And then there's absolutism theory, which I reject, that states that the one true morality is rationally binding.
I don't know how many moral realists are on LessWrong, but I have a few questions for people who accept moral realism, especially unitary theory or absolutism theory. These are "generally seeking understanding and opposing points of view" kind of questions, not stumper questions designed to disprove or anything. While I'm doing some more reading on the topic, if you're into moral realism, you could help me out by sharing your perspective.
~
Why is there only one particular morality?
This goes right to the core of unitary theory -- that there is only one true theory of morality. But I must admit I'm dumbfounded at how any one particular theory of morality could be "the one true one", except in so far as someone personally chooses that theory over others based on preferences and desires.
So why is there only one particular morality? And what is the one true theory of morality? What makes this theory the one true one rather than others? How do we know there is only one particular theory? What's inadequate about all the other candidates?
~
Where does morality come from?
This gets me a bit more background knowledge, but what is the ontology of morality? Some concepts of moral realism have an idea of a "moral realm", while others reject this as needlessly queer and spooky. But essentially, what is grounding morality? Are moral facts contingent; could morality have been different? Is it possible to make it different in the future?
~
Why should we care about (your) morality?
I see rationality as talking about what best satisfies your pre-existing desires. But it's entirely possible that morality isn't desirable by someone at all. While I hope that society is prepared to coerce them into moral behavior (either through social or legal force), I don't think that their immoral behavior is necessarily irrational. And on some accounts, morality is independent of desire but still has rational force.
How does morality get its ability to be rationally binding? If the very definition of "rationality" includes being moral, is that mere wordplay? Why should we accept this definition of rationality and not a different one?
I look forward to engaging in dialogue with some moral realists. Same with moral anti-realists, I guess. After all, if moral realism is true, I want to know.