My meta-ethics are basically that of Luke's Pluralistic Moral Reductionism. (UPDATE: Elaborated in my Meta-ethics FAQ.)
However, I was curious as to whether this "Pluralistic Moral Reductionism" counts as moral realism or anti-realism. Luke's essay says it depends on what I mean by "moral realism". I see moral realism as broken down into three separate axes:
There's success theory, the part that I accept, which states that moral statements like "murder is wrong" do successfully refer to something real (in this case, a particular moral standard, like utilitarianism -- "murder is wrong" refers to "murder does not maximize happiness").
There's unitary theory, which I reject, that states there is only one "true" moral standard rather than hundreds of possible ones.
And then there's absolutism theory, which I reject, that states that the one true morality is rationally binding.
I don't know how many moral realists are on LessWrong, but I have a few questions for people who accept moral realism, especially unitary theory or absolutism theory. These are "generally seeking understanding and opposing points of view" kind of questions, not stumper questions designed to disprove or anything. While I'm doing some more reading on the topic, if you're into moral realism, you could help me out by sharing your perspective.
~
Why is there only one particular morality?
This goes right to the core of unitary theory -- that there is only one true theory of morality. But I must admit I'm dumbfounded at how any one particular theory of morality could be "the one true one", except in so far as someone personally chooses that theory over others based on preferences and desires.
So why is there only one particular morality? And what is the one true theory of morality? What makes this theory the one true one rather than others? How do we know there is only one particular theory? What's inadequate about all the other candidates?
~
Where does morality come from?
This gets me a bit more background knowledge, but what is the ontology of morality? Some concepts of moral realism have an idea of a "moral realm", while others reject this as needlessly queer and spooky. But essentially, what is grounding morality? Are moral facts contingent; could morality have been different? Is it possible to make it different in the future?
~
Why should we care about (your) morality?
I see rationality as being about what best satisfies your pre-existing desires. But it's entirely possible that someone has no desire for morality at all. While I hope that society is prepared to coerce them into moral behavior (through social or legal force), I don't think their immoral behavior is necessarily irrational. Yet on some accounts, morality is independent of desire but still has rational force.
How does morality get its ability to be rationally binding? If the very definition of "rationality" includes being moral, is that mere wordplay? Why should we accept this definition of rationality and not a different one?
I look forward to engaging in dialogue with some moral realists. Same with moral anti-realists, I guess. After all, if moral realism is true, I want to know.
"Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics."
Indeed, conscious experience may be bounded by the size and complexity of brains or similar machinery, whether of humans, other animals, or cyborgs. In principle, though, conscious perceptions could be nearly anything, since we can theorize about brains the size of Jupiter or much larger. You get the point.
"Should I interpret this as you defining ethics as good and bad feelings?"
Almost. Not ethics, but ethical value in a direct, ultimate sense. There is also indirect value: the myriad things that can lead to direct value. And ethics comprises much more than defining value; it includes laws, decision theory, heuristics, empirical research, and many theoretical considerations. I'm aware that Eliezer has written a post on Less Wrong arguing that ethical value is not based on happiness alone. Although happiness alone is not my proposition, I find his post on the topic quite poorly developed, and not really an advisable read.
"So, do you endorse wireheading?"
This depends very much on the context. All else being equal, wireheading could be good for some people, depending on its implications. However, all else seems hardly equal in this case. People seem to have a diverse spectrum of good feelings that may not be covered by wireheading (such as love, some types of physical pleasure, good smells and tastes, and many others), and wireheading might prevent people from being functional and acting to increase ethical value in the long term, possibly negating its benefits. I do see wireheading, in the sense of artificial paradise simulations, as a possibly desirable condition in a rather distant future of ideal development and post-scarcity, though.