I don't know how to have a discussion where the answer to the question "show me how it might be" is "First of all I said [it] might be."
You didn't say "show me how [it might be]", you said "show me how [it is]"
So you already know that there are no such statements that "everybody" agrees to.
Most people who aren't moral realists still have moral intuitions; you're confusing the categorization of beliefs about the nature of morality with the actual moral instinct in people's brains. The moral instinct doesn't concern itself with whether morality is real; eyes don't concern themselves with viewing themselves; few algorithms altogether are designed to analyze themselves.
As for moral nihilists, assuming they exist, an empty moral set can indeed never be transformed into anything else via "is" statements, which is why I specified from the very beginning "every person equipped with moral instinct".
If you are able to state that you are talking about something which has no connection to the real world,
The "connection to the real world" is that the vast majority of seeming differences in human moralities seem to derive from different understandings of the world, and different expectations about the consequences. When people share agreement about the "is", they also tend to converge on the "ought", and they most definitely converge on lots of things that "oughtn't". Seemingly different morality sets get transformed to look like each other.
That's sort of like the CEV of humanity that Eliezer talks about, except that I talk about a much more limited set -- not the complete volition (which includes things like "I want to have fun"), but just the moral intuition system.
That's a "connection to the real world" that relates to the whole history of mankind, and to how beliefs and moral injunctions connect to one another; how beliefs are manipulated to produce injunctions, and how injunctions lose their power when beliefs fall away.
Now with a proper debater who didn't just seek to heap insults on people, I might discuss the nuances and details further -- whether it's only consequentialists that would get attractive moral sets, whether different species would get mostly different attractive moral sets, whether such attractive moral sets may be said to exist because anything too alien would probably not even be recognizable as morality by us, possible exceptions for deliberately-designed malicious minds, etc.
But you've just been a bloody jerk throughout this thread, a horrible horrible person who insults and insults and insults some more. So I'm done with you: feel free to have the last word.
I was surprised to see the high number of moral realists on Less Wrong, so I thought I would bring up a (probably unoriginal) point that occurred to me a while ago.
Let's say that all your thoughts either seem factual or fictional. Memories seem factual, stories seem fictional. Dreams seem factual, daydreams seem fictional (though they might seem factual if you're a compulsive fantasizer). Although the things that seem factual match up reasonably well to the things that actually are factual, this isn't the case axiomatically. If deviating from this pattern is adaptive, evolution will select for it. This could result in situations like: the rule that pieces move diagonally in checkers seems fictional, while the rule that you can't kill people seems factual, even though they're both just conventions. (Yes, the rule that you can't kill people is a very good convention, and it makes sense to have heavy default punishments for breaking it. But I don't think it's different in kind from the rule that you must move diagonally in checkers.)
I'm not an expert, but it definitely seems as though this could actually be the case. Humans are fairly conformist social animals, and it seems plausible that evolution would've selected for taking the rules seriously, even if it meant using the fact-processing system for things that were really just conventions.
Another spin on this: We could see philosophy as the discipline of measuring, collating, and making internally consistent our intuitions on various philosophical issues. Katja Grace has suggested that the measurement of philosophical intuitions may be corrupted by philosophy enthusiasts' desire to signal. Could evolutionary pressure be an additional source of corruption? Taking this idea even further, what do our intuitions amount to at all aside from a composite of evolved and encultured notions? If we're talking about a question of fact, one can overcome evolution/enculturation by improving one's model of the world, performing experiments, etc. (I was encultured to believe in God by my parents. God didn't drop proverbial bowling balls from the sky when I prayed for them, so I eventually noticed the contradiction in my model and deconverted. It wasn't trivial--there was a high degree of enculturation to overcome.) But if the question has no basis in fact, like the question of whether morals are "real", then genes and enculturation will wholly determine your answer to it. Right?
Yes, you can think about your moral intuitions, weigh them against each other, and make them internally consistent. But this is kind of like trying to add resolution back into an extremely pixelated photo--just because it's no longer obviously "wrong" doesn't guarantee that it's "right". And there's the possibility of path-dependence--the parts of the photo you try to improve first could have a very significant effect on the final product. Even if you think you're willing to discard your initial philosophical conclusions, there's still the possibility of accidentally destroying your initial intuitional data or enculturing yourself with your early results.
To avoid this possibility of path-dependence, you could carefully document your initial intuitions, pursue lots of different paths to making them consistent in parallel, and maybe even choose a "best match". But it's not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.
Currently, I disagree with what seems to be the prevailing view on Less Wrong that achieving a Really Good Consistent Match for our morality is Really Darn Important. I'm not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It's randomness all the way through either way, right? The main reason "bad" consistent matches are considered so "bad", I suspect, is that they engender cognitive dissonance (e.g., maybe my current ethics says I should hack Osama Bin Laden to death in his sleep with a knife if I get the chance, but this is an extremely bad match for my evolved/encultured intuitions, so I'd experience a ton of cognitive dissonance actually doing it). But cognitive dissonance seems to me like just another aversive experience to factor into my utility calculations.
Now that you've read this, maybe your intuition has changed and you're a moral anti-realist. But in what sense has your intuition "improved" or become more accurate?
I really have zero expertise on any of this, so if you have relevant links please share them. But also, who's to say that matters? In what sense could philosophers have "better" philosophical intuition? The only way I can think of for theirs to be "better" is if they've seen a larger part of the landscape of philosophical questions, and are therefore better equipped to build consistent philosophical models (example).