lukeprog gave a list of metaethics questions here:
What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?
Most of these questions make no sense to me. I imagine that the moral intuitions in my brain come from a special black box within it, a "morality core" whose outputs I cannot easily change. (Explaining how my "morality core" ended up a certain way is a task for evo psych, not philosophy.) Or I can be more enlightened and adopt Nesov's idea that the "morality core" doesn't exist as a unified device, only as an umbrella name for all the diverse "reasons for action" that my brain can fire. Either perspective can be implemented as a computer program pretty easily, so I don't feel there's any philosophical mystery left over. All we have is factual questions about how people's "morality cores" vary in time and from person to person, how compelling their voices are, finding patterns in their outputs, etc. Can someone explain what problem metaethics is supposed to solve?
You said most of those questions make no sense to you, so I tried to make sense of them myself and thought I might as well write down my thoughts.
Regarding your own questions: I believe there are some genetically hard-coded intuitions about how to approach and respond to other primates. Why would we want to wrap that in confusing terminology like "moral philosophy"?
You further say that you cannot easily change those intuitions. That is correct, but do we want to change them? Does it even make sense to ask whether we would want different intuitions?
I think that when we face conflicting preferences, we don't want to change or discard the lower-weight ones; we simply ignore them temporarily.
We are humans and that means that we are inconsistent agents without stable utility functions. Do we want to change that?
Further, I don't think our "morality core" is all that important. We are highly adaptable and easily catch culturally induced memes that can override most of our "morality core". Just see how many people here claim that they would kill the fat guy when faced with the trolley problem. That is a case where high-level cognition and cultural and academic "memes" hijack what you call our "morality core", in an attempt to resolve conflicting preferences by favoring the one we assign the most weight.
I have no idea, I upvoted your post because I have the same question.
Why, to disguise it of course.