Below is a sketch of a moral anti-realist position based on the map-territory distinction, Hume and studies of psychopaths. Hopefully it is productive.
The Map is Not the Territory Reviewed
Consider the founding metaphor of Less Wrong: the map-territory distinction. Beliefs are to reality as maps are to territory. As the wiki says:
Since our predictions don't always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called "belief", the second thingy "reality".
Of course the map is not the territory.
Here is Albert Einstein making much the same analogy:
Physical concepts are free creations of the human mind and are not, however it may seem, uniquely determined by the external world. In our endeavor to understand reality we are somewhat like a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and cannot even imagine the possibility or the meaning of such a comparison. But he certainly believes that, as his knowledge increases, his picture of reality will become simpler and simpler and will explain a wider and wider range of his sensuous impressions. He may also believe in the existence of the ideal limit of knowledge and that it is approached by the human mind. He may call this ideal limit the objective truth.
The above notions about beliefs involve pictorial analogs, but we can also imagine other ways the same information could be encoded. If the ideal map is rendered as a set of sentences, we can define a 'fact' as any sentence in the ideal map (IM). The moral realist position can then be stated as follows:
Moral Realism: ∃x((x ⊂ IM) & (x = M)), where M is the set of sentences that completely characterizes morality.
In English: there is some set of sentences x such that all the sentences are part of the ideal map and x provides a complete account of morality.
Moral anti-realism simply negates the above: ¬∃x((x ⊂ IM) & (x = M)).
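To make the definitions concrete, here is a minimal sketch in Python, treating the ideal map as a finite set of sentence strings. This is obviously a toy: the example sentences and the names IDEAL_MAP, moral_realism_holds, and moral_anti_realism_holds are illustrative assumptions, not part of the position itself.

```python
# Toy formalization of the definitions above. Treating the ideal map as a
# finite set of sentence strings is a drastic simplification; the example
# sentences are placeholders.

IDEAL_MAP = {
    "water is H2O",
    "the cat is on the mat",
    "Jane's amygdala activates when she sees the cat harmed",
}

def moral_realism_holds(ideal_map: set, morality: set) -> bool:
    """Moral realism: the complete set of moral sentences is contained in the ideal map."""
    return morality <= ideal_map  # every moral 'fact' would be a fact simpliciter

def moral_anti_realism_holds(ideal_map: set, morality: set) -> bool:
    """Moral anti-realism is simply the negation."""
    return not moral_realism_holds(ideal_map, morality)
```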
Now it might seem that, as long as our concept of morality doesn't require the existence of entities like non-natural gods, which don't appear to figure into an ideal map, moral realism must be true (where else but the territory could morality be?). The problem of ethics, then, is chiefly one of finding a satisfactory reduction of moral language into sentences we are confident of finding in the IM. Moreover, the 'folk' meta-ethics certainly seems to be a realist one. People routinely use moral predicates and speak of having moral beliefs: "Stealing that money was wrong", "I believe abortion is immoral", "Hitler was a bad person". In other words, in the maps people *actually have right now*, a moral code seems to exist.
Beliefs vs. Preferences
But we don't think talking about belief networks is sufficient for modeling an agent's behavior. To predict what other agents will do we need to know both their beliefs and their preferences (or call them goals, desires, affect or utility function). And when we're making our own choices we don't think we're responding merely to beliefs about the external world. Rather, it seems like we're also responding to an internal algorithm that helps us decide between actions according to various criteria, many of which reference the external world.
The distinction between belief function and utility function shouldn't be new to anyone here. I bring it up because the queer thing about moral statements is that they seem to be self-motivating. They're not merely descriptive; they're prescriptive. So we have good reason to think that they call our utility function. One way of phrasing a moral non-cognitivist position is to say that moral statements are properly thought of as expressions of an individual's utility function rather than sentences describing the world.
Note that 'expressions of an individual's utility function' is not the same as 'sentences describing an individual's utility function'. The latter is something like 'I prefer chocolate to vanilla'; the former is something like 'Mmmm, chocolate!'. It's how the utility function feels from the inside. And the way a utility function feels from the inside appears to be, or at least to involve, emotion.
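A minimal code sketch of the distinction, with options and numbers made up purely for illustration: a description of a preference is just another sentence in the belief set, while the preference itself is the machinery that actually ranks options.

```python
# Sketch only: the options and utilities below are invented for illustration.

beliefs = {"snow is white", "I prefer chocolate to vanilla"}  # sentences in the map

def utility(option: str) -> float:
    # The utility function itself: not a sentence about the agent, but the
    # machinery that actually ranks options when it is called.
    return {"chocolate": 1.0, "vanilla": 0.3}[option]

choice = max(["chocolate", "vanilla"], key=utility)  # this, not the belief, drives action
```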
Projectivism and Psychopathy
That our brains might routinely turn expressions of our utility function into properties of the external world shouldn't be surprising. This was essentially Hume's position. From the Stanford Encyclopedia of Philosophy:
Projectivism is best thought of as a causal account of moral experience. Consider a straightforward, observation-based moral judgment: Jane sees two youths hurting a cat and thinks “That is impermissible.” The causal story begins with a real event in the world: two youths performing actions, a suffering cat, etc. Then there is Jane's sensory perception of this event (she sees the youths, hears the cat's howls, etc.). Jane may form certain inferential beliefs concerning, say, the youths' intentions, the cat's pain, etc. All this prompts in Jane an emotion: She disapproves (say). She then “projects” this emotion onto her experience of the world, which results in her judging the action to be impermissible. In David Hume's words: “taste [as opposed to reason] has a productive faculty, and gilding and staining all natural objects with the colours, borrowed from internal sentiment, raises in a manner a new creation” (Hume [1751] 1983: 88). Here, impermissibility is the “new creation.” This is not to say that Jane “sees” the action to instantiate impermissibility in the same way as she sees the cat to instantiate brownness; but she judges the world to contain a certain quality, and her doing so is not the product of her tracking a real feature of the world, but is, rather, prompted by an emotional experience.
This account has a surface plausibility. Moreover, it has substantial support in the psychological literature. In particular, the behavior of psychopaths closely matches what we would expect if the projectivist thesis were true. The distinctive neurobiological feature of psychopathy is impaired function of the amygdala, a structure mainly associated with emotional processing and memory. Obviously, as a group psychopaths tend toward moral deficiency. But more importantly, psychopaths fail to make the normal human distinction between morality and convention. Thus a plausible account of moral judgment is that it requires both social convention and an emotional reaction. See the work of Shaun Nichols, in particular this for an extended discussion of the implications of psychopathy for metaethics, and his book for a broader, empirically informed account of sentimentalist morality. Auditory learners might benefit from this bloggingheads he did.
If the projectivist account is right, the difference between non-cognitivism and error theory is essentially one of emphasis. If, on the above account, you want to call moral judgments beliefs, then you are an error theorist. If you think they're a kind of pseudo-belief, then you're a non-cognitivist.
But utility functions are part of the territory described by the map!
Modeling reality has a recursive element which tends to generate considerable confusion over multiple domains. The issue is that somewhere in any good map of the territory will be a description of the agent doing the mapping. So agents end up with beliefs about what they believe and beliefs about what they desire. Thus, we might think there could be a set of sentences in IM that make up our morality so long as some of those sentences describe our utility function. That is, the motivational aspect of morality can be accounted for by including in the reduction both a) a sentence which describes what conditions are to be preferred to others and b) a statement which says that the agent prefers such conditions.
The problem is, our morality doesn't seem completely responsive to hypothetical and counterfactual shifts in what our utility function is. That is, *if* I thought causing suffering in others was something I should do, and I got good feelings from doing it, that *wouldn't* make causing suffering moral (though Sadist Jack might think it was). In other words, changing one's morality function isn't a way to change what is moral (perhaps this judgment is uncommon; we should test it).
This does not mean the morality subroutine of your utility function isn't responsive to changes in other parts of the utility function. If you think fulfilling your own non-moral desires is a moral good, then which actions are moral will depend on how your non-moral desires change. But hypothetical changes in our morality subroutine don't change our moral judgments about our actions in the hypothetical. This is because when we make moral judgments we *don't* look at our map of the world to find out what our morality says; rather, we have an emotional reaction to a set of facts, and that emotional reaction generates the moral belief. Below is a diagram that somewhat messily describes what I'm talking about.
On the left we have the external world, which generates the sensory inputs our agent uses to form beliefs. Those beliefs are then input into the utility function, a subroutine of which is morality. The utility function outputs the action the agent chooses. On the right we have zoomed in on the green Map circle from the left. Here we see that the map includes moral 'beliefs' (note that this isn't an ideal map) which have been projected from the morality subroutine in the utility function. Then we have, also within the map, the agent's self-representation, which in turn includes her algorithms and mental states. Note that altering the morality of the self-representation won't change the output of the morality subroutine at the first level of the model. Of course, in an ideal map the self-representation would match the first level, but that doesn't change the causal or phenomenal story of how moral judgments are made.
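Here is a rough code sketch of the structure the diagram depicts. The class name, the fields, and the particular trigger condition are illustrative assumptions; the point is only the causal wiring: judgments are produced by the first-level subroutine and projected onto the map, so editing the morality in the self-representation changes nothing downstream.

```python
# Rough sketch of the two-level model; all names and conditions are illustrative.

class ProjectingAgent:
    def __init__(self):
        # Level 1: the subroutine that actually runs when facts come in.
        self.morality = lambda facts: ("impermissible"
                                       if "the cat is being hurt" in facts
                                       else "permissible")
        # The map, containing projected moral 'beliefs' and a self-representation.
        self.map = {
            "moral_beliefs": {},                      # projected, not read off the world
            "self_representation": {"morality": "disapproves of cruelty"},
        }

    def judge(self, facts):
        verdict = self.morality(facts)                # emotional reaction at level 1...
        self.map["moral_beliefs"][frozenset(facts)] = verdict   # ...projected onto the map
        return verdict

agent = ProjectingAgent()
print(agent.judge({"the cat is being hurt"}))         # 'impermissible'

# Editing the self-representation leaves the level-1 subroutine untouched:
agent.map["self_representation"]["morality"] = "approves of cruelty"
print(agent.judge({"the cat is being hurt"}))         # still 'impermissible'
```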
Observe how easy it is to make category errors if this model is accurate. Since we're projecting our moral subroutine onto our map, and we're depicting ourselves in the map, it is very easy to think that morality is something we're learning about from the external world (if not from sensory input then from a priori reflection!). Of course, morality is in the external world in a meaningful sense, since our brains are in the external world. But learning what is in our brains is not motivating in the way moral judgments are supposed to be. This diagram explains why: the facts about our moral code in our self-representation are not directly connected to the choice circuits which cause us to perform actions. Simply stating what our brains are like will not activate our utility function, and so the expressive content of moral language will be left out. This is Hume's is-ought distinction: 'ought' sentences can't be derived from 'is' sentences because 'ought' sentences involve the activation of the utility function at the first level of the diagram, whereas 'is' sentences are exclusively part of the map.
And of course since agents can have different morality functions there are no universally compelling arguments.
The above is the anti-realist position given in terms I think Less Wrong is comfortable with. It has the following things in its favor: it does not posit any 'queer' moral properties as having objective existence, and it fully accounts for the motivating, prescriptive aspect of moral language. It is psychologically sound, albeit simplistic. It is naturalistic while providing an account of why meta-ethics is so confusing to begin with. It explains our naive moral realism and dissolves the difference between the two prominent anti-realist camps.
Compute CEV. Then actually learn and become the better person that was modeled in order to compute the CEV. See if you prefer the CEV or any other possible utility function.
Asymptotic estimates could also be made, if and only if utility-function space is continuous and can be mapped by similarity: if, as you learn more true things (drawn in a random sample and ordering from all possible true things you could learn), gain more brain computing power, and gain a better capacity for self-reflection, your preferences tend towards the CEV-predicted preferences, then CEV is almost certainly true.
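For what it's worth, here is a very rough sketch of what such a convergence test might look like in code. Everything in it is a placeholder assumption: update_preferences, distance, and cev_prediction stand in for machinery we do not actually have, and the 0.95 threshold for "almost certainly" is arbitrary.

```python
import random

def converges_toward_cev(initial_prefs, all_truths, cev_prediction,
                         update_preferences, distance,
                         trials=100, steps=50):
    """Monte Carlo sketch: over random samples/orderings of learnable truths,
    do the agent's preferences tend to move toward the CEV-predicted ones?"""
    improving = 0
    for _ in range(trials):
        truths = random.sample(list(all_truths), k=min(steps, len(all_truths)))
        prefs = initial_prefs
        start = distance(prefs, cev_prediction)
        for t in truths:
            # update_preferences is also meant to stand in for gains in
            # computing power and capacity for self-reflection, not just knowledge.
            prefs = update_preferences(prefs, t)
        if distance(prefs, cev_prediction) < start:
            improving += 1
    return improving / trials > 0.95   # arbitrary stand-in for "almost certainly"
```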
D(x) and U(y) make opposite recommendations. x and y are different intuitions from different people, and these intuitions may or may not match the actual morality functions inside the brains of their proponents.
I can find no measure of which recommendation is "correct" other than inside my own brain somewhere. This directly implies that it is "correct for Frank's Brain", not "correct universally" or "correct across all humans".
Based on this reasoning, if I use my moral intuition to reason about the fat-man trolley problem using D() and find the conclusion correct within context, then D is correct for me, and the same goes for U(). So let's try it!
My primary deontological rule: when, among the counterfactual possible futures, there is one whose expected number of deaths is lower than that of all the others, always take the course of action which leads to that lowest-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death counts as killing.)
A train is going to hit five people. There is a fat man whom I can push down to save the five people with 90% probability. (Let's just assume I'm really good at quickly estimating this kind of physics within this thought experiment.)
If I don't push the fat man, the 5 people die with 99% probability (shit happens). If I push the fat man, 1 person dies with 99% probability (shit happens), and the 5 others still die with 10% probability.
Expected deaths of not pushing: 5 × 0.99 = 4.95.
Expected deaths of pushing: 1 × 0.99 + 5 × 0.10 = 1.49.
I apply the deontological rule. That fat man is doomed.
Now let's try the utilitarian vers-- Oh wait. That's already what we did. We created a deontological rule that says to pick the highest expected utility action, and that's also what utilitarianism tells me to do.
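Here is a quick check of the arithmetic and of the equivalence claim, using the probabilities stipulated above (the dictionary layout and variable names are just one way to write it down):

```python
# Expected deaths under the stipulated probabilities.
actions = {
    "don't push": 5 * 0.99,              # the five die with 99% probability
    "push":       1 * 0.99 + 5 * 0.10,   # the fat man with 99%, the five still with 10%
}
print(actions)   # expected deaths: ~4.95 vs ~1.49

# My deontological rule: take the action with the fewest expected deaths.
deontic_choice = min(actions, key=actions.get)

# The utilitarian rule: maximize expected utility, with utility = -deaths.
utilitarian_choice = max(actions, key=lambda a: -actions[a])

assert deontic_choice == utilitarian_choice == "push"
```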
See what I mean when I say there is no meaningful distinction? If you calibrate your rules consistently, all "moral theories" I see philosophers arguing about produce the same output. Equal output, in fact.
So to return to the earlier point: D(trolley, Frank's Rule) is correct, where trolley is the problem and Frank's Rule is the rule I find most moral. U(trolley, Frank's Utility Function) is also correct. D(trolley, ARBITRARY RULE PICKED AT RANDOM FROM ALL POSSIBLE RULES) is incorrect for me. Likewise, U(trolley, ARBITRARY UTILITY FUNCTION PICKED AT RANDOM FROM ALL POSSIBLE UTILITY FUNCTIONS) is incorrect for me.
This means that U(trolley) and D(trolley) cannot be "correct" or "incorrect", because in typical Functional Programming fashion, U(trolley) and D(trolley) return curried functions; that is, they return a function which takes a rule (for D) or a utility function (for U) and returns a recommendation for the trolley problem based on it.
To reiterate some previous claims of mine, in which I am fairly confident, in the above jargon: there do not exist any single-parameter U(x) or D(x) functions that return a single truth-valuable recommendation without a rule or a utility function as input. All deontological systems rely on the rules supplied to them, and all utilitarian systems rely on the utility functions supplied to them. There exists a utility function equivalent to each possible rule, and there exists a rule equivalent to each possible utility function.
The rules or the utility functions are inside human brains. And either can be expressed in terms of the other interchangeably: which we use is merely a matter of convenience, as one will correspond to the brain's algorithm more easily than the other.
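As a sketch of the currying point, with toy stand-ins for my rule and my utility function (the type aliases, the helper names D and U, and the particular numbers are assumptions for illustration only):

```python
from typing import Callable, Dict

Problem = Dict[str, float]          # e.g. action -> expected deaths
Recommendation = str

def D(problem: Problem) -> Callable[[Callable[[Problem], Recommendation]], Recommendation]:
    """D(problem) is not truth-valuable by itself; it still awaits a rule."""
    return lambda rule: rule(problem)

def U(problem: Problem) -> Callable[[Callable[[str, Problem], float]], Recommendation]:
    """U(problem) likewise still awaits a utility function."""
    return lambda utility: max(problem, key=lambda a: utility(a, problem))

trolley = {"don't push": 4.95, "push": 1.49}

franks_rule = lambda p: min(p, key=p.get)      # "minimize expected deaths"
franks_utility = lambda a, p: -p[a]            # the equivalent utility function

# With equivalent inputs, the curried functions agree on a recommendation.
assert D(trolley)(franks_rule) == U(trolley)(franks_utility) == "push"
```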
I suspect that defining deontology as obeying the single rule "maximize utility" would be a non-central redefinition of the term, something most deontologists would find unacceptable.