The post heavily relies on moral internalism without arguing for it. Internalism holds that a necessary connection exists between sincere moral judgment and motivation. As the post says, "moral statements [...] seem to be self-motivating." I've never seen a deeply plausible argument for internalism, and I'm pretty sure it's false. The ability of many psychopaths to use moral language in a normal way, and in some cases to agree that they've done evil and assert that they just don't care, would seem to refute it.
Upvoted for giving a clear statement of an anti-realist view.
Fantastic post. It goes a long way toward dissolving the question.
On the left we have the external world which generates the sensory inputs our agent uses to form beliefs.
Rhetorical question one: how is the singular term "agent" justified when there is a different configuration of molecules in the space the "agent" occupies from moment to moment? Wouldn't "agents" be better? What if the agent gets hit by a non-fatal brain-altering gamma ray burst or something? There's no natural quantitative point to say we have "an agent"...
So far, if I understand all of the content of this post correctly, this seems like a much more elegant and well-written account of my own beliefs about morality than my previous clumsy attempt at it.
For the definition of "moral" that includes how people tend to use the term, this seems about right. However, the word "morality" is used in many different ways. For example, the "morality" I think about when I am legitimately wondering what action I should take - and not letting just an emotional reaction guide my actions - is in the ideal map (it's my preferences).
Well done. Here's a way to bridge the is-ought distinction: It's possible that investigating our map of our morality — that is, the littlest blue circle in your diagram — will yield a moral argument that we find compelling.
∃x(x ∈ IM) & (x = M)
Shouldn't x be a subset of IM rather than an element?
Also, do you somewhere define what the ideal map is?
Here's what a moral realist might say:
The 'morality' module within the utility function is pretty similar across all humans.
Given that our evolved morality is in part used to solve cooperation and other game theoretic problems, a rational psychopath might want to self-modify to care about 'morality'.
I see a few broken image links, eg in "Moral Realism: x(x IM) & (x = M)" there is a broken image graphic.
Below is a sketch of a moral anti-realist position based on the map-territory distinction, Hume, and studies of psychopaths. Hopefully it is productive.
The Map is Not the Territory Reviewed
Consider the founding metaphor of Less Wrong: the map-territory distinction. Beliefs are to reality as maps are to territory. As the wiki says:
Of course the map is not the territory.
Albert Einstein made much the same analogy.
The above notions about beliefs involve pictorial analogs, but we can also imagine other ways the same information could be contained. If the ideal map is turned into a series of sentences, we can define a 'fact' as any sentence in the ideal map (IM), the map that corresponds perfectly to the territory. The moral realist position can then be stated as follows:
Moral Realism: ∃x((x ⊆ IM) & (x = M))
In English: there is some set of sentences x such that every sentence in x is part of the ideal map, and x provides a complete account of morality (M).
Moral anti-realism simply negates the above: ¬∃x((x ⊆ IM) & (x = M)).
Now it might seem that, as long as our concept of morality doesn't require the existence of entities like non-natural gods, which don't appear to figure in an ideal map, moral realism must be true (where else but the territory could morality be?). The problem of ethics, then, is chiefly one of finding a satisfactory reduction of moral language into sentences we are confident of finding in the IM. Moreover, the 'folk' meta-ethics certainly seems to be a realist one. People routinely use moral predicates and speak of having moral beliefs: "Stealing that money was wrong", "I believe abortion is immoral", "Hitler was a bad person". In other words, in the maps people *actually have right now*, a moral code seems to exist.
Beliefs vs. Preferences
But we don't think talking about belief networks is sufficient for modeling an agent's behavior. To predict what other agents will do, we need to know both their beliefs and their preferences (call them goals, desires, affect, or a utility function). And when we're making our own choices, we don't think we're responding merely to beliefs about the external world. Rather, it seems like we're also responding to an internal algorithm that helps us decide between actions according to various criteria, many of which reference the external world.
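To make the split concrete, here is a minimal sketch in Python (mine, with names invented for illustration, not anything from the literature): beliefs map actions to expected outcomes, the utility function scores outcomes, and predicting choice requires both.

```python
# A minimal sketch of the belief/preference split (all names invented
# for illustration). Beliefs map actions to believed outcomes; the
# utility function scores outcomes; predicting choice requires both.

def choose(actions, beliefs, utility):
    """Pick the action whose believed outcome the agent scores highest."""
    return max(actions, key=lambda a: utility(beliefs[a]))

beliefs = {"steal": "rich, wronged someone",
           "work":  "modest pay, wronged no one"}

# Same beliefs, different utility functions, different behavior:
saint = lambda outcome: -10 if "wronged someone" in outcome else 5
crook = lambda outcome: 10 if "rich" in outcome else 5

print(choose(["steal", "work"], beliefs, saint))  # -> work
print(choose(["steal", "work"], beliefs, crook))  # -> steal
```

Two agents with identical beliefs but different utility functions act differently, so beliefs alone don't fix behavior.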
The distinction between belief function and utility function shouldn't be new to anyone here. I bring it up because the queer thing about moral statements is that they seem to be self-motivating. They're not merely descriptive; they're prescriptive. So we have good reason to think that they call our utility function. One way of phrasing a moral non-cognitivist position is to say that moral statements are properly thought of as expressions of an individual's utility function rather than sentences describing the world.
Note that 'expressions of an individual's utility function' is not the same as 'sentences describing an individual's utility function'. The latter is something like 'I prefer chocolate to vanilla'; the former is something like 'Mmmm, chocolate!'. It's how the utility function feels from the inside. And the way a utility function feels from the inside appears to be, or at least to involve, emotion.
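A toy sketch of the contrast (again with invented names): the description is a sentence that can sit inertly in the map, while the expression is the function actually running and steering choice.

```python
# Illustrative sketch (invented names). The *description* is a sentence
# about the utility function; the *expression* is the function firing.

def utility(flavor):
    return {"chocolate": 10, "vanilla": 3}[flavor]

# Description: a sentence in the map *about* the utility function.
# It can be stored, quoted, or even disbelieved without moving the agent.
sentence_in_map = "this agent prefers chocolate to vanilla"

# Expression: the function actually running and driving the choice --
# the 'Mmmm, chocolate!' event, as it feels from the inside.
choice = max(["chocolate", "vanilla"], key=utility)
print(choice)  # -> chocolate
```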
Projectivism and Psychopathy
That our brains might routinely turn expressions of our utility function into properties of the external world shouldn't be surprising. This was essentially Hume's position; see the Stanford Encyclopedia of Philosophy.
This account has a surface plausibility. Moreover, it has substantial support in the psychological literature. In particular, the behavior of psychopaths closely matches what we would expect if the projectivist thesis were true. The distinctive neurobiological feature of psychopathy is impaired function of the amygdala, a region mainly associated with emotional processing and memory. Obviously, as a group, psychopaths tend toward moral deficiency. But more importantly, psychopaths fail to make the normal human distinction between morality and convention. Thus a plausible account of moral judgment is that it requires both a social convention and an emotional reaction. See the work of Shaun Nichols, in particular this for an extended discussion of the implications of psychopathy for metaethics and his book for a broader, empirically informed account of sentimentalist morality. Auditory learners might benefit from this bloggingheads he did.
If the projectivist account is right, the difference between non-cognitivism and error theory is essentially one of emphasis. If, on the above account, you still want to call moral judgments beliefs, then you are an error theorist. If you think they're a kind of pseudo-belief, then you're a non-cognitivist.
But utility functions are part of the territory described by the map!
Modeling reality has a recursive element which tends to generate considerable confusion across multiple domains. The issue is that somewhere in any good map of the territory will be a description of the agent doing the mapping. So agents end up with beliefs about what they believe and beliefs about what they desire. Thus, we might think there could be a set of sentences in the IM that make up our morality, so long as some of those sentences describe our utility function. That is, the motivational aspect of morality can be accounted for by including in the reduction both (a) sentences describing which conditions are to be preferred to others and (b) a sentence stating that the agent prefers those conditions.
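As a sketch of that recursion (the structure is invented for illustration), the agent's map can contain, alongside its other facts, a description of the agent's own beliefs and utility function:

```python
# Sketch of the recursive element (structure invented for illustration):
# a good map contains a description of the agent doing the mapping.

def utility(state):
    """The agent's actual preference algorithm."""
    return -state.count("suffering")

agent_map = {
    "world": {"sky": "blue"},
    "self": {                                   # the self-representation
        "beliefs": ["the sky is blue"],
        "desires": ["avoid causing suffering"], # a *description* of utility
    },
}

# An ideal map would keep the "self" entry perfectly in sync with the
# actual utility function above -- but it remains a description, one
# more set of sentences alongside the rest of the map.
```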
The problem is, our morality doesn't seem fully responsive to hypothetical and counterfactual shifts in our utility function. That is, *if* I thought causing suffering in others was something I should do, and I got good feelings from doing it, that *wouldn't* make causing suffering moral (though Sadist Jack might think it was). In other words, changing one's morality function isn't a way to change what is moral (perhaps this judgment is uncommon; we should test it).
This does not mean the morality subroutine of your utility function isn't responsive to changes in other parts of the utility function. If you think fulfilling your own non-moral desires is a moral good, then which actions are moral will depend on how your non-moral desires change. But hypothetical changes in our morality subroutine don't change our moral judgments about our actions in the hypothetical. This is because when we make moral judgments we *don't* look at our map of the world to find out what our morality says; rather, we have an emotional reaction to a set of facts, and that emotional reaction generates the moral belief. Below is a diagram that somewhat messily describes what I'm talking about.
On the left we have the external world, which generates the sensory inputs our agent uses to form beliefs. Those beliefs are then input into the utility function, a subroutine of which is morality. The utility function outputs the action the agent chooses. On the right we have zoomed in on the green Map circle from the left. Here we see that the map includes moral 'beliefs' (note that this isn't an ideal map) which have been projected from the morality subroutine in the utility function. Then we have, also within the Map, the self-representation of the agent, which in turn includes her algorithms and mental states. Note that altering the morality of the self-representation won't change the output of the morality subroutine at the first level of the model. Of course, in an ideal map the self-representation would match the first level, but that doesn't change the causal or phenomenal story of how moral judgments are made.
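Here is a rough executable rendering of the diagram's two levels (all names are mine, invented for the sketch): the level-one morality subroutine produces the judgment and projects it into the map, while the self-representation merely records a description of that subroutine.

```python
# Rough executable version of the diagram (all names invented).
# Level 1: the morality subroutine that actually runs.
def morality(situation):
    return "wrong" if "causes suffering" in situation else "permissible"

agent_map = {
    "moral beliefs": {},                                       # projected
    "self": {"morality": "disapproves of causing suffering"},  # level 2
}

def judge(situation):
    # The judgment comes from *running* the subroutine, not from
    # consulting the self-representation stored in the map...
    verdict = morality(situation)
    # ...and is then projected into the map as a moral 'belief'.
    agent_map["moral beliefs"][situation] = verdict
    return verdict

print(judge("a prank that causes suffering"))  # -> wrong

# Editing the self-representation changes nothing at level one:
agent_map["self"]["morality"] = "approves of causing suffering"
print(judge("a prank that causes suffering"))  # still -> wrong
```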
Observe how easy it is to make category errors if this model is accurate. Since we're projecting our moral subroutine onto our map, and we're depicting ourselves in the map, it is very easy to think that morality is something we're learning about from the external world (if not from sensory input, then from a priori reflection!). Of course, morality is in the external world in a meaningful sense, since our brains are in the external world. But learning what is in our brains is not motivating in the way moral judgments are supposed to be. The diagram explains why: the facts about our moral code in our self-representation are not directly connected to the choice circuits which cause us to perform actions. Simply stating what our brains are like will not activate our utility function, and so the expressive content of moral language will be left out. This is Hume's is-ought distinction: 'ought' sentences can't be derived from 'is' sentences because 'ought' sentences involve the activation of the utility function at the first level of the diagram, whereas 'is' sentences are exclusively part of the map.
And of course, since agents can have different morality functions, there are no universally compelling arguments.
The above is the anti-realist position given in terms I think Less Wrong is comfortable with. It has the following things in its favor: it does not posit any 'queer' moral properties as having objective existence, and it fully accounts for the motivating, prescriptive aspect of moral language. It is psychologically sound, albeit simplistic. It is naturalistic while providing an account of why meta-ethics is so confusing to begin with. It explains our naive moral realism and dissolves the difference between the two prominent anti-realist camps.