In response to a request, I am going to do some basic unpacking of second-order desire, or "metawanting". Basically, a second-order desire or metawant is a desire about a first-order desire.
Example 1: Suppose I am very sleepy, but I want to be alert. My desire to be alert is first-order. Suppose also that there is a can of Mountain Dew handy. I know that Mountain Dew contains caffeine and that caffeine will make me alert. However, I also know that I hate Mountain Dew.[1] I do not want the Mountain Dew, because I know it is gross. But it would be very convenient for me if I liked Mountain Dew: then I could drink it, and I could get the useful effects of the caffeine, and satisfy my desire for alertness. So I have the following instrumental belief: wanting to drink that can of Mountain Dew would let me be alert. Generally, barring other considerations, I want things that would get me other things I want - I want a job because I want money, I want money because I can use it to buy chocolate, I want chocolate because I can use it to produce pleasant taste sensations, and I just plain want pleasant taste sensations. So, because alertness is something I want, and wanting Mountain Dew would let me get it, I want to want the Mountain Dew.
This example demonstrates a case of a second-order desire about a first-order desire that would be instrumentally useful. But it's also possible to have second-order desires about first-order desires that one simply does or doesn't care to have.
Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict. Obviously, as a heroin addict, she spends a lot of her time wanting heroin. But this desire is upsetting to her. She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.
One thing that is often said is that the first-order desires you "endorse" on the second level are the ones that belong to your truest self. This seems like an appealing notion in Mimi's case; I would not want to say that in her heart she just wants heroin and that's an intrinsic, important part of her. But it's not always the case that the second-order desire is the one we most want to identify with the person who has it:
Example 3: Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual. He has been brought up to believe that homosexuality is gross and wrong. As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.
In this case, I think we're tempted to say that poor Larry is a gay guy who's had an alien second-order desire attached to him via his upbringing, not a natural homophobe whose first-order desires are insidiously eroding his real personality.
A less depressing example to round out the set:
Example 4: Suppose Olivia the Overcoming Bias Reader, whose very prescient mother predicted she would visit this site, is convinced by Eliezer's arguments about one-boxing in Newcomb's Problem. However, she's pretty sure that if Omega really turned up, boxes in hand, she would want to take both of them. She thinks this reflects an irrationality of hers. She wants to want to one-box.
[1] Carbonated beverages make my mouth hurt. I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time.
Well, I am basically asserting that morality is some sort of objective equation, or "abstract idealized dynamic," as Eliezer calls it, concerned with people's wellbeing. And I am further asserting that most human beings care very much about this concept. I think this would make the following predictions:
In a situation where a given group of humans had similar levels of empirical knowledge and a similar sanity waterline, there would be far more moral agreement among them than would be predicted by chance, and far less moral disagreement than is mentally possible.
It is physically possible to persuade people to change their moral values by reasoned argument.
Inhabitants of a society who are unusually rational and intelligent will be the first people in that society to make moral progress, as they will be better at extrapolating answers out of the "equation."
If one attempted to convert the moral computations people make into an abstract, idealized process, and to determine its results, many people would find those results at least somewhat persuasive, and might find their ethical views changed by observing them.
All of these predictions appear to be true:
Human societies tend to have a rather high level of moral agreement between their members. Conformity is not necessarily an indication of rightness; it seems fairly obvious that whole societies have held gravely mistaken moral views, such as those that believed slavery was good. However, it is interesting that all the people in those societies were mistaken in exactly the same way. That seems like evidence that they were all reasoning towards similar conclusions, and that the mistakes they made were caused by common environmental factors that affected all of them. There are other theories that explain this data, of course (peer pressure, for instance), but I still find it striking.
I've had moral arguments made by other people change my mind, and changed the minds of other people by moral argument. I'm sure you have also had this experience.
It is well known that intellectuals tend to develop and adopt new moral theories before the general populace does. Common examples of intellectuals whose moral concepts have disseminated into the general populace include John Locke, Jeremy Bentham, and William Lloyd Garrison. Many of these people's principles have since been adopted into the public consciousness.
Ethical theorists who have attempted to derive new ethical principles by working from an abstract, idealized form of ethics have often been very persuasive. To name just one example, Peter Singer ended up turning thousands of people into vegetarians with moral arguments that started on a fairly abstract level.
Asserting that those values comprise morality seems effective because, to most people, those values seem related in some way: they form the superconcept "morality." Morality is a useful catchall term for certain types of values, and it would be a shame to lose it.
Still, I suppose that asserting "I value happiness, freedom, fairness, etc" is similar enough to saying "I care about morality" that I really can't object terribly strongly if that's what you'd prefer to do.
Why does doing that bother you? Presumably, because you care about the moral concept of fairness and don't want to claim an unfair level of status for yourself and your views. But does it really make sense to say, "I care about fairness, but I want to be fair to other people who don't care about it, so I'll go ahead and let them treat people unfairly, in order to be fair"? That sounds silly, doesn't it? It has the same problems that come with being tolerant of intolerant people.
All of those predictions seem equally likely to me whether Sam is right or George is, so they don't really engage with my question at all. At this point, after several trips 'round the mulberry bush, I conclude that this is not because I'm being unclear with my question but rather because you're choosing not to answer it, so I will stop trying to clarify the question further.
If I map your predictions and observations to the closest analogues that make any sense to me at all, I basically agree with them.