In response to a request, I am going to do some basic unpacking of second-order desire, or "metawanting". Basically, a second-order desire or metawant is a desire about a first-order desire.
Example 1: Suppose I am very sleepy, but I want to be alert. My desire to be alert is first-order. Suppose also that there is a can of Mountain Dew handy. I know that Mountain Dew contains caffeine and that caffeine will make me alert. However, I also know that I hate Mountain Dew[1]. I do not want the Mountain Dew, because I know it is gross. But it would be very convenient for me if I liked Mountain Dew: then I could drink it, get the useful effects of the caffeine, and satisfy my desire for alertness. So I have the following instrumental belief: wanting to drink that can of Mountain Dew would let me be alert. Generally, barring other considerations, I want things that would get me other things I want - I want a job because I want money, I want money because I can use it to buy chocolate, I want chocolate because I can use it to produce pleasant taste sensations, and I just plain want pleasant taste sensations. So, because alertness is something I want, and wanting Mountain Dew would let me get it, I want to want the Mountain Dew.
This example demonstrates a case of a second-order desire about a first-order desire that would be instrumentally useful. But it's also possible to have second-order desires about first-order desires that one simply does or doesn't care to have.
Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict. Obviously, as a heroin addict, she spends a lot of her time wanting heroin. But this desire is upsetting to her. She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.
One thing that is often said is that the first-order desires you "endorse" at the second level are the ones that belong to your truest self. This seems like an appealing notion in Mimi's case; I would not want to say that at heart she just wants heroin and that this is an intrinsic, important part of her. But it's not always the case that the second-order desire is the one we most want to identify with the person who has it:
Example 3: Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual. He has been brought up to believe that homosexuality is gross and wrong. As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.
In this case, I think we're tempted to say that poor Larry is a gay guy who's had an alien second-order desire attached to him via his upbringing, not a natural homophobe whose first-order desires are insidiously eroding his real personality.
A less depressing example to round out the set:
Example 4: Suppose Olivia the Overcoming Bias Reader, whose very prescient mother predicted she would visit this site, is convinced by Eliezer's arguments about one-boxing in Newcomb's Problem. However, she's pretty sure that if Omega really turned up, boxes in hand, she would want to take both of them. She thinks this reflects an irrationality of hers. She wants to want to one-box.
[1] Carbonated beverages make my mouth hurt. I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time.
That seems to work very well. So the ethical weight of a factor can be made proportional to the reciprocal of that factor (perhaps with a sign change). Then, no matter how many people there are, there is a maximum value that the equation can produce.
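A minimal sketch of one way that reciprocal weighting could be written down (my own illustration of the idea, not a formula anyone in the thread committed to): give each person $i$ a happiness level $h_i > 0$ and score the world as

$$V(h_1, \dots, h_n) = \sum_{i=1}^{n} -\frac{1}{h_i}.$$

Each term is negative and tends toward zero as $h_i$ grows, so the total is bounded above no matter how many people are added, while a single $h_i$ near zero drags the total toward $-\infty$.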
So this can be used to construct an equation that rates Omelas as bad for a population of any size. But not everyone agrees that Omelas is bad in the first place; so is that necessarily an improvement to your ethical equation?
That failure mode can also be dealt with by combining equality with other factors, such as not being hurt. (The relative weightings assigned to these factors would be important, of course.)
That seems like a reasonable definition; my point is that not everyone uses the same equation.
Hmmm. You're right - that was a bad example. (I don't know if you're familiar with the Chanur series by C. J. Cherryh? I ask because my first thought for a better example came straight out of there - she does a good job of presenting alien moralities.)
Let me provide a better one. Consider Marvin and Fred. Marvin's moral system considers the total benefit to the world of every action, but he tends to weight actions in favour of himself, because he knows that in the future he will always choose to do the right thing (by his morality) and thus deserves to have ties broken in his favour.
Fred's moral system entirely discounts any benefit to himself. He knows that most people are biased towards themselves, and he does this in an attempt to reduce that bias (he goes so far as to be biased in the opposite direction).
Both of them get into a war. Both end up in the following situation:
Trapped in a bunker together with one allied soldier (a stranger, but on the same side). An enemy manages to throw a grenade in. The grenade will kill both of them unless someone leaps on top of it, in which case it will kill only that one person.
Fred leaps on top of the grenade. His morality values the life of the stranger over his own, and he thus acts to save the stranger first.
Marvin throws the stranger onto the grenade. His morality values his own life over a stranger who might, with non-trivial probability, be a truly villainous person.
Here we have two different moralities, leading to two different results, in the same situation.
That is worth keeping in mind. Of course, if such a system is found, we could feed in dozens of general situations in advance - and if we end up in a tough situation, then after resolving it one way or another we could feed that situation into the computer and find out, for future reference, which course of action was correct (that eliminates a lot of the time constraint).
That's true; the question is how often this is because people have totally different values, and how often it is that they have extremely similar "ideal equations" but different "approximations" of what they think that equation is. I think for sociopaths and other people with harmful ego-syntonic mental disorders it's probably the former, but it's more often the latter for normal people.
Eliezer has argued that it is confusing and misleading…