I suspect most cases of "wanting to want" are better described as cases of internal conflict, where one part of us wishes that there weren't other parts of us with different conflicting wants.
Particularly where one part is responsible for the "internal narrative" and the other is responsible for motivation and prioritization, because the latter usually wins out and the former complains loudest.
Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict. Obviously, as a heroin addict, she spends a lot of her time wanting heroin. But this desire is upsetting to her. She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.
Example 3: Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual. He has been brought up to believe that homosexuality is gross and wrong. As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.
I'm really bothered by my inability to see how to distinguish between these two classes of meta-wants. I suppose you just punt it off to your moral system, or your expected-value computations.
Looking at it, I think that the difference is that Larry the Closet Homosexual probably doesn't really have a second order desire to not be gay. What he has is a second order desire to Do the Right Thing, and mistakenly believes that homosexuality isn't the Right Thing. So we naturally empathize with Larry, because his conflict between his first and second order desires is unnecessary. If he knew that homosexuality wasn't wrong the conflict would disappear, not because his desires had changed, but because he had better knowledge about how to achieve them.
Mimi the Heroin Addict, by contrast, probably doesn't want to want heroin because it obstructs her from obtaining other important life goals that she genuinely wants and approves of. If we were to invent some sort of Heroin 2.0 that lacked most of heroin's negative properties (e.g., sapping the motivation to achieve your life goals, causing health problems), Mimi would probably be much less upset about wanting it.
In the interests of avoiding introducing complications into the thought experiment, I assumed that Larry was, aside from his sexual orientation, a fairly psychologically normal human who had normal human terminal goals, like an interest in sex and romantic love. I also assumed, again to avoid complications (and from clues in the story) that he probably lived, like most Less Wrong readers and writers, in a First World liberal democracy in the early 21st century.
The reasoning process I used to determine that his belief was mistaken was a consequentialist meta-ethic that produces the result "Consensual sex and romance are Good Things unless they seriously interfere with some other really important goal." I assumed that Larry, being a psychologically normal human in a tolerant country, did not have any other important goals they interfered with. He probably either mistakenly believed that a supernatural creature of immense power existed and would be offended by his homosexuality, or mistakenly believed in some logically incoherent deontological set of rules that held that desires for consensual sex and romance somehow stop being Good Things if the object of those desires is of the same sex.
Let me try using an extended metaphor to explain my point: Remember Eliezer's essay on the Pebblesorters, the aliens obsessed with sorting pebbles into prime-numbered heaps?
Let's imagine a race of Pebblesorters whose p-morality consists of sorting pebbles into prime-numbered heaps. All Pebblesorters have a second-order desire to sort pebbles into prime-numbered heaps, and to ensure that others do so as well. In addition to this, individual Pebblesorters have first-order desires that make them favor certain prime numbers over others when they are sorting.
Now let's suppose there is a population of Pebblesorters who usually favor pebble heaps consisting of 13 pebbles but occasionally a mutant is born that likes to make 11-pebble heaps best of all. However, some of the Pebblesorters who prefer 13-pebble heaps have somehow come to the erroneous conclusion that 11 isn't a prime number. Something, perhaps some weird Pebblesorter versions of pride and self-deception, makes them refuse to admit their error.
The 13-Pebble Favorers become obsessed with making sure no Pebblesorters make heaps of 11 pebbles, since 11 obviously isn't a prime number. They begin to persecute the 11-Pebble Favorers...
We should point people to this whenever they're like "What's special about Less Wrong?" and we can be like "Okay, first, guess how Less Wrong would discuss a reluctant Christian homosexual. Made the prediction? Good, now click this link."
I upvoted despite this. If you overlook that one problem, everything else is gold. That single flawed sentence does not affect the awesome of the other 14 paragraphs, as it does not contribute to the conclusion.
If there were a pill that converted homosexuals to heterosexuals, then the question of how society treats homosexuals would actually be different, and if Larry asked you to help him make the decision of whether or not to take the pill, I'm sure you could think of some things to write in the "pro" column for "take the pill" and in the "con" column for "don't take the pill."
I don't deny that there may be some good reasons to prefer to be heterosexual. For instance, imagine Larry lives in an area populated by very few homosexual and bisexual men, and moving somewhere else is prohibitively costly for some reason. If this is the case, then Larry may have a rational second-order desire to become bisexual or heterosexual, simply because doing so would make it much easier to find romantic partners.
However, I would maintain that the specific reason given in Alicorn's original post for why Larry desires not to be homosexual is that he is confused about the morality of homosexuality and is afraid he is behaving immorally, not because he has two genuine desires that conflict.
...It's also worth considering how much one wants to engage in sour grapes thinking.
Harry Frankfurt, who came up with the original idea, did a much better job of explaining it, in my opinion. (Why are you not referring to his paper?)
Here is the link for the curious: http://www.usfca.edu/philosophy/pdf%20files/Freedom%20of%20the%20Will%20and%20the%20Concept%20of%20a%20Person.pdf
It's not always so easy to say which desire is actually first order and which is second order.
For instance, example 3 could be inverted:
Larry was brought up to believe God hates homosexuality. Because of this he experiences genuine disgust when he thinks about homosexual acts, and so desires not to perform them or even think about them (first order). However, he really likes his friend Ted and sometimes wishes God wasn't such a dick (second order).
There's likely even a third order desire: Larry was brought up to be a good Christian, and desperately wishes...
There's a simpler model for all of these examples -- you're describing conflicts between an "away-from" motivation and a "towards" motivation. These systems are semi-independent, via affective asynchrony. The second-order want then arises as a subgoal of the currently active goal (be alert, etc.).
I guess what I'm trying to say here is that there really aren't "second order wants" in the system itself; they're just an emergent property of a system with subgoals that explicitly models itself as an agent, especially if it a...
Carbonated beverages make my mouth hurt. I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time.
Wait, that's unusual? I used to have the exact same problem, but I thought it was due to generalized willpower issues. When I got better at willpower, the problem disappeared (although I still tend to choose non-carbonated versions of drinks I like if possible.)
I'm glad you introduced me to the term meta-wanting, because it reminds me of an argument against free will.
Basically, you can go to a CD store (iTunes now) and choose which CD to buy because you prefer that CD. But you cannot prefer to prefer that CD. You simply prefer (1st order) that CD. You could try to raise the order of your preferences (an idea that had not occurred to me until now), but at the next highest order, your decision has already been made.
To me, that is the most convincing argument against free will that I've ever come across. Has anyone heard it before?
I don't think the right way to clarify this problem is by looking at it in terms of first- and second-level desires. I think you need to turn it around and see it as a matter of what 'true self' means.
If people say that the desires you "endorse" on the second level are the ones most reflective of your true self, they're wrong. What we take to define our true selves rests on different criteria, and by those criteria people's second-level desires don't always match up with what we take their 'true selves' to be, as in the case of Larry.
In a perfectly rational agent, no higher-order wants should exist.
Your problems with Mountain Dew might account for -1 util, your being awake for 2 utils; then you "want" to drink that stuff. Shut up and add.
The only source of multi-level desires I can see is an imperfect caching algorithm, which spews forth "Do not drink Mountain Dew" even though the overall utility would be positive.
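For concreteness, a minimal sketch in Python of the "shut up and add" view (the component utilities are just the illustrative numbers from the comment above; the decision rule is an assumption for the example):

```python
# Hedged sketch: a single signed sum over component utilities decides the action.
# On this view, a cached rule like "do not drink Mountain Dew" is just a stale
# shortcut standing in for that sum, not a separate higher-order want.

components = {
    "tastes gross": -1,    # utils
    "keeps me alert": +2,  # utils
}

total = sum(components.values())
print(total, "->", "drink" if total > 0 else "don't drink")  # 1 -> drink
```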
So far so good. I look forward to the hard stuff :) And thanks for engaging my request.
Actually, your calling second-level agreement "endorsement" has led me to wonder whether there's a special term for desires that you want to want, want to want to want, and so on ad infinitum, analogous to common knowledge or Hofstadter's superrational groups (where everyone knows that everyone knows, etc., that everyone is rational).
Suppose also that there is a can of Mountain Dew handy. I know that Mountain Dew contains caffeine and that caffeine will make me alert.
I am hesitant to bring it up because I don't want to become the multiculturalism police on LessWrong, but I found this distracting. American Mountain Dew has a high caffeine content, yet in most other countries Mountain Dew is caffeine-free. There is a significant minority of LessWrong participants who do not dwell in America, and those readers cannot help but become distracted when posts seem to be clearly intended for an American audience.
I think a possible solution would be to have equality and the other values have diminishing returns relative to each other.
That seems to work very well. So the ethical weight of a factor can be proportional to the reciprocal thereof (perhaps with a sign change). Then, for any number of people, there is a maximum happiness-factor that the equation can produce.
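For the curious, one way to make that concrete, offered as a sketch rather than as what the comment above had in mind (the smoothing constant c is an assumption added so the weight stays finite when H = 0):

$$V_{\text{happiness}}(H) \;=\; \underbrace{\frac{k}{H + c}}_{\text{weight}\,\approx\,\text{reciprocal of the factor}} \cdot\, H \;=\; \frac{kH}{H + c} \;<\; k$$

However large the total happiness factor H grows (say, with population size), its contribution stays below the ceiling k, so a suffering term weighted above k can outweigh it for any number of people.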
So. This can be used to make an equation that makes Omelas bad for any sized population. But not everyone agrees that Omelas is bad in the first place; so is that necessarily an improvement to your ethical equation?
I think one possible way to frame equality to avoid this is to imagine, metaphorically, that positive things give a society "morality points" and negative things give it "negative morality points." Then have it so that a positive deed that also increases equality gets "extra points," while a negative deed that also exacerbates inequality gets "extra negative points." So in other words, helping the rich isn't bad, it's just much less good than helping the poor.
This also avoids another failure mode: Imagine an action that hurts every single person in the world, and hurts the rich 10 times as much as it hurts the poor. Such an action would increase equality, but praising it seems insane. Under the system I proposed such an action would still count as "bad," though it would be a bit less bad than a bad action that also increased inequality.
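A toy sketch in Python of the "morality points" scheme described above (the multiplier values are invented for illustration; only the sign-preserving structure is the point):

```python
def moral_points(welfare_change, widens_inequality):
    """Score an action: the raw welfare change scaled by an equality multiplier
    that never flips the sign, so helping the rich stays good (just less good)
    and hurting everyone stays bad (just less bad if it narrows inequality)."""
    if welfare_change >= 0:
        multiplier = 0.5 if widens_inequality else 1.5
    else:
        multiplier = 1.5 if widens_inequality else 0.5
    return welfare_change * multiplier

# The failure mode above: an action that hurts everyone but hurts the rich more
# (so inequality narrows) still scores as bad -- just less bad than one that
# also widens inequality.
print(moral_points(-10, widens_inequality=False))  # -5.0
print(moral_points(-10, widens_inequality=True))   # -15.0
```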
That failure mode can also be dealt with by combining equality with other factors, such as not being hurt. (The relative weightings assigned to these factors would be important, of course).
I don't think that's that different from what I'm saying; I may be explaining it poorly. I do think that morality is essentially like a set of rules or an equation that one uses to evaluate actions. And I consider it objective in that the same equation should produce the same result each time an identical action is fed into it, regardless of what entity is doing the feeding. Then it is up to our moral emotions to motivate us to take actions the equation would label as "good."
That seems like a reasonable definition; my point is that not everyone uses the same equation.
It seems to me that this is more a disagreement about certain facts of nature than about morality per se.
Hmmm. You're right - that was a bad example. (I don't know if you're familiar with the Chanur series, by C. J. Cherryh? I ask because my first thought for a better example came straight out of there - she does a good job of presenting alien moralities.)
Let me provide a better one. Consider Marvin and Fred. Marvin's moral system considers the total benefit to the world of every action; but he tends to weight actions in favour of himself, because he knows that in the future he will always choose to do the right thing (by his morality) and thus deserves to have ties broken in his favour.
Fred's moral system entirely discounts any benefits to himself. He knows that most people are biased to themselves, and does this in an attempt to reduce the bias (he goes so far as to be biased in the opposite direction).
Both of them get into a war. Both end up in the following situation:
Trapped in a bunker, together with one allied soldier (a stranger, but on the same side). An enemy manages to throw a grenade in. The grenade will kill both of them, unless someone leaps on top of it, in which case it will only kill that one.
Fred leaps on top of the grenade. His morality values the life of the stranger over his own, and he thus acts to save the stranger first.
Marvin throws the stranger onto the grenade. His morality values his own life over a stranger who might, with non-trivial probability, be a truly villainous person.
Here we have two different moralities, leading to two different results, in the same situation.
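For what it's worth, a hedged sketch of how the two weightings could yield the two outcomes (the payoff numbers and the self-weight parameters are invented for illustration, not taken from the example):

```python
def total_value(self_weight, value_to_self, value_to_other):
    # One possible formalization: a weighted sum over the affected parties,
    # where self_weight encodes how much the agent counts their own interests.
    return self_weight * value_to_self + value_to_other

def choose(self_weight):
    # Option A: leap on the grenade yourself (you die, the stranger lives).
    # Option B: throw the stranger onto it (the stranger dies, you live).
    a = total_value(self_weight, value_to_self=-10, value_to_other=+10)
    b = total_value(self_weight, value_to_self=+10, value_to_other=-10)
    return "leap on it" if a > b else "throw the stranger onto it"

print(choose(self_weight=0.0))  # Fred: discounts himself entirely -> leap on it
print(choose(self_weight=1.5))  # Marvin: breaks ties toward himself -> throw the stranger onto it
```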
I think the problem is that an objectively correct set of moral rules that could perfectly evaluate any situation would be so complicated no one would be able to use it effectively. Even if we obtained such a system we would have to use crude approximations until we managed to get a supercomputer big enough to do the calculations in a timely manner.
That is worth keeping in mind. Of course, if such a system is found, we could feed in dozens of general situations in advance - and if in a tough situation, then after resolving it one way or another, we could feed that situation into the computer and find out for future reference which course of action was correct (that eliminates a lot of the time constraint).
That seems like a reasonable definition; my point is that not everyone uses the same equation.
That's true. The question is, how often is this because people have totally different values, and how often is it that they have extremely similar "ideal equations," but different "approximations" of what they think that equation is? I think for sociopaths, and other people with harmful ego-syntonic mental disorders, it's probably the former, but it's more often the latter for normal people.
Eliezer has argued that it is confusing and misleading...
In response to a request, I am going to do some basic unpacking of second-order desire, or "metawanting". Basically, a second-order desire or metawant is a desire about a first-order desire.
Example 1: Suppose I am very sleepy, but I want to be alert. My desire to be alert is first-order. Suppose also that there is a can of Mountain Dew handy. I know that Mountain Dew contains caffeine and that caffeine will make me alert. However, I also know that I hate Mountain Dew1. I do not want the Mountain Dew, because I know it is gross. But it would be very convenient for me if I liked Mountain Dew: then I could drink it, and I could get the useful effects of the caffeine, and satisfy my desire for alertness. So I have the following instrumental belief: wanting to drink that can of Mountain Dew would let me be alert. Generally, barring other considerations, I want things that would get me other things I want - I want a job because I want money, I want money because I can use it to buy chocolate, I want chocolate because I can use it to produce pleasant taste sensations, and I just plain want pleasant taste sensations. So, because alertness is something I want, and wanting Mountain Dew would let me get it, I want to want the Mountain Dew.
This example demonstrates a case of a second-order desire about a first-order desire that would be instrumentally useful. But it's also possible to have second-order desires about first-order desires that one simply does or doesn't care to have.
Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict. Obviously, as a heroin addict, she spends a lot of her time wanting heroin. But this desire is upsetting to her. She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.
One thing that is often said is that the first-order desires you "endorse" on the second level are the ones that are most truly your own. This seems like an appealing notion in Mimi's case; I would not want to say that at her heart she just wants heroin and that's an intrinsic, important part of her. But it's not always the case that the second-order desire is the one we most want to identify with the person who has it:
Example 3: Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual. He has been brought up to believe that homosexuality is gross and wrong. As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.
In this case, I think we're tempted to say that poor Larry is a gay guy who's had an alien second-order desire attached to him via his upbringing, not a natural homophobe whose first-order desires are insidiously eroding his real personality.
A less depressing example to round out the set:
Example 4: Suppose Olivia the Overcoming Bias Reader, whose very prescient mother predicted she would visit this site, is convinced by Eliezer's arguments about one-boxing in Newcomb's Problem. However, she's pretty sure that if Omega really turned up, boxes in hand, she would want to take both of them. She thinks this reflects an irrationality of hers. She wants to want to one-box.
1Carbonated beverages make my mouth hurt. I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time.