Ghatanathoah comments on Wanting to Want - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (185)
Looking at it, I think that the difference is that Larry the Closet Homosexual probably doesn't really have a second order desire to not be gay. What he has is a second order desire to Do the Right Thing, and mistakenly believes that homosexuality isn't the Right Thing. So we naturally empathize with Larry, because his conflict between his first and second order desires is unnecessary. If he knew that homosexuality wasn't wrong the conflict would disappear, not because his desires had changed, but because he had better knowledge about how to achieve them.
Mimi the Heroin Addict, by contrast, probably doesn't want to want heroin because it obstructs her from obtaining other important life goals that she genuinely wants and approves of. If we were to invent some sort of Heroin 2.0 that lacked most of heroin's negative properties (e.g. sapping motivation to achieve your life goals, causing health problems), Mimi would probably be much less upset about wanting it.
What reasoning process did you use to determine his belief was mistaken? When and where does Larry live? What are his other terminal goals?
In the interests of avoiding introducing complications into the thought experiment, I assumed that Larry was, aside from his sexual orientation, a fairly psychologically normal human who had normal human terminal goals, like an interest in sex and romantic love. I also assumed, again to avoid complications (and from clues in the story) that he probably lived, like most Less Wrong readers and writers, in a First World liberal democracy in the early 21st century.
The reasoning process I used to determine his belief was mistaken was a consequentialist meta-ethic that produces the result "Consensual sex and romance are Good Things unless they seriously interfere with some other really important goal." I assumed that Larry, being a psychologically normal human in a tolerant country, did not have any other important goals they interfered with. He probably either mistakenly believed that a supernatural creature of immense power existed and would be offended by his homosexuality, or mistakenly believed in some logically incoherent deontological set of rules that held that desires for consensual sex and romance somehow stop being Good Things if the object of those desires is of the same sex as the desirer.
Obviously if Larry lived in some intolerant hellhole of a country or time period it might be well to change his orientation to be bisexual or heterosexual so that he could satisfy his terminal goals of Sex and Romance without jeopardizing his terminal goals of Not Being Tortured and Killed. But that would be a second-best solution; the ideal solution would be to convince his fellows that their intolerance was unethical.
PhilGoetz wrote:
I am having trouble seeing a significant difference between that and what you've described. Mimi's enabler could argue "human happiness is a Good Thing unless it seriously interferes with some other really important goal," and then one would have to make the engineering judgment of whether heroin addiction and homosexuality fall on opposite sides of the "serious interference" line. Similarly, the illegality of heroin and the illegality of homosexuality seem similarly comparable; perhaps Mimi should convince her fellows that their intolerance of her behavior is unethical.
Let me try using an extended metaphor to explain my point: Remember Eliezer's essay on the Pebblesorters, the aliens obsessed with sorting pebbles into prime-numbered heaps?
Let's imagine a race of Pebblesorters whose p-morality consists of sorting pebbles into prime-numbered heaps. All Pebblesorters have a second-order desire to sort pebbles into prime-numbered heaps, and to ensure that others do so as well. In addition to this, individual Pebblesorters have first-order desires that make them favor certain prime numbers more than others when they are sorting.
Now let's suppose there is a population of Pebblesorters who usually favor pebble heaps consisting of 13 pebbles but occasionally a mutant is born that likes to make 11-pebble heaps best of all. However, some of the Pebblesorters who prefer 13-pebble heaps have somehow come to the erroneous conclusion that 11 isn't a prime number. Something, perhaps some weird Pebblesorter versions of pride and self-deception, makes them refuse to admit their error.
The 13-Pebble Favorers become obsessed with making sure no Pebblesorters make heaps of 11 pebbles, since 11 obviously isn't a prime number. They begin to persecute 11-Pebble Favorers and imprison or kill them. They declare that Sortulon Prime, the mighty Pebblesorter God that sorts stars into gigantic prime-numbered constellations in the sky, is horribly offended that some Pebblesorters favor 11-pebble piles and will banish any 11-Pebble Favorers to P-Hell, where they will be forced to sort pebbles into heaps of 8 and 9 for all eternity.
Now let's take a look at an individual Pebblesorter named Larry the Closet 11-Pebble Favorer. He was raised by devout 13-Pebble Favorer parents and brought up to believe that 11 isn't a prime number. He has a second order desire to sort pebbles into prime-numbered heaps, and a first order desire to favor 11-pebble heaps. Larry is stricken by guilt that he wants to make 11-pebble heaps. He knows that 11 isn't a prime number, but still feels a strong first order desire to sort pebbles into heaps of 11. He wishes he didn't have that first order desire, since it obviously conflicts with his second order desire to sort pebbles into prime numbered heaps.
Except, of course, Larry is wrong. 11 is a prime number. His first and second order desires are not in conflict. He just mistakenly thinks they are because his parents raised him to think 11 wasn't a prime number.
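The metaphor turns on a fact Larry could verify for himself. A trivial trial-division check (my own illustration, not part of the original comment) settles the Pebblesorters' dispute:

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime iff it has no divisor in 2..sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(11))  # True: Larry's first and second order desires agree
print(is_prime(13))  # True: the majority's preference is also fine
print(is_prime(9))   # False: 9 = 3 * 3, so 9-pebble heaps really are wrong
```

The disagreement is not a clash of values at all; it is an arithmetic error that either side could, in principle, correct.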
Now let's make the metaphor explicit. Sorting pebbles into prime-numbered heaps represents Doing the Right Thing. Favoring 13-pebble heaps represents heterosexuality, favoring 11-pebble heaps represents homosexuality. Heterosexual sex and love and homosexual sex and love are both examples of The Right Thing. The people who think homosexuality is immoral are objectively mistaken about what is and isn't moral, in the same way the 13-Pebble Favorers are objectively mistaken about the primality of the number 11.
So the first and second order desires of Larry the Closet Homosexual and Larry the Closet 11-Pebble Favorer aren't really in conflict. They just think they are because their parents convinced them to believe in falsehoods.
Again, I assumed that Mimi was a psychologically normal human who had normal human second order desires, like having friends and family, being healthy, doing something important with her life, challenging herself, and so on. I assumed she didn't want to use heroin because doing so interfered with her achievement of these important second order desires.
I suppose Mimi could be a mindless hedonist whose second order desires are somehow mistaken about what she really wants, but those weren't the inferences I drew.
Again, recall my mention of a hypothetical Heroin 2.0 in my earlier comment. It seems to me that if Heroin 2.0 was suddenly invented, and Mimi still didn't want to use heroin, even though it no longer seriously interfered with her other important values, that she might be mistaken. Her second order desire might be a cached thought leftover from when she was addicted to Heroin 1.0 and she can safely reject it.
But I will maintain that if Larry and Mimi are fairly psychologically normal humans, that Mimi's second order desire to stop using heroin is an authentic and proper desire, because heroin use seriously interferes with the achievement of important goals and desires that normal humans (like Mimi, presumably) have. Larry's second order desire, by contrast, is mistaken, because it's based on the false belief that homosexuality is immoral. Homosexual desires do not interfere with important goals humans have. Rather, they are an important goal that humans have (love, sex, and romance), it's just that the objective of that goal is a bit unusual (same sex instead of opposite).
EDITED: To change some language that probably sounded too political and judgemental. The edits do not change the core thesis in any way.
We should point people to this whenever they're like "What's special about Less Wrong?" and we can be like "Okay, first, guess how Less Wrong would discuss a reluctant Christian homosexual. Made the prediction? Good, now click this link."
I'm surprised you regarded it so highly. The flaws I noticed are located in a response to Ghatanathoah's comment.
First, I would like to make one thing clear: I have absolutely nothing against homosexuals and in fact qualify as queer because my attractions transcend gender entirely. I call my orientation "sapiosexual" because it is minds that I am sexually attracted to, and good character, never mind the housing.
Stops at "pigheaded jerks"
downvotes
You know where this is going, oh yes, I am going right to fundamental attribution error and political mindkill.
The parents are deemed "pigheaded jerks" - a perception of their personality.
Larry the homosexual, convinced by the exact same reasoning, is given something subtly different - an attack on his behavior -- "he gullibly believed them" and you continue with "They (the Larrys) just think they are because their parents fed them a load of crap." attributing his belief to the situation that Larry is in.
Do you think Larry's grandparents didn't teach Larry's parents the same thing? And that Larry's great grandparents didn't teach it to Larry's grandparents?
This was a "good solid dig" at the other side.
You make an excellent point. I will edit my post to make it sound less political and judgemental.
I am charmed by your polite acknowledgement of the flaw and am happy to see that this has been updated. Thanks for letting me know that pointing it out was useful. :)
I upvoted despite this. If you overlook that one problem, everything else is gold. That single flawed sentence does not affect the awesomeness of the other 14 paragraphs, as it does not contribute to the conclusion.
My experience of it was more like:
"Oh, this is nice and organized... Still orderly... Still orderly... OHMYSPAGHETTIMONSTER I DID NOT JUST READ THAT!"
To me, it was a disappointment. Like if I were eating ice cream and then it fell to the ground.
If Eliezer is going to praise it like it's the epitome of what LessWrong should be, then it should be spotless. Do you agree?
I think you're looking at this discussion from the wrong angle. The question is, "how do we differentiate first-order wants that trump second-order wants from second-order wants that trump first-order wants?" Here, the order only refers to the psychological location of the desire: to use Freudian terms, the first order desires originate in the id and the second order desires originate in the superego.
In general, that is a complicated and difficult question, which needs to be answered by careful deliberation- the ego weighing the very different desires and deciding how to best satisfy their combination. (That is, I agree with PhilGoetz that there is no easy way to distinguish between them, but I think this is proper, not bothersome.)
Some cases are easier than others- in the case of Sally, who wants to commit suicide but wants to not want to commit suicide, I would generally recommend methods of effective treatment for suicidal tendencies, not the alternative. But you should be able to recognize that the decision could be difficult, at least for some alteration of the parameters, and if the alteration is significant enough it could swing the other way.
There is also another factor which clouds the analysis, which is that the ego has to weigh the costs of altering, suppressing, or foregoing one of the desires. It could be that Larry has a twin brother, Harry, who is not homosexual, and that Harry is genuinely happier than Larry is, and that Larry would genuinely prefer being Harry to being himself; he's not mistaken about his second-order want.
However, the plan to be (or pretend to be) straight is much more costly and less likely to succeed than the plan to stop wanting to be straight, and that difference in costs might be high enough to determine the ego's decision. Again, it should be possible to imagine realistic cases in which the decision would swing the other way. (Related.)
It's also worth considering how much one wants to engage in sour grapes thinking- much of modern moral intuition about homosexuality seems rooted in the difficulty of changing it. (Note Alicorn's response.) Given that homosexuality is immutable, plans to change homosexuals are unlikely to succeed, and they might as well make the best of their situation. But I hope it's clear that, at its root, this is a statement about engineering reality, not moral principles- if there were a pill that converted homosexuals to heterosexuals, then the question of how society treats homosexuals would actually be different, and if Larry asked you to help him decide whether or not to take the pill, I'm sure you could think of some things to write in the "pro" column for "take the pill" and in the "con" column for "don't take the pill."
Why I said this is worth considering is that, as should be unsurprising, two wants conflict here. Often, we don't expect the engineering reality to change: male homosexuality is likely to be immutable for the lifetimes of those currently alive, and it's more emotionally satisfying to declare that homosexual desires don't conflict with important goals than to reflect on the tradeoffs that homosexuals face and heterosexuals don't. Doing so, however, requires a sort of willful blindness, which may or may not be worth the reward gained by engaging in it.
I don't deny that there may be some good reasons to prefer to be heterosexual. For instance, imagine Larry lives in an area populated by very few homosexual and bisexual men, and moving somewhere else is prohibitively costly for some reason. If this is the case, then Larry may have a rational second-order desire to become bisexual or heterosexual, simply because doing so would make it much easier to find romantic partners.
However, I would maintain that the specific reason given in Alicorn's original post for why Larry desires to not be homosexual is that he is confused about the morality of homosexuality and is afraid he is behaving immorally, not because he has two genuine desires that conflict.
I find it illuminating to compare intuitions about homosexuality to intuitions about bisexuality. If homosexual relationships were really inferior to heterosexual ones in some important way, then it would make sense to encourage bisexual people to avoid homosexual relationships and focus on heterosexual ones. This seems wrong to me, however; if I were giving a bisexual person relationship advice, I think the good thing to do would be to advise them to focus on whoever is most compatible with them, regardless of sex.
I think you are probably right that this is proper. I may be biased in favor of second order desires because in my current life I have difficulty preventing my first order desires from overriding them. But if I think about it, there are many first order desires I cherish and would really prefer not to change.
While the Freudian description is accurate relative to sources, I struggle to order them. I believe it is an accumulated weighting that makes one thought dominate another. We are indeed born with a great deal of innate behavioral weighting. As we learn, we strengthen some paths and create new paths for new concepts. The original behaviors (fight or flight, etc.) remain.
Based on this known process, I conjecture that experiences have an effect on the weighting of concepts. This weighting sub-utility is a determining factor in how much impact a concept has on our actions. When we discover fire burns our skin, we don't need to repeat the experience very often to weigh fire heavily as something we don't want touching our skin.
If we constantly hear, "blonde people are dumb," each repetition increases the weight of this concept. Upon encountering an intelligent blonde named Sandy, the weighting of the concept is decreased and we create a new pattern for "Sandy is intelligent" that attaches to "Sandy is a person" and "Sandy is blonde." If we encounter Sandy frequently, or observe many intelligent blonde people, the weighting of the "blonde people are dumb" concept is continually reduced.
Coincidentally, I believe this is the motivation behind why religious leaders urge their followers to attend services regularly, even if subconsciously. The service maintains or increases weighting on the set of religious concepts, as well as related concepts such as peer pressure, offsetting any weighting loss between services. The depth of conviction to a religion can potentially be correlated with frequency of religious events. But I digress.
Eventually, the impact of the concept "blonde people are dumb" on decisions becomes insignificant. During this time, each encounter strengthens the Sandy pattern or creates new patterns for blondes. At some level of weighting for the "intelligent" and "blonde" concepts associated with people, our brain economizes by creating a "blonde people are intelligent" concept. Variations of this basic model are generally how beliefs are created and the weights of beliefs are adjusted.
As with fire, we are extremely averse to incongruity. We have a fundamental drive to integrate our experiences into a cohesive continuum. Something akin to adrenaline is released when we encounter incongruity, driving us to find a way to resolve the conflicting concepts. If we can't find a factual explanation, we rationalize one in order to return to balanced thoughts.
When we make a choice of something over other things, we begin to consider the most heavily weighted concepts that are invoked based on the given situation. We work down the weighting until we reach a point where a single concept outweighs all other competing concepts by an acceptable amount.
In some situations, we don't have to make many comparisons due to the invocation of very heavily weighted concepts, such as when a car is speeding towards us while we're standing in the roadway. In other situations, we make numerous comparisons that yield no clear dominant concept and can only make a decision after expanding our choice of concepts.
This model is consistent with human behavior. It helps to explain why people do what they do. It is important to realize that this model applies no division of concepts into classes. It uses a fluid ordering system. It has transient terminal goals based on perceived situational considerations. Most importantly, it bounds the recursion requirements. As the situation changes, the set of applicable concepts to consider changes, resetting the core algorithm.
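The model described above can be made concrete. Here is a minimal, purely illustrative sketch; the `Concept` structure, the reinforcement amounts, and the decision margin are all my own assumptions, not part of the comment:

```python
from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    weight: float  # how strongly this concept influences decisions

def reinforce(c: Concept, amount: float = 1.0) -> None:
    """Each repetition of a concept increases its weight."""
    c.weight += amount

def counterexample(c: Concept, amount: float = 1.0) -> None:
    """An observation contradicting the concept reduces its weight,
    but never below zero."""
    c.weight = max(0.0, c.weight - amount)

def dominant(concepts, margin: float = 1.0):
    """Decide only when one invoked concept outweighs all rivals by an
    acceptable margin; otherwise signal that there is no clear winner."""
    ranked = sorted(concepts, key=lambda c: c.weight, reverse=True)
    if len(ranked) == 1 or ranked[0].weight - ranked[1].weight >= margin:
        return ranked[0]
    return None

stereotype = Concept("blonde people are dumb", weight=5.0)
observation = Concept("Sandy is intelligent", weight=0.0)

# Repeated encounters with intelligent, blonde Sandy erode the stereotype
# and strengthen the competing pattern.
for _ in range(6):
    counterexample(stereotype)
    reinforce(observation)

winner = dominant([stereotype, observation])
print(winner.name if winner else "no clear dominant concept")
```

The "no clear winner" branch corresponds to the situations described above where no dominant concept emerges and the set of concepts under consideration must be expanded.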
From what I've heard, the typical response to believing that blonde people are dumb and observing that blonde Sandy is intelligent is to believe that Sandy is an exception, but blonde people are dumb.
Most people are very attached to their generalizations.
Quite right about attachment. It may take quite a few exceptions before it is no longer an exception, particularly if the original concept is regularly reinforced by peers or other sources. I would expect exceptions to get a bit more weight because they are novel, but not so much as to offset higher levels of reinforcement.
Your example does an exemplary job of explaining your viewpoint on Larry's situation. To explain the presumed viewpoint of Larry's parents on his situation requires merely a very small change: replacing all occurrences of the number 11 with the number 9.
How do you define objective morality? I've heard of several possible definitions, most of which conflict with each other, so I'm a little curious as to which one you've selected.
I'm not sure I understand you. Do you mean that a more precise description of Larry's parents' viewpoint is that the Pebblesorter versions of them think 11 and 9 are the same number? Or are you trying to explain how a religious fundamentalist would use the Pebblesorter metaphor if they were making the argument?
I define morality as a catch-all term for what are commonly referred to as the "good things in life": love, fairness, happiness, creativity, people achieving what they want in life, etc. So something is morally good if it tends to increase those things. In other words, "good" and "right" are synonyms. Morality is objective because we can objectively determine whether people are happy, being treated fairly, getting what they want out of life, etc. In Larry's case, having a relationship with Ted the next-door neighbor would be the morally right thing to do because it would increase the amount of love, happiness, people getting what they want, etc. in the world.
I think the reason that people have such a problem with the idea of objective morality is that they subscribe, knowingly or not, to motivational internalism. That is, they believe that moral knowledge is intrinsically motivating: simply knowing something is right motivates someone to do it. They then conclude that since intrinsically motivating knowledge doesn't seem to exist, morality must be subjective.
I am a motivational externalist, so I do not buy this argument. I believe that people are motivated to act morally by our conscience and moral emotions (e.g. compassion, sympathy). If someone has no motivation to act to increase the "good things in life," that doesn't mean morality is subjective; it simply means that they are a bad person. People who lack moral emotions exist in real life, and they seem to lack any desire to act morally at all, unless you threaten to punish them if they don't.
The idea of intrinsically motivating knowledge is pretty scary if you think about it. What if it motivated you to kill people? Or what if it made you worship Darkseid? The Anti-Life equation from Final Crisis works pretty much exactly the way motivational internalists think moral knowledge does, except that instead of motivating people to care about others and treat people well, it instead motivates them to serve evil pagan gods from outer space.
Yes, exactly. Larry's parents do not believe that they are mistaken, and are not easily proved mistaken.
That's a good definition, and it avoids most of the obvious traps. A bit vague, though. Unfortunately, there is a non-obvious trap; this definition leads to the city of Omelas, where everyone is happy, fulfilled, creative... except for one child, locked in the dark in a cellar, starved; one child on whose suffering the glory of Omelas rests. Saving the child decreases overall happiness, health, achievement of goals, etc., etc. Despite all this, I'd still think that leaving the child locked away in the dark is a wrong thing. (This can also lead to Pascal's Mugging, as an edge case)
In my case, it's because every attempt I've seen at defining an objective morality has potential problems. Given to you by an external source? But that presumes that the external source is not Darkseid. Written in the human psyche? There are some bad things in the dark corners of the human psyche. Take whatever action is most likely to transform the world into a paradise? Doesn't usually work, because we don't know enough to always select the correct actions. Do unto others as you would have them do unto you? That's a very nice one - but not if Bob the Masochist tries to apply it.
Of course, subjective morality is no better - and is often worse (mainly because a society in general can reap certain benefits from a shared idea of morality).
What does seem to work is to pick a society whose inhabitants seem happy and fulfilled, and try to use whatever rules they use. The trouble with that is that it's kludgy, uncertain, and could often do with improvement (though it's been improved often enough in human history that many - not all, but many - obvious 'improvements' aren't improvements in practice).
Aside from its obvious artificiality, and despite the fact that all our instincts cry out against it, it's not at all clear to me that there are any really good reasons to reject the Omelasian solution. This is of course a fantastically controversial position (just look at the response to Torture vs. Dust Specks, which might be viewed as an updated and reframed version of the central notion of The Ones Who Walk Away From Omelas), but it nonetheless seems to be a more or less straightforward consequence of most versions of consequential ethics.
As a matter of fact, I'm inclined to view Omelas as something between an intuition pump and a full-blown cognitive exploit: a scenario designed to leverage our ethical heuristics (which are well-adapted to small-scale social groups, but rather less well adapted to exotic large-scale social engineering) in order to discredit a viewpoint which should rightfully stand or fall on pragmatic grounds. A tortured child is something that hardly anyone can be expected to think straight through, and trotting one out in full knowledge of this fact in order to make a point upsets me.
Omelas is a cognitive exploit, yes. That's really the point - it forces people to consider how appropriate their heuristics really are. Some people would make Omelas if they could; some wouldn't, for the sake of the one child. A firm preference for either possibility can be controversial, partially because there are good reasons for both states and partially because different ethical heuristics get levered in different directions. (A heuristic that compares the number of people helped vs. the number hurt will pull one way; a heuristic that says "no torture" will pull the other way).
There's a real world analogue to Omelas. The UK (like other countries) has a child protection system, intended to minimize abuse & neglect of children. The state workers (health visitors, social workers, police officers, hospital staff, etc.) who play roles in the system can try to intervene when the apparent risk of harm to a child reaches some threshold.
If the threshold is too low, the system gratuitously interferes with families' autonomy, preventing them from living their lives in peace. If the threshold is too high, the system fails to do its job of preventing guardians from tormenting or killing their children. Realistically, a trade-off is inevitable, and under any politically feasible threshold "some children will die to preserve the freedom of others", as the sociologist Robert Dingwall put it. So the UK's child protection system takes the Omelasian route.
The real life situation is less black & white than Omelas, but it looks like the same basic trade-off in a non-artificial setting. I wonder whether people's intuitions about Omelas align with their intuitions about the real life child protection trade-off (and indeed whether both align with society's revealed collective preference).
Personally, on reading the story, I decided immediately that not only would I not walk away from Omelas (which solves nothing anyway,) I was fully in favor of the building of Omelases, provided that even more efficient methods of producing prosperity were not forthcoming.
The prevention of dust specks may vanish into nothing in my intuitive utility calculations, but it immediately hits me that a single tortured child is peanuts beside the cumulative mass of suffering that goes on in our world all the time. With a hundred thousand dollars or so to the right charity, you could probably prevent a lot more disutility than that of the tortured child. If for that money I could either save one child from that fate, or create a city like Omelas minus the sacrifice, then it seems obvious to me that creating the city is a better bargain.
That's true, but I think that human values are so complex that any attempt to compress morality into one sentence is pretty much obligated to be vague.
One rather obvious rejoinder is that there are currently hundreds, if not thousands of children who are in the same state as the unfortunate Omelasian right now in real life, so reducing the number to just one child would be a huge improvement. But you are right that even one seems too many.
A more robust possibility might be to add "equality" to the list of the "good things in life." If you do that then Omelas might be morally suboptimal because the vast inequality between the child and the rest of the inhabitants might overwhelm the achievement of the other positive values. Now, valuing equality for its own sake might add other problems, but these could probably be avoided if you were sufficiently precise and rigorous in defining equality.
I think the best explanation I've seen is something like the metaethics Eliezer espouses, which is (if I understand it correctly) that morality is a series of internally consistent concepts related to achieving what I called "the good things in life," and that human beings (those who are not sociopaths, anyway) care a lot about these concepts of wellbeing and want to follow and fulfill them.
In other words, morality is like mathematics in some ways: it generates consistent answers (on the topic of people's wellbeing) that are objectively correct. But it is not like the Anti-Life Equation, because it is not intrinsically motivating. Humans care about morality because of our consciences and our positive emotions, not because it is universally compelling.
To put it another way, I think that if you were to give a superintelligent paperclipper a detailed description of human moral concepts and offered to help it make some more paperclips if it elucidated these concepts for you, that it would probably generate a lot of morally correct answers. It would feel no motivation to obey these answers of course, since it doesn't care about morality, it cares about making paperclips.
This is a little like morality being "embedded in the human psyche" in the sense that the desire to care about morality is certainly embedded in there somewhere (probably in the part we label "conscience"). But it is also objective in the sense that moral concepts are internally consistent independent of the desires of the mind. To use the Pebblesorter metaphor again, caring about sorting pebbles into prime numbered heaps is "embedded in the Pebblesorter psyche," but which numbers are prime is objective.
That's certainly true, but that simply means that humans are capable of caring about other things besides morality, and these other things that people sometimes care about can be pretty bad. This obviously makes moral reasoning a lot harder, since it's possible that one of your darker urges might be masquerading as a moral judgement. But that just means that moral reasoning is really hard to do, not that it's wrong in principle.
Vague or flawed. Given those options, I think I'd prefer vague.
I agree completely. If I had any idea how Omelas worked, I might be tempted to try seeing if any of those ideas could be used to improve current societies.
Hmmm. To avoid Omelas, equality would have to be fairly heavily weighted; any finite weighting given to equality, however, will simply mean that Omelas is only possible given a sufficiently large population (by balancing the cost of the inequality with the extra happiness of the extra inhabitants).
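The population argument is easy to see numerically. A toy calculation, with every number an arbitrary assumption chosen only for illustration:

```python
# Toy aggregate utility: each happy citizen contributes +1, the suffering
# child contributes a large negative amount, and equality carries a fixed
# finite penalty for the child's plight. All constants are made up.
CHILD_SUFFERING = -1000.0
EQUALITY_PENALTY = -10000.0  # any *finite* weight placed on equality

def omelas_utility(population: int) -> float:
    return population * 1.0 + CHILD_SUFFERING + EQUALITY_PENALTY

print(omelas_utility(5_000))   # negative: this Omelas is not worth building
print(omelas_utility(50_000))  # positive: a large enough population swamps the penalty
```

However large the fixed penalty, some population size always tips the sum positive, which is exactly why a finite equality weight only postpones Omelas rather than ruling it out.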
Personally, I think that valuing equality itself is a good idea, if mixed in with a suitable set of other values; one possible failure mode for overvaluing equality is an equality of wretchedness, a state of "we're all equal because we all have nothing and no hope" (this is counteracted by providing suitable weights for the other good things, like happiness and the freedom to try to achieve goals (but what about the goal of vengeance? For a real slight? An imagined slight?)).
An example of a society failing due to an over-reliance on equality is France shortly after the French Revolution.
I think that I should read through that entire sequence in the near future. I'd like to see his take on metaethics.
Huh. I think we're defining 'morality' slightly differently here.
My definition of 'morality' would be 'a set of rules, decided by some system, such that one can feed in a given action and (usually) get out whether that action was a good or a bad action'. Implicit in that definition is the idea that two people may disagree on what those rules actually are - that there might be better or worse moralities, and that therefore the answers given by a randomly chosen morality need not be objectively correct.
To take an example; certain ancient cultures may have had the belief that human sacrifice was necessary, on Midwinter's Day, to persuade summer to come back and let the crops grow. In such a culture, strapping someone down and killing them in a particularly painful way may have been considered the right thing to do; and a member of that society would argue for it on the basis that the tribe needs the crops to grow next year (and, if selected, might even walk up voluntarily to be killed). In his morality, these annual deaths are a good thing, because they make the crops grow; in my morality, these annual deaths are a bad thing, and moreover, they don't make the crops grow.
For what it's worth, I do agree with you that getting the result out of a moral system does not, in and of itself, force an intrinsic motivation to follow that course of action; people can be trained from a young age to feel that motivation, and many people are, but there's really no reason to assume that it is always there.
If there is an objectively correct morality that can apply to all situations, then I don't know what it is - my current system of morality (based heavily on the biblical principle of 'Love thy neighbour') covers many situations, but is not good at the average villain's sadistic choice (where I can save the lives of group A or group B, but not both).
Hmmm. A lot of the darkness in the human psyche can be explained in this manner; but I'd think that there are other parts which cannot be explained in this way (when a person goes out of his way to hurt someone that he'll likely never see again, for example). A lot of these, I'd think, are attributable to a lack of empathy; a person who sees other people as non-people (or as Not True People, for some self-including definition of True People).
If you're going to do that, why not just directly use happiness and fulfillment?
I cannot create an entire ethical framework, for everyone to follow, on any basis, and expect that it will be able to hold up for the next thousand years. If I try, I will fail, and this is why: because people cheat. Many intelligent agents will poke at the rules, seeking a possible exploit thereof that enhances their success at the possible expense of their neighbours' success. Over the next thousand years, there will be thousands, probably millions, of such intelligent agents hunting for, and attempting to exploit, flaws in the system; people who stick by the letter of the rule, and avoid the spirit of the rule. I cannot create an entire ethical framework, because I cannot outwit thousands or millions of future peoples' attempts to find and exploit gaps and loopholes in my framework.
Hence, the best that I can do is to find a system that has already endured a period of field testing and that hasn't broken yet; and perhaps attempt a small, incremental improvement (no more) in order to test that improvement.
Define it, or defend it? There are a lot of defences, but not so much definitions.
I think the metaphor misses something important here, because the number of pebbles seems completely arbitrary. What, if anything, would change if in the pebble-sorters' ancestral environment, preferring 13-pebble heaps was adaptive, but preferring 11-pebble heaps (or spending resources on those that do) was not?
Preferring other people like Larry to be homosexual is adaptive for me. And it is the judgement by others (and the implicit avoidance of that through shame) that we are considering here. That said:
Absolutely, and the entire line of reasoning relies on imposing the speaker's own morality ("it is second-order 'right' to be homosexual") on others without making it explicit.
The same reason sorting pebbles into correct heaps was adaptive in the first place.
EDIT: Wait, does it matter that homosexuality is probably not adaptive?
That was the point of my comment. There is a large disanalogy between heterosexuality and 13-pebble heap preference (namely, the first is highly adaptive, but the second has no apparent reason to be). Although, I'm not sure if that is enough to break the metaphor.
There are many properties homosexuality has but 11-pebble heap preference doesn't, and vice versa. Why is evolutionary maladaptiveness worth pointing out, is my question.
Well, if moral norms are the Nash equilibria that result from actual historical bargaining situations (that are determined largely by human nature and the ancestral environment), then it seems somewhat relevant. If moral norms are actually imperative sentences uttered by God, then it seems completely irrelevant. Etc...
I suppose whether or not the pebble-sorting metaphor is good depends on which meta-ethical theory is true. In other words, I'm agreeing with PhilGoetz; Example 2 and Example 3 are only in separate classes of meta-wants assuming a (far from universally shared) moral system.
Incidentally, it's easier to sort pebbles into heaps of 11. The original pebblesorters valued larger heaps, but had a harder time determining their correctness.
That's why I was careful to refer to them as 11-Pebble and 13-Pebble Favorers. They do value other sizes of pebble heaps; 11 and 13 are just the sizes they sort most frequently. Or perhaps 11 and 13 are the heaps they like making in their personal time, but they like larger prime numbers for social pebble-sorting endeavors. The point is, I said they "favored" that size because I wanted to make sure that the ease of sorting the piles didn't seem too relevant, since that would distract from the central metaphor.