
Ghatanathoah comments on Wanting to Want - Less Wrong

Post author: Alicorn, 16 May 2009 03:08AM


Comment author: Vaniver 24 October 2012 04:40:23PM 2 points [-]

PhilGoetz wrote:

I suppose you just punt it off to your moral system, or your expected-value computations.

I am having trouble seeing a significant difference between that and what you've described. Mimi's enabler could argue "human happiness is a Good Thing unless it seriously interferes with some other really important goal," and then one would have to make the engineering judgment of whether heroin addiction and homosexuality fall on opposite sides of the "serious interference" line. Similarly, the illegality of heroin and the illegality of homosexuality seem comparable; perhaps Mimi should convince her fellows that their intolerance of her behavior is unethical.

Comment author: Ghatanathoah 25 October 2012 07:30:00AM *  22 points [-]

Let me try using an extended metaphor to explain my point: Remember Eliezer's essay on the Pebblesorters, the aliens obsessed with sorting pebbles into prime-numbered heaps?

Let's imagine a race of Pebblesorters whose p-morality consists of sorting pebbles into prime-numbered heaps. All Pebblesorters have a second-order desire to sort pebbles into prime-numbered heaps, and to ensure that others do so as well. In addition to this, individual Pebblesorters have first-order desires that make them favor certain prime numbers more than others when they are sorting.

Now let's suppose there is a population of Pebblesorters who usually favor pebble heaps consisting of 13 pebbles but occasionally a mutant is born that likes to make 11-pebble heaps best of all. However, some of the Pebblesorters who prefer 13-pebble heaps have somehow come to the erroneous conclusion that 11 isn't a prime number. Something, perhaps some weird Pebblesorter versions of pride and self-deception, makes them refuse to admit their error.

The 13-Pebble Favorers become obsessed with making sure no Pebblesorters make heaps of 11 pebbles, since 11 obviously isn't a prime number. They begin to persecute 11-Pebble Favorers and imprison or kill them. They declare that Sortulon Prime, the mighty Pebblesorter God that sorts stars into gigantic prime-numbered constellations in the sky, is horribly offended that some Pebblesorters favor 11-pebble piles and will banish any 11-Pebble Favorers to P-Hell, where they will be forced to sort pebbles into heaps of 8 and 9 for all eternity.

Now let's take a look at an individual Pebblesorter named Larry the Closet 11-Pebble Favorer. He was raised by devout 13-Pebble Favorer parents and brought up to believe that 11 isn't a prime number. He has a second order desire to sort pebbles into prime-numbered heaps, and a first order desire to favor 11-pebble heaps. Larry is stricken by guilt that he wants to make 11-pebble heaps. He knows that 11 isn't a prime number, but still feels a strong first order desire to sort pebbles into heaps of 11. He wishes he didn't have that first order desire, since it obviously conflicts with his second order desire to sort pebbles into prime-numbered heaps.

Except, of course, Larry is wrong. 11 is a prime number. His first and second order desires are not in conflict. He just mistakenly thinks they are because his parents raised him to think 11 wasn't a prime number.
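
(For concreteness, here is the arithmetic the metaphor turns on, as a minimal Python sketch; the heap sizes are simply the ones from the story above.)

```python
def is_prime(n):
    """Trial division; fine for heap sizes this small."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# 13: the majority's favorite heap size; 11: Larry's; 8 and 9: the heaps of P-Hell.
for heap in (13, 11, 9, 8):
    print(heap, is_prime(heap))
# prints: 13 True, 11 True, 9 False, 8 False
# Larry's favored heap size satisfies the second-order criterion after all.
```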

Now let's make the metaphor explicit. Sorting pebbles into prime-numbered heaps represents Doing the Right Thing. Favoring 13-pebble heaps represents heterosexuality, favoring 11-pebble heaps represents homosexuality. Heterosexual sex and love and homosexual sex and love are both examples of The Right Thing. The people who think homosexuality is immoral are objectively mistaken about what is and isn't moral, in the same way the 13-Pebble Favorers are objectively mistaken about the primality of the number 11.

So the first and second order desires of Larry the Closet Homosexual and Larry the Closet 11-Pebble Favorer aren't really in conflict. They just think they are because their parents convinced them to believe in falsehoods.

I am having trouble seeing a significant difference between that and what you've described. Mimi's enabler could argue "human happiness is a Good Thing unless it seriously interferes with some other really important goal," and then one would have to make the engineering judgment of whether heroin addiction and homosexuality fall on opposite sides of the "serious interference" line.

Again, I assumed that Mimi was a psychologically normal human who had normal human second order desires, like having friends and family, being healthy, doing something important with her life, challenging herself, and so on. I assumed she didn't want to use heroin because doing so interfered with her achievement of these important second order desires.

I suppose Mimi could be a mindless hedonist whose second order desires are somehow mistaken about what she really wants, but those weren't the inferences I drew.

Mimi's enabler could argue "human happiness is a Good Thing unless it seriously interferes with some other really important goal,"

Again, recall my mention of a hypothetical Heroin 2.0 in my earlier comment. It seems to me that if Heroin 2.0 were suddenly invented, and Mimi still didn't want to use heroin even though it no longer seriously interfered with her other important values, she might be mistaken. Her second order desire might be a cached thought left over from when she was addicted to Heroin 1.0, and she can safely reject it.

But I will maintain that if Larry and Mimi are fairly psychologically normal humans, Mimi's second order desire to stop using heroin is an authentic and proper desire, because heroin use seriously interferes with the achievement of important goals and desires that normal humans (like Mimi, presumably) have. Larry's second order desire, by contrast, is mistaken, because it's based on the false belief that homosexuality is immoral. Homosexual desires do not interfere with important goals humans have. Rather, they are an important goal that humans have (love, sex, and romance); it's just that the object of that goal is a bit unusual (same sex instead of opposite).

EDITED: To change some language that probably sounded too political and judgemental. The edits do not change the core thesis in any way.

Comment author: Eliezer_Yudkowsky 26 October 2012 04:28:09AM 9 points [-]

We should point people to this whenever they're like "What's special about Less Wrong?" and we can be like "Okay, first, guess how Less Wrong would discuss a reluctant Christian homosexual. Made the prediction? Good, now click this link."

Comment author: Epiphany 26 October 2012 07:21:48AM *  -2 points [-]

I'm surprised you regarded it so highly. The flaws I noticed are located in a response to Ghatanathoah's comment.

Comment author: Epiphany 26 October 2012 07:09:12AM *  6 points [-]

First, I would like to make one thing clear: I have absolutely nothing against homosexuals and in fact qualify as queer because my attractions transcend gender entirely. I call my orientation "sapiosexual" because it is minds that I am sexually attracted to, and good character, never mind the housing.

Stops at "pigheaded jerks"

downvotes

You know where this is going, oh yes, I am going right to fundamental attribution error and political mindkill.

The parents are deemed "pigheaded jerks" - a perception of their personality.

Larry the homosexual, convinced by the exact same reasoning, is given something subtly different - an attack on his behavior -- "he gullibly believed them" and you continue with "They (the Larrys) just think they are because their parents fed them a load of crap." attributing his belief to the situation that Larry is in.

Do you think Larry's grandparents didn't teach Larry's parents the same thing? And that Larry's great grandparents didn't teach it to Larry's grandparents?

This was a "good solid dig" at the other side.

Comment author: Ghatanathoah 26 October 2012 09:48:37AM 4 points [-]

You make an excellent point. I will edit my post to make it sound less political and judgemental.

Comment author: Epiphany 27 October 2012 12:31:28AM *  2 points [-]

I am charmed by your polite acknowledgement of the flaw and am happy to see that this has been updated. Thanks for letting me know that pointing it out was useful. :)

Comment author: MugaSofer 26 October 2012 08:33:00AM *  4 points [-]

I upvoted despite this. If you overlook that one problem, everything else is gold. That single flawed sentence does not affect the awesomeness of the other 14 paragraphs, as it does not contribute to the conclusion.

Comment author: Epiphany 27 October 2012 12:25:32AM *  -2 points [-]

My experience of it was more like:

"Oh, this is nice and organized... Still orderly... Still orderly... OHMYSPAGHETTIMONSTER I DID NOT JUST READ THAT!"

To me, it was a disappointment. Like if I were eating ice cream and then it fell to the ground.

If Eliezer is going to praise it like it's the epitome of what LessWrong should be, then it should be spotless. Do you agree?

Comment author: Vaniver 25 October 2012 10:59:51PM 4 points [-]

I think you're looking at this discussion from the wrong angle. The question is, "how do we differentiate first-order wants that trump second-order wants from second-order wants that trump first-order wants?" Here, the order only refers to the psychological location of the desire: to use Freudian terms, the first order desires originate in the id and the second order desires originate in the superego.

In general, that is a complicated and difficult question, which needs to be answered by careful deliberation- the ego weighing the very different desires and deciding how to best satisfy their combination. (That is, I agree with PhilGoetz that there is no easy way to distinguish between them, but I think this is proper, not bothersome.)

Some cases are easier than others- in the case of Sally, who wants to commit suicide but wants to not want to commit suicide, I would generally recommend methods of effective treatment for suicidal tendencies, not the alternative. But you should be able to recognize that the decision could be difficult, at least for some alteration of the parameters, and if the alteration is significant enough it could swing the other way.

There is also another factor which clouds the analysis, which is that the ego has to weigh the costs of altering, suppressing, or foregoing one of the desires. It could be that Larry has a twin brother, Harry, who is not homosexual, and that Harry is genuinely happier than Larry is, and that Larry would genuinely prefer being Harry to being himself; he's not mistaken about his second-order want.

However, the plan to be (or pretend to be) straight is much more costly and less likely to succeed than the plan to stop wanting to be straight, and that difference in costs might be high enough to determine the ego's decision. Again, it should be possible to imagine realistic cases in which the decision would swing the other way. (Related.)

It's also worth considering how much one wants to engage in sour grapes thinking- much of modern moral intuitions about homosexuality seem rooted in the difficulty of changing it. (Note Alicorn's response.) Given that homosexuality is immutable, plans to change homosexuals are unlikely to succeed, and they might as well make the best of their situation. But I hope it's clear that, at its root, this is a statement about engineering reality, not moral principles- if there were a pill that converted homosexuals to heterosexuals, then the question of how society treats homosexuals would actually be different, and if Larry asked you to help him make the decision of whether or not to take the pill, I'm sure you could think of some things to write in the "pro" column for "take the pill" and in the "con" column for "don't take the pill."

The reason I said this is worth considering is that, as should be unsurprising, the two wants conflict. Often, we don't expect the engineering reality to change. Male homosexuality is likely to be immutable for the lifetimes of the ones that are currently alive, and it's more emotionally satisfying to declare that homosexual desires don't conflict with important goals than to reflect on the tradeoffs that homosexuals face that heterosexuals don't. Doing so, however, requires a sort of willful blindness, which may or may not be worth the reward gained by engaging in it.

Comment author: Ghatanathoah 26 October 2012 12:29:45AM 5 points [-]

if there were a pill that converted homosexuals to heterosexuals, then the question of how society treats homosexuals would actually be different, and if Larry asked you to help him make the decision of whether or not to take the pill, I'm sure you could think of some things to write in the "pro" column for "take the pill" and in the "con" column for "don't take the pill."

I don't deny that there may be some good reasons to prefer to be heterosexual. For instance, imagine Larry lives in an area populated by very few homosexual and bisexual men, and moving somewhere else is prohibitively costly for some reason. If this is the case, then Larry may have a rational second-order desire to become bisexual or heterosexual, simply because doing so would make it much easier to find romantic partners.

However, I would maintain that the specific reason given in Alicorn's original post for why Larry desires to not be homosexual is that he is confused about the morality of homosexuality and is afraid he is behaving immorally, not that he has two genuine desires that conflict.

It's also worth considering how much one wants to engage in sour grapes thinking- much of modern moral intuitions about homosexuality seem rooted in the difficulty of changing it.

I find it illuminating to compare intuitions about homosexuality to intuitions about bisexuality. If homosexual relationships were really inferior to heterosexual ones in some important way, then it would make sense to encourage bisexual people to avoid homosexual relationships and focus on heterosexual ones. This seems wrong to me, however; if I were giving a bisexual person relationship advice, I think the good thing to do would be to advise them to focus on whoever is most compatible with them, regardless of sex.

In general, that is a complicated and difficult question, which needs to be answered by careful deliberation- the ego weighing the very different desires and deciding how to best satisfy their combination. (That is, I agree with PhilGoetz that there is no easy way to distinguish between them, but I think this is proper, not bothersome.)

I think you are probably right that this is proper. I think I may feel biased in favor of second order desires because right now it seems like in my current life I have difficulty preventing my first order desires from overriding them. But if I think about it, it seems like I have many first order desires I cherish and would really prefer to avoid changing.

Comment author: JaySwartz 28 November 2012 09:33:44PM -1 points [-]

While the Freudian description is accurate relative to sources, I struggle to order them. I believe it is an accumulated weighting that makes one thought dominate another. We are indeed born with a great deal of innate behavioral weighting. As we learn, we strengthen some paths and create new paths for new concepts. The original behaviors (fight or flight, etc.) remain.

Based on this known process, I conjecture that experiences have an effect on the weighting of concepts. This weighting sub-utility is a determining factor in how much impact a concept has on our actions. When we discover fire burns our skin, we don't need to repeat the experience very often to weigh fire heavily as something we don't want touching our skin.

If we constantly hear, "blonde people are dumb," each repetition increases the weight of this concept. Upon encountering an intelligent blonde named Sandy, the weighting of the concept is decreased and we create a new pattern for "Sandy is intelligent" that attaches to "Sandy is a person" and "Sandy is blonde." If we encounter Sandy frequently, or observe many intelligent blonde people, the weighting of the "blonde people are dumb" concept is continually reduced.

Coincidentally, I believe this is the motivation behind why religious leaders urge their followers to attend services regularly, even if subconsciously. The service maintains or increases weighting on the set of religious concepts, as well as related concepts such as peer pressure, offsetting any weighting loss between services. The depth of conviction to a religion can potentially be correlated with frequency of religious events. But I digress.

Eventually, the impact of the concept "blonde people are dumb" on decisions becomes insignificant. During this time, each encounter strengthens the Sandy pattern or creates new patterns for blondes. At some level of weighting for the "intelligent" and "blonde" concepts associated with people, our brain economizes by creating a "blonde people are intelligent" concept. Variations of this basic model are generally how beliefs are created and the weights of beliefs are adjusted.

As with fire, we are extremely averse to incongruity. We have a fundamental drive to integrate our experiences into a cohesive continuum. Something akin to adrenaline is released when we encounter incongruity, driving us to find a way to resolve the conflicting concepts. If we can't find a factual explanation, we rationalize one in order to return to balanced thoughts.

When we make a choice of something over other things, we begin to consider the most heavily weighted concepts that are invoked based on the given situation. We work down the weighting until we reach a point where a single concept outweighs all other competing concepts by an acceptable amount.

In some situations, we don't have to make many comparisons due to the invocation of very heavily weighted concepts, such as when a car is speeding towards us while we're standing in the roadway. In other situations, we make numerous comparisons that yield no clear dominant concept and can only make a decision after expanding our choice of concepts.

This model is consistent with human behavior. It helps to explain why people do what they do. It is important to realize that this model applies no division of concepts into classes. It uses a fluid ordering system. It has transient terminal goals based on perceived situational considerations. Most importantly, it bounds the recursion requirements. As the situation changes, the set of applicable concepts to consider changes, resetting the core algorithm.
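
For concreteness, here is a toy Python sketch of the weighting model described above; the concept names, update sizes, and decision margin are illustrative assumptions, not measured quantities.

```python
concepts = {"blonde people are dumb": 0.0, "Sandy is intelligent": 0.0}

def reinforce(concept, amount=1.0):
    """Each repetition or confirming experience adds weight to a concept."""
    concepts[concept] = concepts.get(concept, 0.0) + amount

def contradict(concept, amount=1.0):
    """Each disconfirming experience reduces a concept's weight."""
    concepts[concept] = concepts.get(concept, 0.0) - amount

def decide(applicable, margin=0.5):
    """Walk down the applicable concepts by weight and act on the top one
    only if it outweighs the runner-up by an acceptable margin."""
    ranked = sorted(applicable, key=lambda c: concepts.get(c, 0.0), reverse=True)
    if len(ranked) == 1:
        return ranked[0]
    if concepts.get(ranked[0], 0.0) - concepts.get(ranked[1], 0.0) >= margin:
        return ranked[0]
    return None  # no clear dominant concept; widen the set of concepts considered

# Hearing the stereotype repeatedly, then meeting Sandy often:
for _ in range(5):
    reinforce("blonde people are dumb")
for _ in range(8):
    contradict("blonde people are dumb")
    reinforce("Sandy is intelligent")

print(decide(["blonde people are dumb", "Sandy is intelligent"]))
# -> "Sandy is intelligent", once the stereotype's weight has been eroded
```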

Comment author: NancyLebovitz 28 November 2012 10:45:14PM 2 points [-]

From what I've heard, the typical response to believing that blond people are dumb and observing that blond Sandy is intelligent is to believe that Sandy is an exception, but blond people are dumb.

Most people are very attached to their generalizations.

Comment author: JaySwartz 29 November 2012 03:52:23PM -1 points [-]

Quite right about attachment. It may take quite a few exceptions before it is no longer an exception. Particularly if the original concept is regularly reinforced by peers or other sources. I would expect exceptions to get a bit more weight because they are novel, but not so much as to offset higher levels of reinforcement.

Comment author: CCC 26 October 2012 07:38:56AM 2 points [-]

Your example does an exemplary job of explaining your viewpoint on Larry's situation. To explain the presumed viewpoint of Larry's parents on his situation requires merely a very small change: replacing all occurrences of the number 11 with the number 9.

The people who think homosexuality is immoral are objectively mistaken about what is and isn't moral, in the same way the 13-Pebble Favorers are objectively mistaken about the primality of the number 11.

How do you define objective morality? I've heard of several possible definitions, most of which conflict with each other, so I'm a little curious as to which one you've selected.

Comment author: Ghatanathoah 26 October 2012 09:37:17AM *  2 points [-]

To explain the presumed viewpoint of Larry's parents on his situation requires merely a very small change: replacing all occurrences of the number 11 with the number 9.

I'm not sure I understand you. Do you mean that a more precise description of Larry's parents' viewpoint is that the Pebblesorter versions of them think 11 and 9 are the same numbers? Or are you trying to explain how a religious fundamentalist would use the Pebblesorter metaphor if they were making the argument?

How do you define objective morality?

I define morality as a catch-all term for what are commonly referred to as the "good things in life": love, fairness, happiness, creativity, people achieving what they want in life, etc. So something is morally good if it tends to increase those things. In other words, "good" and "right" are synonyms. Morality is objective because we can objectively determine whether people are happy, being treated fairly, getting what they want out of life, etc. In Larry's case, having a relationship with Ted the next-door neighbor would be the morally right thing to do because it would increase the amount of love, happiness, people getting what they want, etc. in the world.

I think the reason that people have such a problem with the idea of objective morality is that they subscribe, knowingly or not, to motivational internalism. That is, they believe that moral knowledge is intrinsically motivating, simply knowing something is right motivates someone to do it. They then conclude that since intrinsically motivating knowledge doesn't seem to exist, that morality must be subjective.

I am a motivational externalist, so I do not buy this argument. I believe that people are motivated to act morally by our conscience and moral emotions (e.g. compassion, sympathy). If someone has no motivation to act to increase the "good things in life," that doesn't mean morality is subjective; it simply means that they are a bad person. People who lack moral emotions exist in real life, and they seem to lack any desire to act morally at all, unless you threaten to punish them if they don't.

The idea of intrinsically motivating knowledge is pretty scary if you think about it. What if it motivated you to kill people? Or what if it made you worship Darkseid? The Anti-Life equation from Final Crisis works pretty much exactly the way motivational internalists think moral knowledge does, except that instead of motivating people to care about others and treat people well, it instead motivates them to serve evil pagan gods from outer space.

Comment author: CCC 26 October 2012 10:59:28AM 4 points [-]

Or are you trying to explain how a religious fundamentalist would use the Pebblesorter metaphor if they were making the argument.

Yes, exactly. Larry's parents do not believe that they are mistaken, and are not easily proved mistaken.

I define morality as being a catch-all term to describe what are commonly referred to as the "good things in life," love, fairness, happiness, creativity, people achieving what they want in life, etc. So something is morally good if it tends to increase those things.

That's a good definition, and it avoids most of the obvious traps. A bit vague, though. Unfortunately, there is a non-obvious trap; this definition leads to the city of Omelas, where everyone is happy, fulfilled, creative... except for one child, locked in the dark in a cellar, starved; one child on whose suffering the glory of Omelas rests. Saving the child decreases overall happiness, health, achievement of goals, etc., etc. Despite all this, I'd still think that leaving the child locked away in the dark is a wrong thing. (This can also lead to Pascal's Mugging, as an edge case)

I think the reason that people have such a problem with the idea of objective morality is that they subscribe, knowingly or not, to motivational internalism.

In my case, it's because every attempt I've seen at defining an objective morality has potential problems. Given to you by an external source? But that presumes that the external source is not Darkseid. Written in the human psyche? There are some bad things in the dark corners of the human psyche. Take whatever action is most likely to transform the world into a paradise? Doesn't usually work, because we don't know enough to always select the correct actions. Do unto others as you would have them do unto you? That's a very nice one - but not if Bob the Masochist tries to apply it.

Of course, subjective morality is no better - and is often worse (mainly because a society in general can reap certain benefits from a shared idea of morality).

What does seem to work is to pick a society whose inhabitants seem happy and fulfilled, and try to use whatever rules they use. The trouble with that is that it's kludgy, uncertain, and could often do with improvement (though it's been improved often enough in human history that many - not all, but many - obvious 'improvements' aren't improvements in practice).

Comment author: Nornagest 27 October 2012 12:46:21AM *  5 points [-]

Unfortunately, there is a non-obvious trap; this definition leads to the city of Omelas, where everyone is happy, fulfilled, creative... except for one child, locked in the dark in a cellar, starved; one child on whose suffering the glory of Omelas rests. Saving the child decreases overall happiness, health, achievement of goals, etc., etc. Despite all this, I'd still think that leaving the child locked away in the dark is a wrong thing.

Aside from its obvious artificiality, and despite the fact that all our instincts cry out against it, it's not at all clear to me that there are any really good reasons to reject the Omelasian solution. This is of course a fantastically controversial position (just look at the response to Torture vs. Dust Specks, which might be viewed as an updated and reframed version of the central notion of The Ones Who Walk Away From Omelas), but it nonetheless seems to be a more or less straightforward consequence of most versions of consequentialist ethics.

As a matter of fact, I'm inclined to view Omelas as something between an intuition pump and a full-blown cognitive exploit: a scenario designed to leverage our ethical heuristics (which are well-adapted to small-scale social groups, but rather less well adapted to exotic large-scale social engineering) in order to discredit a viewpoint which should rightfully stand or fall on pragmatic grounds. A tortured child is something that hardly anyone can be expected to think straight through, and trotting one out in full knowledge of this fact in order to make a point upsets me.

Comment author: CCC 28 October 2012 02:35:57PM *  3 points [-]

Omelas is a cognitive exploit, yes. That's really the point - it forces people to consider how appropriate their heuristics really are. Some people would make Omelas if they could; some wouldn't, for the sake of the one child. A firm preference for either possibility can be controversial, partially because there are good reasons for both states and partially because different ethical heuristics get levered in different directions. (A heuristic that compares the number of people helped vs. the number hurt will pull one way; a heuristic that says "no torture" will pull the other way).

Comment author: satt 03 November 2012 06:56:38PM 3 points [-]

Aside from its obvious artificiality, and despite the fact that all our instincts cry out against it, it's not at all clear to me that there are any really good reasons to reject the Omelasian solution.

There's a real world analogue to Omelas. The UK (like other countries) has a child protection system, intended to minimize abuse & neglect of children. The state workers (health visitors, social workers, police officers, hospital staff, etc.) who play roles in the system can try to intervene when the apparent risk of harm to a child reaches some threshold.

If the threshold is too low, the system gratuitously interferes with families' autonomy, preventing them from living their lives in peace. If the threshold is too high, the system fails to do its job of preventing guardians from tormenting or killing their children. Realistically, a trade-off is inevitable, and under any politically feasible threshold "some children will die to preserve the freedom of others", as the sociologist Robert Dingwall put it. So the UK's child protection system takes the Omelasian route.

The real life situation is less black & white than Omelas, but it looks like the same basic trade-off in a non-artificial setting. I wonder whether people's intuitions about Omelas align with their intuitions about the real life child protection trade-off (and indeed whether both align with society's revealed collective preference).

Comment author: Nornagest 03 November 2012 08:32:58PM *  3 points [-]

I'd agree that these are consequentially similar, but I don't think they're psychologically similar at all. There's an element of exploitation in Omelas that isn't present in social services: state workers are positioned as protecting children from an evil unrelated to the state, while Omelas is cast as willfully perpetrating an evil in order to ensure its own prosperity. People tend to think of moral culpability in terms of blame, and although some blame might attach itself to social workers for failing to stop abuses that they might prevent with more intrusive intervention thresholds, it's much diluted by the vastly more viscerally appalling culpability carried by actual abusers. Omelas offers no subjects for condemnation other than the state apparatus and the citizens supporting it.

On top of that, intrusive child protection services have very salient failings: most parents (and most children) would find government intrusion into their family lives extremely unpleasant, unpleasant enough to fear and take political action to avoid. Meanwhile, the consequences of no longer torturing Omelas' sacrificial lamb are unspecified and thus about as far from salient (re-entrant?) as it's possible to get. Even in a hypothetical fully specified Omelas where we could point to a chain of effects, I'd expect that chain to be a lot longer and harder to follow, and its endpoints hence less emotionally weighty.

Comment author: Multiheaded 20 November 2012 11:10:55AM *  -1 points [-]

There's an element of exploitation in Omelas that isn't present in social services: state workers are positioned as protecting children from an evil unrelated to the state, while Omelas is cast as willfully perpetrating an evil in order to ensure its own prosperity....

...Omelas offers no subjects for condemnation other than the state apparatus and the citizens supporting it.

Link to me mentioning both Omelas and another "eternally tortured child" short story, SCP-231 (potentially highly distressing so I'm not hotlinking it), as an intuition pump against Mencius Moldbug's "Patchwork" proposal (the "strong"/"total" vision of Patchwork, with absolute security of sovereigns) along very similar lines of analogy, over in the Unqualified Reservations comments.

Comment author: [deleted] 21 November 2012 04:19:12PM 3 points [-]

Disagree. SCP-231 is a bad source of intuition because it is crafted to be torture & horror porn.

Comment author: TheOtherDave 03 November 2012 07:48:08PM 0 points [-]

I wonder whether people's intuitions about Omelas align with their intuitions about the real life child protection trade-off

It would surprise me if they did, given that Omelas was constructed as an intuition pump.

Comment author: Desrtopa 27 October 2012 01:15:06AM 3 points [-]

Personally, on reading the story, I decided immediately that not only would I not walk away from Omelas (which solves nothing anyway), I was fully in favor of the building of Omelases, provided that even more efficient methods of producing prosperity were not forthcoming.

The prevention of dust specks may vanish into nothing in my intuitive utility calculations, but it immediately hits me that a single tortured child is peanuts beside the cumulative mass of suffering that goes on in our world all the time. With a hundred thousand dollars or so to the right charity, you could probably prevent a lot more disutility than that of the tortured child. If for that money I could either save one child from that fate, or create a city like Omelas minus the sacrifice, then it seems obvious to me that creating the city is a better bargain.

Comment author: Ghatanathoah 26 October 2012 12:39:00PM 2 points [-]

A bit vague, though.

That's true, but I think that human values are so complex that any attempt to compress morality into one sentence is pretty much obligated to be vague.

Unfortunately, there is a non-obvious trap; this definition leads to the city of Omelas, where everyone is happy, fulfilled, creative... except for one child, locked in the dark in a cellar, starved; one child on whose suffering the glory of Omelas rests.

One rather obvious rejoinder is that there are currently hundreds, if not thousands of children who are in the same state as the unfortunate Omelasian right now in real life, so reducing the number to just one child would be a huge improvement. But you are right that even one seems too many.

A more robust possibility might be to add "equality" to the list of the "good things in life." If you do that then Omelas might be morally suboptimal because the vast inequality between the child and the rest of the inhabitants might overwhelm the achievement of the other positive values. Now, valuing equality for its own sake might add other problems, but these could probably be avoided if you were sufficiently precise and rigorous in defining equality.

In my case, it's because every attempt I've seen at defining an objective morality has potential problems. Given to you by an external source? But that presumes that the external source is not Darkseid. Written in the human psyche?

I think the best explanation I've seen is something like the metaethics Eliezer espouses, which is (if I understand it correctly) that morality is a series of internally consistent concepts related to achieving what I called "the good things in life," and that human beings (those who are not sociopaths, anyway) care a lot about these concepts of wellbeing and want to follow and fulfill them.

In other words, morality is like mathematics in some ways: it generates consistent answers (on the topic of people's wellbeing) that are objectively correct. But it is not like the Anti-Life Equation because it is not intrinsically motivating. Humans care about morality because of our consciences and our positive emotions, not because it is universally compelling.

To put it another way, I think that if you were to give a superintelligent paperclipper a detailed description of human moral concepts and offered to help it make some more paperclips if it elucidated these concepts for you, it would probably generate a lot of morally correct answers. It would feel no motivation to obey these answers of course, since it doesn't care about morality; it cares about making paperclips.

This is a little like morality being "embedded in the human psyche" in the sense that the desire to care about morality is certainly embedded in there somewhere (probably in the part we label "conscience"). But it is also objective in the sense that moral concepts are internally consistent independent of the desires of the mind. To use the Pebblesorter metaphor again, caring about sorting pebbles into prime-numbered heaps is "embedded in the Pebblesorter psyche," but which numbers are prime is objective.

There are some bad things in the dark corners of the human psyche.

That's certainly true, but that simply means that humans are capable of caring about other things besides morality, and these other things that people sometimes care about can be pretty bad. This obviously makes moral reasoning a lot harder, since it's possible that one of your darker urges might be masquerading as a moral judgement. But that just means that moral reasoning is really hard to do; it doesn't mean that it's wrong in principle.

Comment author: CCC 26 October 2012 01:49:28PM 1 point [-]

That's true, but I think that human values are so complex that any attempt to compress morality into one sentence is pretty much obligated to be vague.

Vague or flawed. Given those options, I think I'd prefer vague.

One rather obvious rejoinder is that there are currently hundreds, if not thousands of children who are in the same state as the unfortunate Omelasian right now in real life, so reducing the number to just one child would be a huge improvement. But you are right that even one seems too many.

I agree completely. If I had any idea how Omelas worked, I might be tempted to try seeing if any of those ideas could be used to improve current societies.

A more robust possibility might be to add "equality" to the list of the "good things in life." If you do that then Omelas might be morally suboptimal because the vast inequality between the child and the rest of the inhabitants might overwhelm the achievement of the other positive values. Now, valuing equality for its own sake might add other problems, but these could probably be avoided if you were sufficiently precise and rigorous in defining equality.

Hmmm. To avoid Omelas, equality would have to be fairly heavily weighted; any finite weighting given to equality, however, will simply mean that Omelas is only possible given a sufficiently large population (by balancing the cost of the inequality with the extra happiness of the extra inhabitants).

Personally, I think that valuing equality itself is a good idea, if mixed in with a suitable set of other values; one possible failure mode for overvaluing equality is an equality of wretchedness, a state of "we're all equal because we all have nothing and no hope" (this is counteracted by providing suitable weights for the other good things, like happiness and the freedom to try to achieve goals (but what about the goal of vengeance? For a real slight? An imagined slight?)).

An example of a society failing due to an over-reliance on equality, is France shortly after the French Revolution.

I think the best explanation I've seen is something like the metaethics Eliezer espouses

I think that I should read through that entire sequence in the near future. I'd like to see his take on metaethics.

In other words, morality is like mathematics in some ways: it generates consistent answers (on the topic of people's wellbeing) that are objectively correct.

Huh. I think we're defining 'morality' slightly differently here.

My definition of 'morality' would be 'a set of rules, decided by some system, such that one can feed in a given action and (usually) get out whether that action was a good or a bad action'. Implicit in that definition is the idea that two people may disagree on what those rules actually are - that there might be better or worse moralities, and that therefore the answers given by a randomly chosen morality need not be objectively correct.

To take an example; certain ancient cultures may have had the belief that human sacrifice was necessary, on Midwinter's Day, to persuade summer to come back and let the crops grow. In such a culture, strapping someone down and killing them in a particularly painful way may have been considered the right thing to do; and a member of that society would argue for it on the basis that the tribe needs the crops to grow next year (and, if selected, might even walk up voluntarily to be killed). In his morality, these annual deaths are a good thing, because they make the crops grow; in my morality, these annual deaths are a bad thing, and moreover, they don't make the crops grow.

For what it's worth, I do agree with you that getting the result out of a moral system does not, in and of itself, force an intrinsic motivation to follow that course of action; people can be trained from a young age to feel that motivation, and many people are, but there's really no reason to assume that it is always there.

If there is an objectively correct morality, that can apply to all situations, then I don't know what it is - my current system of morality (based heavily on the biblical principle of 'Love thy neighbour') covers many situations, but is not good at the average villain's sadistic choice (where I can save the lives of group A or group B but not both).

There are some bad things in the dark corners of the human psyche.

That's certainly true, but that simply means that humans are capable of caring about other things besides morality, and these other things that people sometimes care about can be pretty bad.

Hmmm. A lot of the darkness in the human psyche can be explained in this manner; but I'd think that there are other parts which cannot be explained in this way (when a person goes out of his way to hurt someone that he'll likely never see again, for example). A lot of these, I'd think, are attributable to a lack of empathy; a person who sees other people as non-people (or as Not True People, for some self-including definition of True People).

Comment author: Ghatanathoah 26 October 2012 02:53:57PM 3 points [-]

To avoid Omelas, equality would have to be fairly heavily weighted

I think a possible solution would be to have equality and the other values have diminishing returns relative to each other. So in a society with a lot of other good things there is a great obligation to increase equality, whereas in a society with lots of suffering people it's more important to do whatever it takes to raise the general level of happiness and not worry as much about equality. So a place as wondrous as Omelas would have a great obligation to help the child.

one possible failure mode for overvaluing equality is an equality of wretchedness, a state of "we're all equal because we all have nothing and no hope"

I think one possible way to frame equality to avoid this is to imagine, metaphorically, that positive things give a society "morality points" and negative things give it "negative morality points." Then have it so that a positive deed that also decreases inequality gets "extra points," while a negative deed that also exacerbates inequality gets "extra negative points." So in other words, helping the rich isn't bad, it's just much less good than helping the poor.

This also avoids another failure mode: Imagine an action that hurts every single person in the world, and hurts the rich 10 times as much as it hurts the poor. Such an action would increase equality, but praising it seems insane. Under the system I proposed such an action would still count as "bad," though it would be a bit less bad than a bad action that also increased inequality.

Huh. I think we're defining 'morality' slightly differently here.

My definition of 'morality' would be 'a set of rules, decided by some system, such that one can feed in a given action and (usually) get out whether that action was a good or a bad action'.

I don't think that's that different from what I'm saying; I may be explaining it poorly. I do think that morality is essentially like a set of rules or an equation that one uses to evaluate actions. And I consider it objective in that the same equation should produce the same result each time an identical action is fed into it, regardless of what entity is doing the feeding. Then it is up to our moral emotions to motivate us to take actions the equation would label as "good."

Describing it like that sounds a bit clinical though, so I'd like to emphasize that moral rules and equations are ultimately about people's wellbeing and increasing the good things in life. If you feed an action that improves these values into a rule-set and it comes out labelled "bad," then those rules probably don't even deserve to be called morality; they are some other completely different concept.
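
To make the "equation" picture concrete, here is a minimal sketch; the factors and weights are placeholders invented for illustration, not a claim about what the ideal equation actually contains.

```python
WEIGHTS = {"happiness": 1.0, "fairness": 1.0, "love": 1.0}

def evaluate(action_effects):
    """A pure function: the same action description yields the same verdict,
    no matter which entity (human, Pebblesorter, paperclipper) computes it.
    Whether the evaluator is motivated by the verdict is a separate question."""
    score = sum(w * action_effects.get(factor, 0.0) for factor, w in WEIGHTS.items())
    return "good" if score > 0 else "bad"

print(evaluate({"happiness": 2.0, "love": 2.0}))        # -> good
print(evaluate({"happiness": -3.0, "fairness": -1.0}))  # -> bad
```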

Implicit in that definition is the idea that two people may disagree on what those rules actually are - that there might be better or worse moralities, and that therefore the answers given by a randomly chosen morality need not be objectively correct.

This relates to Eliezer's metaethics again; he basically describes morality as an equation or "fixed computation" related to wellbeing that is so complex that it's impossible to wrap your mind around it, so you have to work in approximations. So what you would label a "better" morality is one that more closely resembles the "ideal equation."

To take an example; certain ancient cultures may have had the belief that human sacrifice was necessary, on Midwinter's Day, to persuade summer to come back and let the crops grow.

It seems to me that this is more a disagreement about certain facts of nature than about morality per se. It seems to me that if there really were some sort of malevolent supernatural entity that wouldn't let summer come unless you made sacrifices to it, and it was impossible to stop such an entity, then sacrificing to it might be the only option left. If the choice is "everyone dies of starvation" vs. "one person dies from being sacrificed, everyone else lives" it seems like any worthwhile set of moral rules would label the second option as the better one (though it would not be nearly as good as somehow stopping the entity). The reason that sacrificing people is bad is that such entities do not exist, so such a sacrifice tortures someone, but doesn't save anyone else's life.

If there is an objectively correct morality, that can apply to all situations, then I don't know what it is

I think the problem is that an objectively correct set of moral rules that could perfectly evaluate any situation would be so complicated no one would be able to use it effectively. Even if we obtained such a system we would have to use crude approximations until we managed to get a supercomputer big enough to do the calculations in a timely manner.

A lot of these, I'd think, are attributable to a lack of empathy; a person who sees other people as non-people

I count empathy as one of the "moral emotions" that motivates people to act morally. So a lack of empathy would be a type of lack of motivation towards moral behavior.

Comment author: CCC 28 October 2012 03:05:01PM 0 points [-]

I think a possible solution would be to have equality and the other values have diminishing returns relative to each other.

That seems to work very well. So the ethical weight of a factor can be proportional to the reciprocal thereof (perhaps with a sign change). Then, for any number of people, there is a maximum happiness-factor that the equation can produce.
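
For concreteness, a toy sketch of the difference this makes; the constants and function names are illustrative assumptions, not claims about real utilities.

```python
def omelas_linear(n_happy, happiness_per_person=1.0, child_penalty=1000.0):
    """With linear aggregation, a big enough city always outweighs the child."""
    return n_happy * happiness_per_person - child_penalty

def omelas_capped(n_happy, max_happiness=500.0, half_point=1000.0, child_penalty=1000.0):
    """With a saturating happiness term (marginal value shrinking as the city
    grows), total happiness never exceeds max_happiness, so the child's
    suffering can dominate no matter how large the city becomes."""
    happiness = max_happiness * n_happy / (n_happy + half_point)
    return happiness - child_penalty

for n in (10_000, 1_000_000, 10**9):
    print(n, omelas_linear(n) > 0, omelas_capped(n) > 0)
# linear: positive for every large n; capped: negative for every n,
# because the bounded happiness (at most 500) never exceeds the penalty (1000)
```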

So. This can be used to make an equation that makes Omelas bad for any sized population. But not everyone agrees that Omelas is bad in the first place; so is that necessarily an improvement to your ethical equation?

I think one possible way to frame equality to avoid this is to imagine, metaphorically, that positive things give a society "morality points" and negative things give it "negative morality points." Then have it so that a positive deed that also decreases inequality gets "extra points," while a negative deed that also exacerbates inequality gets "extra negative points." So in other words, helping the rich isn't bad, it's just much less good than helping the poor.

This also avoids another failure mode: Imagine an action that hurts every single person in the world, and hurts the rich 10 times as much as it hurts the poor. Such an action would increase equality, but praising it seems insane. Under the system I proposed such an action would still count as "bad," though it would be a bit less bad than a bad action that also increased inequality.

That failure mode can also be dealt with by combining equality with other factors, such as not being hurt. (The relative weightings assigned to these factors would be important, of course).

I don't think that's that different from what I'm saying; I may be explaining it poorly. I do think that morality is essentially like a set of rules or an equation that one uses to evaluate actions. And I consider it objective in that the same equation should produce the same result each time an identical action is fed into it, regardless of what entity is doing the feeding. Then it is up to our moral emotions to motivate us to take actions the equation would label as "good."

That seems like a reasonable definition; my point is that not everyone uses the same equation.

It seems to me that this is more a disagreement about certain facts of nature than about morality per se.

Hmmm. You're right - that was a bad example. (I don't know if you're familiar with the Chanur series, by C. J. Cherryh? I ask because my first thought for a better example came straight out of there - she does a good job of presenting alien moralities.)

Let me provide a better one. Consider Marvin, and Fred. Marvin's moral system considers the total benefit to the world of every action; but he tends to weight actions in favour of himself, because he knows that in the future, he will always choose to do the right thing (by his morality) and thus deserves ties broken in his favour.

Fred's moral system entirely discounts any benefits to himself. He knows that most people are biased to themselves, and does this in an attempt to reduce the bias (he goes so far as to be biased in the opposite direction).

Both of them get into a war. Both end up in the following situation:

Trapped in a bunker, together with one allied soldier (a stranger, but on the same side). An enemy manages to throw a grenade in. The grenade will kill both of them, unless someone leaps on top of it, in which case it will only kill that one.

Fred leaps on top of the grenade. His morality values the life of the stranger over his own, and he thus acts to save the stranger first.

Marvin throws the stranger onto the grenade. His morality values his own life over a stranger who might, with non-trivial probability, be a truly villainous person.

Here we have two different moralities, leading to two different results, in the same situation.

I think the problem is that an objectively correct set of moral rules that could perfectly evaluate any situation would be so complicated no one would be able to use it effectively. Even if we obtained such a system we would have to use crude approximations until we managed to get a supercomputer big enough to do the calculations in a timely manner.

That is worth keeping in mind. Of course, if such a system is found, we could feed in dozens of general situations in advance - and if in a tough situation, then after resolving it one way or another, we could feed that situation into the computer and find out for future reference which course of action was correct (that eliminates a lot of the time constraint).

Comment author: Ghatanathoah 28 October 2012 07:02:38PM *  -1 points [-]

That seems like a reasonable definition; my point is that not everyone uses the same equation.

That's true. The question is, how often is this because people have totally different values, and how often is it that they have extremely similar "ideal equations" but different "approximations" of what they think that equation is? I think for sociopaths, and other people with harmful ego-syntonic mental disorders, it's probably the former, but it's more often the latter for normal people.

Eliezer has argued that it is confusing and misleading to use the word "morality" to refer to codes of behavior entities possess that have nothing to do with improving people's wellbeing, making the world a happier, fairer, freer place, and similar concepts. He argues that creatures like the Pebblesorters do not care about morality at all, they care about sorting pebbles and calling sorting pebbles a type of "morality" confuses two separate concepts.

Let me provide a better one. Consider Marvin, and Fred.

It sounds to me like Fred and Marvin both care about achieving similar moral objectives, but have different ideas about how to go about it. I'd say that again, which moral code is better can only be determined by trying to figure out which one actually does a better job of achieving moral goals. "Moral progress" can be regarded as finding better and better heuristics to achieve those moral goals, and finding a closer representation of the ideal equation.

Again, I think I agree with Eliezer that a truly alien code of behavior, like that exhibited by sociopaths, and by really inhuman aliens like the Pebblesorters or paperclippers, should maybe be referred to by some word other than morality. This is because the word "morality" usually refers to doing things like making the world a happier place and increasing the positive things in life. So if we refer to the behavior code of a creature that cares nothing for doing those things as "morality," we will give the subconscious impression that that creature really does care about doing good and simply disagrees about how to go about it. This isn't correct; sociopaths and paperclippers don't care about other people at all, so we shouldn't give the impression that they do.

I am less sure about whether the term "morality" should be used to refer to the behavior codes of aliens that care about some of the same positive things that normal humans do, but also differ in important ways, like the Babyeaters and Super-Happy-People. Maybe we could call it "semi-morality?"

(I don't know if you're familiar with the Chanur series, by C. J. Cherryh? I ask because my first thought for a better example came straight out of there - she does a good job of presenting alien moralities)

Sorry, the only Cherryh I've read is "The Scapegoat." I thought it gave a good impression of how alien values would look to humans, but wish it had given some more ideas about what it was that made elves think so differently.

Comment author: TheOtherDave 26 October 2012 02:57:06PM 1 point [-]

Hmmm. To avoid Omelas, equality would have to be fairly heavily weighted; any finite weighting given to equality, however, will simply mean that Omelas is only possible given a sufficiently large population (by balancing the cost of the inequality with the extra happiness of the extra inhabitants).

Well, if we're really going to take Omelas seriously as our test case, then presumably we also have to look at how much that "extra happiness" (or whatever else we're putting in the plus column) is reduced by those who walk away from it, and by those who are traumatized by it, and so forth. It might turn out that increasing the population doesn't help.

But that's just a quibble. I basically agree: once we swallow the assumption that for some reason we neither understand nor can ameliorate, the happiness of the many ineluctably depends on the misery of the few, then a total-utilitarian approach either says that equality is the most important factor in utility (which is a problem like you describe), or endorses the few being miserable.

That's quite an assumption to swallow, though. I have no reason to believe it's true of the world I live in.

A weaker version that might be true of the world I actually live in is that concentrating utility-generating resources in fewer hands results in higher total utility-from-all-sources-other-than-equality (Ua) but more total-disutility-from-inequality (Ub). But it's not quite as clear that our (Ua, Ub) preferences are lexicographic.

Comment author: CCC 28 October 2012 02:45:50PM 1 point [-]

Well, if we're really going to take Omelas seriously as our test case, then presumably we also have to look at how much that "extra happiness" (or whatever else we're putting in the plus column) is reduced by those who walk away from it, and by those who are traumatized by it, and so forth. It might turn out that increasing the population doesn't help.

Doubling the population should double the happiness; double the trauma; double the people who walk away. The end result should be (assuming a high enough population that the Law of Large Numbers is a reasonable heuristic) about twice the utility.

A weaker version that might be true of the world I actually live in is that concentrating utility-generating resources in fewer hands results in higher total utility-from-all-sources-other-than-equality (Ua) but more total-disutility-from-inequality (Ub). But it's not quite as clear that our (Ua, Ub) preferences are lexicographic.

Consider the case of farmland; larger farms produce more food-per-acre than smaller farms. (Why? Because larger farms attract commercial farmers with high-intensity farming techniques; and they can buy better farming equipment with their higher profits). Now, in the case of farmland, the optimal scenario is not equality; you don't want everyone to have the same amount of farmland, you want those who are good at farming to have most of it. (For a rather dramatic example of this, see the Zimbabwe farm invasions).

On the other hand, consider the case of food itself. Here, equality is a lot more important; giving one man food for a hundred while ninety-nine men starve is clearly a failure case, as a lot of food ends up going rotten and ninety-nine people end up dead.

So the optimal (Ua, Ub) ordering depends on exactly what it is that is being ordered; there is no universally correct ordering.
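A rough sketch of why the ordering depends on the good, using my own toy numbers rather than anything from the comment above: food has sharply diminishing per-person returns, so spreading it out wins, while the toy farmland production function rewards concentration.

    import math

    def food_utility(shares):
        """Log utility per person; a zero share (starvation) is a large loss."""
        return sum(math.log(s) if s > 0 else -100.0 for s in shares)

    def farm_output(acres_per_farmer):
        """Toy production function: yield per acre grows with farm size."""
        return sum(a * (1.0 + 0.1 * math.log(1 + a)) for a in acres_per_farmer)

    print(food_utility([1.0] * 100), food_utility([100.0] + [0.0] * 99))  # equal >> concentrated
    print(farm_output([1.0] * 100), farm_output([100.0] + [0.0] * 99))    # concentrated > equal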

Comment author: TheOtherDave 28 October 2012 03:36:40PM 1 point [-]

You seem to be assuming a form of utility that is linear with happiness, with trauma, with food-per-acre, with starving people, etc.
I agree with you that if we calculate utility this way, what you say follows.
It's not clear to me that we ought to calculate utility this way.

Comment author: nshepperd 26 October 2012 11:24:25AM 1 point [-]

What does seem to work is to pick a society whose inhabitants seem happy and fulfilled, and trying to use whatever rules they use.

If you're going to do that, why not just directly use happiness and fulfillment?

Comment author: CCC 26 October 2012 12:15:44PM 1 point [-]

If you're going to do that, why not just directly use happiness and fulfillment?

I cannot create an entire ethical framework, for everyone to follow, on any basis, and expect that it will be able to hold up for the next thousand years. If I try, I will fail, and this is why: because people cheat. Many intelligent agents will poke at the rules, seeking a possible exploit thereof that enhances their success at the possible expense of their neighbours' success. Over the next thousand years, there will be thousands, probably millions, of such intelligent agents hunting for, and attempting to exploit, flaws in the system; people who stick by the letter of the rule, and avoid the spirit of the rule. I cannot create an entire ethical framework, because I cannot outwit thousands or millions of future peoples' attempts to find and exploit gaps and loopholes in my framework.

Hence, the best that I can do is to find a system that has already endured a period of field testing and that hasn't broken yet; and perhaps attempt a small, incremental improvement (no more) in order to test that improvement.

Comment author: nshepperd 26 October 2012 01:03:22PM 0 points [-]

What does that have to do with the situation at hand? Morality is an abstract division of actions into right and wrong, not some set of laws laid down by philosophers on the rest of the population. If you're trying to work out what you mean by "morality" and use some criteria (such as something including happiness and fulfillment of populations which adopt that definition) to choose from a bunch of alternatives, then probably those criteria themselves are the most accurate definition of "morality" you could hope to find. I might add, in [almost] exactly the same way that a program which writes and then executes a program to add two numbers is, in fact, itself a program that adds two numbers.

You can write out your final definition in legalese later, if the situation calls for it.
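Taking the program analogy literally, a minimal sketch (my own, purely illustrative): a program that writes and then executes an adding program is itself an adding program, just as the criteria used to pick a morality are themselves a working definition of morality.

    def make_adder_source():
        # "Write" the program: produce source code for an adder.
        return "def add(a, b):\n    return a + b\n"

    def add_via_generated_program(a, b):
        namespace = {}
        exec(make_adder_source(), namespace)  # "execute" the written program
        return namespace["add"](a, b)

    print(add_via_generated_program(2, 3))  # 5 -- the wrapper itself adds two numbers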

Comment author: CCC 26 October 2012 01:55:45PM 1 point [-]

What does that have to do with the situation at hand? Morality is an abstract division of actions into right and wrong, not some set of laws laid down by philosophers on the rest of the population.

Morality comes with an implicit rule; when it says that "this action is the right action to take in this situation", then the implicit rule is "if you find yourself in this situation, take this action". There is usually no Morality Policeman ready to administer punishment if the rule is not followed, and the choice to follow the rule or not remains; but the rule is there.

If you're trying to work out what you mean by "morality" and use some criteria (such as something including happiness and fulfillment of populations which adopt that definition) to choose from a bunch of alternatives, then probably those criteria themselves are the most accurate definition of "morality" you could hope to find.

The difficulty is that I know that the algorithm that I am following is very likely not to fulfil the criteria in the very best possible way; merely in (more or less) the best possible way that they have been fulfilled in the past. If I simply list the criteria, then I falsely imply that the chosen system of morality is the best fit for those criteria; and I am trying to avoid that implication.

Comment author: Peterdjones 26 October 2012 07:50:36AM 0 points [-]

Define it, or defend it? There are a lot of defences, but not so many definitions.

Comment author: Jayson_Virissimo 26 October 2012 07:36:40AM 1 point [-]

I think the metaphor misses something important here, because the number of pebbles seems completely arbitrary. What, if anything, would change if, in the pebble-sorters' ancestral environment, preferring 13-pebble heaps was adaptive, but preferring 11-pebble heaps (or spending resources on those that do) was not?

Comment author: wedrifid 26 October 2012 10:00:45AM *  2 points [-]

I think the metaphor misses something important here, because the number of pebbles seems completely arbitrary. What, if anything, would change if, in the pebble-sorters' ancestral environment, preferring 13-pebble heaps was adaptive, but preferring 11-pebble heaps (or spending resources on those that do) was not?

Preferring other people like Larry to be homosexual is adaptive for me. And it is the judgement by others (and the implicit avoidance of that through shame) that we are considering here. That said:

I think the metaphor misses something important here

Absolutely, and the entire line of reasoning relies on projecting the speaker's own morality ("it is second-order 'right' to be homosexual") onto others without making it explicit.

Comment author: MugaSofer 26 October 2012 08:23:37AM *  0 points [-]

The same reason sorting pebbles into correct heaps was adaptive in the first place.

EDIT: Wait, does it matter that homosexuality is probably not adaptive?

Comment author: Jayson_Virissimo 26 October 2012 08:48:37AM 0 points [-]

Wait, does it matter that homosexuality is probably not adaptive?

That was the point of my comment. There is a large disanalogy between heterosexuality and 13-pebble heap preference (namely, the first is highly adaptive, but the second has no apparent reason to be). Although, I'm not sure if that is enough to break the metaphor.

Comment author: MugaSofer 26 October 2012 09:01:12AM 1 point [-]

There are many properties that homosexuality has but 11-pebble heap preference doesn't, and vice versa. Why evolutionary maladaptiveness in particular is worth pointing out is my question.

Comment author: Jayson_Virissimo 26 October 2012 09:13:14AM *  1 point [-]

There are many properties that homosexuality has but 11-pebble heap preference doesn't, and vice versa. Why evolutionary maladaptiveness in particular is worth pointing out is my question.

Well, if moral norms are the Nash equilibria that result from actual historical bargaining situations (that are determined largely by human nature and the ancestral environment), then it seems somewhat relevant. If moral norms are actually imperative sentences uttered by God, then it seems completely irrelevant. Etc...

I suppose whether or not the pebble-sorting metaphor is good depends on which meta-ethical theory is true. In other words, I'm agreeing with PhilGoetz; Example 2 and Example 3 are only in separate classes of meta-wants assuming a (far from universally shared) moral system.

Comment author: Ghatanathoah 26 October 2012 10:32:48AM 0 points [-]

Well, if moral norms are the Nash equilibria that result from actual historical bargaining situations

I would regard moral norms as useful heuristics for achieving morally good results, not as morality in and of itself.

I suppose whether or not the pebble-sorting metaphor is good depends on which meta-ethical theory is true.

I think that some sort of ethical naturalism (or "moral cognitivism" as Eliezer calls it) is correct, where "morally good" is roughly synonymous with "helps people live lives full of positive values like love, joy, freedom, fairness, high challenge, etc." There is still much I'm not sure of, but I think that is probably pretty close to the meaning of right. In Larry's case I would argue that homosexual relationships usually do help people live such lives.

Comment author: MugaSofer 26 October 2012 09:30:18AM -1 points [-]

Oh, you mean that humans might genuinely dislike homosexuality as a terminal value, because evo-psych.

... huh.

Comment author: MugaSofer 26 October 2012 09:56:10AM 0 points [-]

Incidentally, it's easier to sort pebbles into heaps of 11. The original pebblesorters valued larger heaps, but had a harder time determining their correctness.

Comment author: Ghatanathoah 26 October 2012 10:03:41AM 1 point [-]

That's why I was careful to refer to them as 11-Pebble and 13-Pebble Favorers. They do value other sizes of pebble heaps; 11 and 13 are just the sizes they make most frequently. Or perhaps 11 and 13 are the heaps they like making in their personal time, while they prefer larger prime numbers for social pebble-sorting endeavors. The point is, I said they "favored" that size because I wanted to make sure that the ease of sorting the piles didn't seem too relevant, since that would distract from the central metaphor.

Comment author: MugaSofer 26 October 2012 10:08:01AM 1 point [-]

Oops.