Ishaan comments on Why didn't people (apparently?) understand the metaethics sequence? - Less Wrong

Post author: ChrisHallquist 29 October 2013 11:04PM


Comment author: Ishaan 30 October 2013 03:36:48AM *  22 points [-]

I think what confuses people is that he

1) claims that morality isn't arbitrary and we can make definitive statements about it

2) also claims that there are no universally compelling arguments.

The confusion is resolved by realizing that he defines the words "moral" and "good" as roughly equivalent to human CEV.

So according to Eliezer, it's not that Humans think love, pleasure, and equality are Good and paperclippers think paperclips are Good. It's that love, pleasure, and equality are part of the definition of good, while paperclips are just part of the definition of paperclippy. The Paperclipper doesn't think paperclips are good...it simply doesn't care about good, instead pursuing paperclippy.

Thus, moral relativism can be decried while "no universally compelling arguments" can be defended. Under this semantic structure, Paperclipper will just say "okay, sure...killing is immoral, but I don't really care as long as it's paperclippy."
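To make the "different predicates, no disagreement" point concrete, here's a toy sketch (purely illustrative; the agents, value tables, and world representation are all invented for this example): each agent maximizes its own criterion, and neither criterion ever makes a claim in the other's vocabulary.

```python
# Toy illustration: two agents score the same worlds by distinct criteria.
# The Paperclipper never asserts "paperclips are good" -- it just
# optimizes a different predicate, "paperclippy".

def good(world):
    """Good in the Eliezer usage: a fixed criterion (love, pleasure, equality...)."""
    return world.get("love", 0) + world.get("pleasure", 0) + world.get("equality", 0)

def paperclippy(world):
    """The Paperclipper's criterion. It makes no reference to 'good' at all."""
    return world.get("paperclips", 0)

worlds = [
    {"love": 5, "pleasure": 3, "equality": 2, "paperclips": 0},
    {"love": 0, "pleasure": 0, "equality": 0, "paperclips": 99},
]

human_choice = max(worlds, key=good)
clippy_choice = max(worlds, key=paperclippy)

# The agents pick different worlds without contradicting each other:
# "world 2 scores zero on good" and "world 2 is maximally paperclippy"
# are both true statements, in both agents' books.
print(good(worlds[1]), paperclippy(worlds[1]))  # prints: 0 99
```

Nothing in the sketch forces the Paperclipper to adopt `good` as its key function, which is the "no universally compelling arguments" half of the position.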

Thus, arguments about morality among humans are analogous to Pebblesorter arguments about which piles are correct. In both cases, there is a correct answer.

It's an entirely semantic confusion.

I suggest that ethicists ought to have different words for the various different rigorized definitions of Good to avoid this sort of confusion. Since Eliezer-Good is roughly synonymous with CEV, maybe we can just call it CEV from now on?

Edit: At the very least, CEV is one rigorization of Eliezer-Good, even if it doesn't articulate everything about it. There are multiple levels of rigor and naivety that may be involved here. Eliezer-good is more rigorous than "good" but might not capture all the subtleties of the naive conception. CEV is more rigorous than Eliezer-good, but it might not capture the full range of subtleties within Eliezer-good (and it's only one of multiple ways to rigorize Eliezer-good...consider Coherent Aggregated Volition, for example, as an alternative rigorization of Eliezer-good).

Comment author: RobbBB 30 October 2013 07:29:06AM 9 points [-]

I think what confuses people is that he 1) claims that morality isn't arbitrary and we can make definitive statements about it 2) Also claims no universally compelling arguments.

How does this differ from gustatory preferences?

1a) My preference for vanilla over chocolate ice cream is not arbitrary -- I really do have that preference, and I can't will myself to have a different one, and there are specific physical causes for my preference being what it is. To call the preference 'arbitrary' is like calling gravitation or pencils 'arbitrary', and carries no sting.

1b) My preference is physically instantiated, and we can make definitive statements about it, as about any other natural phenomenon.

2) There is no argument that could force any and all possible minds to like vanilla ice cream.

I raise the analogy because it seems an obvious one to me, so I don't see where the confusion is. Eliezer views ethics the same way just about everyone intuitively views aesthetics -- as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation -- facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.

It's an entirely semantic confusion.

I don't know what you mean by this. Obviously semantics matters for disentangling moral confusions. But the facts I outlined above about how ice cream preference works are not linguistic facts.

Comment author: Ishaan 30 October 2013 06:25:58PM *  4 points [-]

Good[1] : The human consensus on morality, the human CEV, the contents of a Friendly AI's utility function, "sugar is sweet, love is good". There is one correct definition of Good. "Pebblesorters do not care about good or evil, they care about grouping things into primes. Paperclippers do not care about good or evil, they care about paperclips".

Good[2] : An individual's morality, a special subset of an agent's utility function (especially the subset that pertains to how everyone ought to act). "I feel sugar is yummy, but I don't mind if you don't agree. However, I feel love is good, and if you don't agree we can't be friends."... "Pebblesorters think making prime-numbered pebble piles is good. Paperclippers think making paperclips is good". (A pebblesorter might selfishly prefer to maximize the number of pebble piles that they make themselves, but the same pebblesorter believes everyone ought to act to maximize the total number of pebble piles, rather than selfishly maximizing their own pebble piles. A perfectly good pebblesorter seeks only to maximize pebble piles. Selfish pebblesorters hoard resources to maximize their own personal pebble creation. Evil pebblesorters knowingly make non-prime abominations.)

so I don't see where the confusion is.

Do you see what I mean by "semantic" confusion now? Eliezer (like most moral realists, universalists, etc) is using Good[1]. Those confused by his writing (who are accustomed to descriptive moral relativism, nihilism, etc) are using Good[2]. The maps are actually nearly identical in meaning, but because they are written in different languages it's difficult to see that the maps are nearly identical.

I'm suggesting that Good[1] and Good[2] are sufficiently different that people who talk about morality ought to have different words for them. This is one of those "If a tree falls in the forest does it make a sound" debates, which are utterly useless because they center entirely around the definition of sound.

Eliezer views ethics the same way just about everyone intuitively views aesthetics -- as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation -- facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.

Yup, I agree completely, that's exactly the correct way to think about it. The fact that you are able to give a definition of what ethics are while tabooing words like good and bad and moral, is the reason that you can simultaneously uphold Good[2] with your gustatory analogy and still understand that Eliezer doesn't disagree with you even though he uses Good[1].

Most people's thinking is too attached to words to do that, so they get confused. Being able to think about what things are without referencing any semantic labels is a skill.

Comment author: buybuydandavis 30 October 2013 08:51:44AM 1 point [-]

I raise the analogy because it seems an obvious one to me, so I don't see where the confusion is.

Your analysis clearly describes some of my understanding of what EY says. I use "yummy" as a go-to analogy for morality as well. But EY also seems to be making a universalist argument, at least for "normal" humans. Because he talks about abstract computation, leaving particular brains behind, it's just unclear to me whether he's a subjectivist or a universalist.

The "no universally compelling argument" applies to Clippy versus us, but is there also no universally compelling argument with all of "us" as well?

Comment author: Jack 30 October 2013 11:29:11AM *  -1 points [-]

"Universalist" and "Subjectivist" aren't opposed or conflicting terms. "Subjective" simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is "objective". "Universalist" and "relativist" are on a different dimension from subjective and objective. Universal vs. relative is about how variable or not variable morality is.

You could have a metaethical theory that morality is both objective and relative. For example, you could define morality as what the law says and it will be relative from country to country as laws differ. You could also have a subjective and universal meta-ethics. Morality judgments could be statements about the attitudes of people but all people could have the same attitudes.

I take Eliezer to hold something like the latter-- moral judgments aren't about people's attitudes simpliciter: they're about what they would be if people were perfectly rational and had perfect information (he's hardly the first among philosophers, here). It is possible that the outcome of that would be more or less universal among humans or even a larger group. Or at least it some subset of attitudes might be universal. But I could be wrong about his view: I feel like I just end up reading my view into it whenever I try to describe his.

Comment author: TheAncientGeek 04 November 2013 07:02:53PM 0 points [-]

"Universalist" and "Subjectivist" aren't opposed or conflicting terms. "Subjective" simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is "objective". "Universalist" and "relativist" are on a different dimension from subjective and objective. Universal vs. relative is about how variable or not variable morality is.

If morality varies with individuals, as required by subjectivism, it is not at all universal, so the two are not orthogonal.

You could have a metaethical theory that morality is both objective and relative. For example, you could define morality as what the law says and it will be relative from country to country as laws differ.

If morality is relative to groups rather than individuals, it is still relative. Morality is objective when the truth values of moral statements don't vary with individuals or groups, not when it varies with empirically discoverable facts.

You could also have a subjective and universal meta-ethics. Morality judgments could be statements about the attitudes of people but all people could have the same attitudes.

Comment author: Jack 05 November 2013 03:31:27AM 0 points [-]

If morality varies with individuals, as required by subjectivism, it is not at all universal, so the two are not orthogonal.

Subjectivism does not require that morality varies with individuals.

Morality is objective when the truth values of moral statements don't vary with individuals or groups

No, see the link above.

Comment author: TheAncientGeek 05 November 2013 09:08:07AM 0 points [-]

The link supports what I said. Subjectivism requires that moral claims have truth values which, in principle, depend on the individual making them. It doesn't mean that any two people will necessarily have a different morality, but why would I assert that?

Comment author: Jack 05 November 2013 10:22:54AM 0 points [-]

Subjectivism requires that moral claims have truth values which, in principle, depend on the individual making them

This is not true of all subjectivisms, as the link makes totally clear. Subjective simply means that something is mind-dependent; it need not be the mind of the person making the claim-- or not only the mind of the person making the claim. For instance, the facts that determine whether or not a moral claim is true could consist in just the moral opinions and attitudes where all humans overlap.

Comment author: TheAncientGeek 05 November 2013 10:38:23AM 0 points [-]

There are people who use "subjective" to mean "mental", but they shouldn't.

Comment author: RobbBB 30 October 2013 10:34:16AM *  -1 points [-]

But, EY also seems to be making a universalist argument, as least for "normal" humans.

If you have in mind 'human universals' when you say 'universality', that's easily patched. Morality is like preferring ice cream in general, rather than like preferring vanilla ice cream. Just about every human likes ice cream.

Because he talks about abstract computation, leaving particular brains behind, it's just unclear to me whether he's a subjectivist or a universalist.

  1. The brain is a computer, hence it runs 'abstract computations'. This is true in essentially the same sense that all piles of five objects are instantiating the same abstract 'fiveness'. If it's mysterious in the case of human morality, it's not only equally mysterious in the case of all recurrent physical processes; it's equally mysterious in the case of all recurrent physical anythings.

  2. Some philosophers would say that brain computations are both subjective and objective -- metaphysically subjective, because they involve our mental lives, but epistemically objective, because they can be discovered and verified empirically. For physicalists, however, 'metaphysical subjectivity' is not necessarily a joint-carving concept. And it may be possible for a non-sentient AI to calculate our moral algorithm. So there probably isn't any interesting sense in which morality is subjective, except maybe the sense in which everything computed by an agent is 'subjective'.

  3. I don't know anymore what you mean by 'universalism'.

is there also no universally compelling argument with all of "us" as well?

There are universally compelling arguments for all adolescent or adult humans of sound mind. (And many pre-adolescent humans, and many humans of unsound mind.)

Comment author: Tyrrell_McAllister 31 October 2013 10:45:32PM *  2 points [-]

Since Eliezer-Good is roughly synonymous with CEV, maybe we can just call it CEV from now on?

This leaves out the "rigid designator" bit that people are discussing up-thread. Your formulation invites the response, "So, if our CEV were different, then different things would be good?" Eliezer wants the answer to this to be "No."

Perhaps we can say that "Eliezer-Good" is roughly synonymous with "Our CEV as it actually is in this, the actual, world as this world is right now."

Thus, if our CEV were different, we would be in a different possible world, and so our CEV in that world would not determine what is good. Even in that different, non-actual, possible world, what is good would be determined by what our actual CEV says is good in this, the actual, world.
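The rigid-designator move has a loose analogy in programming (purely illustrative; all the names below are hypothetical): "good" behaves like a value captured at definition time, not a variable looked up in whatever world the question is asked.

```python
# Rigid vs. non-rigid reference, as early vs. late binding.
# actual_cev stands in for "our CEV as it actually is in this world".

actual_cev = {"love", "pleasure", "equality"}

# Non-rigid reading: "good" re-evaluates against whatever CEV is handed in.
def good_nonrigid(thing, current_cev):
    return thing in current_cev

# Rigid reading: "good" is fixed by the actual CEV at "definition time";
# a counterfactual world with a different CEV can't change what it picks out.
def make_rigid_good(cev_now):
    fixed = frozenset(cev_now)      # captured once, never looked up again
    return lambda thing: thing in fixed

good = make_rigid_good(actual_cev)

counterfactual_cev = {"paperclips"}  # a world where "our CEV" came out differently

# Non-rigid: the counterfactual world calls paperclips good.
# Rigid: even evaluated "in" that world, good still tracks the actual CEV.
print(good_nonrigid("paperclips", counterfactual_cev))  # True
print(good("paperclips"))                               # False
print(good("love"))                                     # True
```

On the early-binding reading, "if our CEV were different, different things would be good" comes out false, which is the answer Eliezer wants.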

Comment author: Eugine_Nier 31 October 2013 03:33:17AM 2 points [-]

1) claims that morality isn't arbitrary and we can make definitive statements about it

2) Also claims no universally compelling arguments.

Both these statements are also true about physics, yet nobody seems to be confused about it in that case.

Comment author: Ishaan 31 October 2013 03:59:53AM *  0 points [-]

What do you mean? Rational agents ought to converge upon what physics is.

Comment author: Eugine_Nier 31 October 2013 04:11:05AM 1 point [-]

Rational agents ought to converge upon what physics is.

Only because that's considered part of the definition of "rational agent".

Comment author: Ishaan 31 October 2013 05:03:58AM *  0 points [-]

Yes? But the recipient of an "argument" is implicitly an agent who at least partially understands epistemology. There is not much point in talking about agents which aren't rational or at least partly-bounded-rational-ish. Completely insensible things are better modeled as objects, not agents, and you can't argue with an object.

Comment author: TheAncientGeek 04 November 2013 07:47:09PM 1 point [-]

It's that love, pleasure, and equality are part of the definition of good, while paperclips are just part of the definition of paperclippy

And can aliens have love and pleasure, or is Good a purely human concept?

Comment author: Ishaan 04 November 2013 10:24:41PM 0 points [-]

By Eliezer's usage? I'd say aliens might have love and pleasure in the same way that aliens might have legs...they just as easily might not. Think "wolf" vs "snake" - one has legs and feels love while the other does not.

Comment author: TheAncientGeek 04 November 2013 11:27:04PM 1 point [-]

Let's say they have love and pleasure. Then why would we want to define morality in a human-centric way?

Comment author: TheAncientGeek 04 November 2013 06:51:53PM *  1 point [-]

1) claims that morality isn't arbitrary and we can make definitive statements about it

That isn't non-relativism. Subjectivism is the claim that the truth of moral statements varies with the person making them. That is compatible with the claim that they are non-arbitrary, since they may be fixed by features of persons that they cannot change, and which can be objectively discovered. It isn't a particularly strong version of subjectivism, though.

2) Also claims no universally compelling arguments.

That isn't non-realism. Non-realism means that there are no arguments or evidence that will compel suitably equipped and motivated agents.

The confusion is resolved by realizing that he defines the words "moral" and "good" as roughly equivalent to human CEV.

The CEV of individual humans, or humanity? You have been ambiguous about an important subject EY is also ambiguous about.

Comment author: Ishaan 04 November 2013 10:32:41PM *  0 points [-]

I'm ambiguous about it because I'm describing EY's usage of the word, and he's been ambiguous about it.

I typically adapt my usage to the person who I'm talking to, but the way that I typically define "good" in my own head is: "The subset of my preferences which do not in any way reference myself as a person"...or in other words, the behavior which I would prefer if I cared about everyone equally (If I was not selfish and didn't prefer my in-group).

Under my usage, different people can have different conceptions of good. "Good" is a function of the agent making the judgement.

A pebblesorter might selfishly want to make every pebble pile themselves, but they also might think that increasing the total number of pebble piles in general is "good". Then, according to the Pebblesorters, a "good" pebblesorter would put overall prime-pebble-pile maximization above their own personal prime-pebble-pile productivity. According to the Babyeaters, a "good" baby-eater would eat babies indiscriminately, even if they selfishly might want to spare their own. According to humans, Pebblesorter values are alien and baby-eater values are evil.
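Under this usage, "good" is literally a two-place function: it takes the judging agent (or species) as a parameter, rather than being a one-place predicate. A minimal sketch (the value tables here are invented purely for illustration):

```python
# Good[2] as an agent-indexed predicate: good(agent, act)
# rather than a one-place good(act).

values = {
    "human":        {"sharing", "love"},
    "pebblesorter": {"prime-piles"},
    "babyeater":    {"eating-babies"},
}

def good(agent, act):
    """True iff the act is valued by that agent's kind."""
    return act in values[agent]

# The same act gets different verdicts from different judges,
# and neither judge is making an error by the other's lights:
print(good("babyeater", "eating-babies"))  # True
print(good("human", "eating-babies"))      # False
```

Dropping the `agent` parameter and hard-coding one row of the table recovers something like Good[1], which is one way to see that the two usages differ only in where the agent-index lives.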

Comment author: passive_fist 30 October 2013 05:08:00AM *  0 points [-]

I think you're right here. He's saying, in a way, that moral absolutism only makes sense within context. Hence metaethics. It's kinda hard to wrap one's head around but it does make sense.