Yvain2 comments on The Bedrock of Morality: Arbitrary? - Less Wrong

Post author: Eliezer_Yudkowsky 14 August 2008 10:00PM

Comment author: Yvain2 16 August 2008 08:55:00PM 12 points [-]

To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.

But by Eliezer's standards, it's impossible for anyone to be a relativist about anything.

Consider what Einstein means when he says time and space are relative. He doesn't mean you can just say whatever you want about them; he means that they're relative to a particular reference frame. An observer on Earth may think it's been five years since a spaceship launched, while an observer on the spaceship may think it's only been one, and each of them is correct relative to their own reference frame.
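
(A quick check on those numbers: by the standard time-dilation formula, Δt_Earth = γ · Δt_ship with γ = 1/√(1 − v²/c²), so a 5-to-1 ratio corresponds to γ = 5, i.e. a ship travelling at roughly 0.98c. Each observer's figure is correct in his own frame; neither is simply mistaken.)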

We could define "time" to mean "time as it passes on Earth, where the majority of humans live." Then an observer on Earth is objectively correct to believe that five years have passed since the launch. An observer on the spaceship who said "One year has passed" would be wrong; he'd really mean "One s-year has passed." Then we could say time and space weren't really relative at all, and people on the ground and on the spaceship were just comparing time to s-time. The real answer to "How much time has passed?" would be "Five years."

Does that mean time isn't really relative? Or does it just mean there's a way to describe it that doesn't use the word "relative"?

Or to give a more clearly wrong-headed example: English is objectively the easiest language in the world, if we accept that because the word "easy" is an English word it should refer to ease as English-speakers see it. When Kyousuke says Japanese is easier for him, he really means it's mo wakariyasui, translated as "j-easy", which is completely different. By this way of talking, the standard belief that which language is easiest is relative to which one you grew up speaking is false. English is just plain the easiest language.

Again, it's just avoiding the word "relative" by talking in a confusing and unnatural way. And I don't see the difference between talking about "easy" vs. "j-easy" and talking about "right" vs. "p-right".

Comment author: [deleted] 28 January 2012 11:47:23PM 0 points [-]

Or to give a more clearly wrong-headed example: English is objectively the easiest language in the world, if we accept that because the word "easy" is an English word it should refer to ease as English-speakers see it. When Kyousuke says Japanese is easier for him, he really means it's mo wakariyasui, translated as "j-easy", which is completely different. By this way of talking, the standard belief that which language is easiest is relative to which one you grew up speaking is false. English is just plain the easiest language.

Until I read that, I thought I understood (and agreed with) Eliezer's point, but that got me thinking. Now, I guess Eliezer would agree that it's easy for Japanese people to speak Japanese, while he wouldn't agree that it's right for Baby-Eaters to keep on eating their children. So there must be something subtler I'm missing.

Comment author: TheOtherDave 29 January 2012 12:04:58AM *  1 point [-]

FWIW, my understanding of the original claim was precisely that morality is special in this way: that it means something to describe what humans value as "right" compared to what nonhumans value (and what nobody values), whereas it doesn't mean anything analogous to describe the languages humans speak as "easily speakable" compared to the languages nonhumans speak (and the languages nobody speaks). And whatever that something is, eating babies simply doesn't possess it, even for Baby-Eaters.

Personally I've never understood what that something might be, though, nor seen any evidence that it exists.

Comment author: nshepperd 16 May 2012 01:23:29AM *  0 points [-]

Have you forgotten that what it means to describe something by a word is given precisely by the sense of that word that the speaker has in mind? That you can call eudaimonia "right", and heaps of prime pebbles "prime", is a fact about the words "right" and "prime" as used by humans, not about eudaimonia and pebbles themselves (except insofar as eudaimonia and prime-pebbled heaps by their nature satisfy the relevant definitions of "right" and "prime", of course). Is English the easiest language, if you define "easiest" as "easiest for an English-speaker to speak"? How many legs does a dog have, if you call a tail a leg?

Comment author: TheOtherDave 16 May 2012 03:09:21AM 0 points [-]

When I assert "eudaimonia is right" (supposing I believed that), there exist two structures in my brain, S1 and S2, such that S1 is tagged with the lexical entry "right" and S2 is tagged with the lexical entry "eudaimonia", and S1 and S2 are related such that if my brain treats some thing X as an instance of S2, it also treats X as having the property S1.

Well, for a certain use of "is," anyway.

Comment author: nshepperd 16 May 2012 04:33:11AM *  0 points [-]

I was going to ask how that relation came about, and how it behaves when your brain is computing counterfactuals... but even though those are good questions to consider, I realised that wouldn't really be that helpful. So...

What I'm really trying to say is that there's nothing special about morality at all. There doesn't have to be anything special about it for eudaimonia to be right and for pebble-sorting to not. It's just a concept, like every other concept. One that includes eudaimonia and excludes murder, and is mostly indifferent to pebble-sorting. Same as the concept prime includes {2, 3, 5, 7, ...} and excludes {4, 6, 8, 9, ...}.

The only thing remotely "special" about it is that it happens to be a human terminal value -- which is the only reason we care enough to talk about it in the first place. The only thing remotely special about the word "right" is that it happens to mean, in English, this morality-concept (which happens to be a human terminal value).

So, to say that "eudaimonia is right" is simply to assert that eudaimonia is included in this set of things that includes eudaimonia and excludes murder (in other words, yes, "X ∈ S2 implies X ∈ S1", where S2 is eudaimonia and S1 is morality). To say that what babyeaters value is right would be to assert that eating babies is included in this set ("X ∈ babyeating implies X ∈ S1", which is clearly wrong, since babyeating ∉ S1).
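
To make the set-membership picture concrete, here is a minimal sketch in Python, treating each concept as nothing more than a membership test; the particular contents of the sets are illustrative placeholders, not a real specification of the morality-computation:

    # Toy model: a concept is just a membership test over things.
    # The contents of these sets are illustrative placeholders only.
    MORALITY = {"eudaimonia", "keeping promises"}    # "S1": includes eudaimonia, excludes murder
    BABYEATER_VALUES = {"eating babies"}             # what the Babyeaters happen to value

    def is_right(x):
        # "x is right" just asserts that x falls under the morality-concept.
        return x in MORALITY

    def is_prime(n):
        # Same shape of claim: membership in the concept "prime".
        return n > 1 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

    print(is_right("eudaimonia"))                        # True:  "eudaimonia is right"
    print(all(is_right(x) for x in BABYEATER_VALUES))    # False: babyeating isn't thereby right
    print([n for n in range(10) if is_prime(n)])         # [2, 3, 5, 7]

On this picture, asking whether what the Babyeaters value is "right" is the same kind of question as asking whether 9 is prime: the answer is fixed by what the concept includes, not by who happens to care about it.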

Comment author: Ghatanathoah 16 May 2012 07:03:18AM -1 points [-]

I generally agree with everything you say here, except that I'd like to clarify what you mean by "special" when you say that morality need not be special, as I'm not sure it would be clear to everyone reading your post. Obviously morality has no mystical properties or anything. It isn't special in that sense, which is what I think you mean.

But morality does differ (in a totally nonmystical way) from many other terminal values in being what Eliezer calls "subjectively objective and subjunctively objective." That is, there is only one way, or at least an extremely limited number of ways, to do morality correctly. Morality is not like taste, it isn't different for every person.

You obviously already know this, but I think that it's important to make that point clear because this subject has huge inferential distances. Hooray for motivational externalism!

Comment author: TheOtherDave 16 May 2012 01:41:28PM 0 points [-]

Yeah, it's precisely the assumption that the computation we refer to by "morality" is identical for every human that makes this whole approach feel inadequate to me. It's just not clear to me that this is true, and if it turns out not to be true, then we're faced with the problem of reconciling multiple equally valid moralities.

Of course, one approach is to stop caring about humans in general, and only care about that subset of humanity that agrees with me.

Comment author: nshepperd 16 May 2012 06:10:50PM *  0 points [-]

You mean, the assumption that every human uses the word "morality" to refer to the same computation. Clearly, if I use "morality" to refer to X, and you also use the word "morality" to refer to X, then X and X are identical trivially. We refer to the same thing. Keep careful track of the distinction between quotation and referent.

Anyway, before I answer, consider this...

If other people use "morality" to refer to something else... then what? How could it matter how other people use words?

Comment author: TheOtherDave 16 May 2012 06:41:11PM 1 point [-]

I agree that if you and I both use "morality" to refer to X, then we refer to the same thing.

If I use "morality" to refer to X1 and you use it to refer to X2, it doesn't matter at all, unless we try to have a conversation about morality. Then it can get awfully confusing. Similar things are true if I use "rubber" to refer to a device for removing pencil marks from paper, and you use "rubber" to refer to a prophylactic device... it's not a problem at all, unless I ask you to fetch a bunch of rubbers from the supply cabinet for an upcoming meeting.

Comment author: Ghatanathoah 17 May 2012 04:17:16AM *  -1 points [-]

As I said before:

Now, there might be room for moral disagreement in that people care about different aspects of wellbeing more. But that would be grounds for moral pluralism, not moral relativism. Regardless of what specific aspects of morality people focus on, certain things, like torturing the human population for all eternity, would be immoral [wellbeing non-enhancing] no matter what.

If morality refers to a large computation related to the wellbeing of eudaemonic creatures, it might be possible that some people value different aspects of wellbeing more than others (e.g. some people might care more about freedom, others more about preventing harm). But there'd still be a huge amount of agreement.

I think a good analogy is with the concept of "health." It's possible for people to care about different aspects of health more. Some people might care more about nutrition, others about exercise. But there are very few ways to be healthy correctly, and near infinite ways to be unhealthy. And even if someone thinks you have your priorities wrong when trying to be healthy, they can still agree that your efforts are making you healthier than no effort at all.

Of course, one approach is to stop caring about humans in general, and only care about that subset of humanity that agrees with me.

I care about the wellbeing of animals to some extent, even though most of them don't care about morality at all. I also care, to a limited extent, about the wellbeing of sociopathic humans even though they don't care about morality at all. I admit that I don't care about them as much as I do about moral beings, but I do care.

If other moral humans have slightly different moral priorities from yours, I think they'd still be worth a great deal of care. Especially if you care at all about animals or sociopaths, who are certainly far less worthy of consideration than people who merely disagree with you about some aspect of morality.

Comment author: TheOtherDave 17 May 2012 01:44:52PM 0 points [-]

I agree that we should expect significant (though not complete) overlap within the set of moral judgments made by all humans.

I would expect even more overlap among those made by non-pathological humans, and even more overlap among those made by non-pathological humans who share a cultural heritage.

I would expect less overlap (though not zero) among the set of moral judgments made by non-humans.

I agree that if statement X (e.g. "murder is wrong") is endorsed by all the moral judgments in a particular set, then the agents making those judgments will all agree that X is right, although perhaps to different degrees depending on peripheral particulars.
Similarly, if statement Y is not endorsed by all the moral judgments in a particular set, then the agents making those judgments will not all agree that Y is right.

It's clear in the first case that right action is to abide by the implications of X.
In the second case, it's less clear what right action is.

Comment author: TheOtherDave 16 May 2012 01:36:33PM 0 points [-]

So, let us assume there exists some structure S3 in my head that implements my terminal values.

Maybe that's eudaimonia, maybe that's Hungarian goulash, I don't really know what it is, and am not convinced that it's anything internally coherent (that is, I'm perfectly prepared to believe that my S3 includes mutually exclusive states of the world).

I agree that when I label S3 "morality" I'm doing just what I do when I label S2 "eudaimonia" or label some other structure "prime". There's nothing special about the label "morality" in this sense. And if it turns out that you and I have close-enough S3s and we both label our S3s "morality," then we mean the same thing by "morality." Awesome.

If, OTOH, I have S3 implementing my terminal values and you have some different structure, S4, which you also label "morality", then we might mean different things by "morality".

Some day I might come to understand S3 and S4 well enough that I have a clear sense of the difference between them. At that point I have a lexical choice.

I can keep associating S3 with the label "morality" or "right" and apply some other label to S4 (e.g., "pseudo-morality" or "nshepperd's right" or whatever). You might do the same thing. In that case, if (as you say) the only thing that might be remotely special about the label "morality" or "right" is that it happens to refer to human terminal value, then it follows that there's nothing special about that label in this case, since here it doesn't refer to a common terminal value. It's just another word.

Conversely, I can choose to associate the label "morality" or "right" with some new S5, a synthesis of S3 and S4... perhaps their intersection, perhaps something else. You might do the same thing. At that point we agree that "morality" means S5, even though S5 does not implement either of our terminal values.

Comment author: Ghatanathoah 16 May 2012 12:33:07AM *  0 points [-]

Again, it's just avoiding the word "relative" by talking in a confusing and unnatural way. And I don't see the difference between talking about "easy" vs. "j-easy" and talking about "right" vs. "p-right".

The reason people think that Eliezer is really a relativist is that they see concepts like "good" and "right" as reducing down to mean "the thing that I [the speaker, whoever that is] value." Eliezer is arguing that that is not what they reduce down to. He argues that "good" and "right" reduce down to something like "concepts related to enhancing the wellbeing of conscious eudaemonic life forms." It's not a trick of the language; Eliezer is arguing that "right" refers to [wellbeing related concept] and p-right refers to [primality sorting related concept]. The words "good" and "right" might be relative, but the referent [wellbeing of conscious eudaemonic life forms] is not. The reason Eliezer focuses on fairness is that the concept of fairness is less nebulous than the concept of "right", so it is easier to see that it is not arbitrary.

Pebble sorters and humans can both objectively agree on what it means to enhance the wellbeing of conscious eudaemonic life forms. Where they differ is whether they care about doing it. Pebble sorters don't care about the wellbeing of others. Why would they, unless it happened to help them sort pebbles?

Similarly, humans and pebble sorters can both agree on which pebble heaps are prime-numbered. Where they differ is whether they care about sorting pebbles. Humans don't care about pebble-sorting. Why would they, unless it helped them enhance the wellbeing of themselves and others?

So if you define morality as "the thing that I care about," then I suppose it is relative, although I think that is not a proper use of the word "morality." But if you define it as "enhancing the wellbeing of eudaemonic life forms" then it is quite objective.

Now, there might be room for moral disagreement in that people care about different aspects of wellbeing more. But that would be grounds for moral pluralism, not moral relativism. Regardless of what specific aspects of morality people focus on, certain things, like torturing the human population for all eternity, would be immoral [wellbeing non-enhancing] no matter what.

So what is the difference between easy vs. j-easy and right vs. p-right? Well, easy and j-easy both refer to the concept "can be done with little effort expended, even by someone who is completely new and unpracticed in it." English is not "easy" because only those practiced in it can speak it with little effort expended. Ditto for Japanese. The concept is the same in both languages. "Right," by contrast, refers to "enhances the wellbeing of eudaemonic creatures," while p-right refers to "sorting pebbles into prime-numbered heaps." They are two completely different concepts, and that fact has nothing to do with the language being used.