Bound_up comments on An attempt in layman's language to explain the metaethics sequence in a single post. - Less Wrong

Post author: Bound_up 12 October 2016 01:57PM



Comment author: TheAncientGeek 12 October 2016 04:05:54PM 1 point

Unpacking "should" as "morally obligated to" is potentially helpful, inasmuch as you can give separate accounts of "moral" and "obligatory".

The elves are not moral. Not just because I, and humans like me happen to disagree with them, no, certainly not. The elves aren’t even trying to be moral. They don’t even claim to be moral. They don’t care about morality. They care about “The Christmas Spirit,” which is about eggnog and stuff

That doesn't generalise to the point that non-humans have no morality. You have made things too easy on yourself by having the elves concede that the Christmas Spirit isn't morality. You need to put forward some criteria for morality and show that the Christmas Spirit doesn't fulfil them. (One of the odd things about the Yudkowskian theory is that he doesn't feel the need to show that human values are the best match to some pretheoretic notion of morality; he instead jumps straight to the conclusion.)

The hard case would be some dwarves, say, who have a behavioural code different from our own, and who haven't conceded that they are amoral. Maybe they have a custom whereby any dwarf who hits a rich seam of ore has to raise a cry to let other dwarves have a share, and any dwarf who doesn't do this is criticised and shunned. If their code of conduct passes the duck test... it is regarded as obligatory, involves praise and blame, and so on... why isn't that a moral system?

This is so weird to them that they’d probably just think of it as…ehh, what? Just weird. They couldn’t care less. Why on earth would they give food to millions of starving children? What possible reason…who even cares?

If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point... morality means what you should care about, not what you happen to do.

Morality needs to be motivating, and rubber-stamping your existing values as moral achieves that, but being motivating is not sufficient. A theory of morality also needs to be able to answer the Open Question objection, meaning in this case the objection that it is not obvious that you should value something just because you do.

So, to say the elves have their own “morality,” is not quite right. The elves have their own set of things that they care about instead of morality

That is arguing from the point that morality is a label for whatever humans care about, not toward it.

This helps us see the other problem, when people say that "different people at different times in history have been okay with different things, who can say who's really right?"

There are many ways of refuting relativism, and most don't involve the claim that humans are uniquely moral.

Morality is a fixed thing. Frozen, if you will. It doesn’t change.

It is human value, or it is fixed... choose one. Humans have valued many different things. One of the problems with the rubber-stamping approach is that things the audience will see as immoral, such as slavery and the subjugation of women, have been part of human value.

Rather, humans change. Humans either do or don't do the moral thing. If they do something else, that doesn't change morality, but rather, it just means that that human is doing an immoral thing.

If that is true, then you need to stop saying that morality is human values, and start saying that morality is human values at time T, and justify the selection of time, and so on. And even at that, you won't support your other claims, because what you need to prove is that morality is unique, that only one thing can fulfil the role.

Rather, humans happen to care about moral things. If they start to care about different things, like slavery, that doesn’t make slavery moral, it just means that humans have stopped caring about moral things.

If it is possible for human values to diverge from morality, then something else must define morality, because human values can't diverge from human values. So you are not using a stipulative definition here, although you are when you argue that elves can't be moral. Here, you and Yudkowsky have noticed that your theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral (slavery, torture, whatever), then there's no fixed standard of morality. The label "moral" has been placed on a moving target. (Standard relativism usually has this problem synchronically, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)

So, when humans disagree about what’s moral, there’s a definite answer.

There is from many perspectives, but given that human values can differ, you get no definite answer by defining morality as human value. You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic... God's commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don't think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labelling values as moral, just like the original theory.

How do we find that moral answer, then? Unfortunately, there is no simple answer

Why doesn't that constitute an admission that you don't actually have a theory of morality?

You see, we don’t know all the pieces of morality, not so we can write them down on paper. And even if we knew all the pieces, we’d still have to weigh which ones are worth how much compared to each other.

On the assumption that all human value gets thrown into the equation, it certainly would be complex. But not everyone has that problem, since people have criteria for some things being moral and others not being, which simplify the equation and allow you to answer the questions you were struggling with above. You know, you don't have to pursue assumptions to their illogical conclusions.

Humans all care about the same set of things (in the sense I’ve been talking about). Does this seem contradictory? After all, we all know humans do not agree about what’s right and wrong; they clearly do not all care about the same things.

On the face of it, it's contradictory. There may be something else that smooths out the contradictions, such as the Moral Equation, but that needs justification of its own.

Well, they do. Humans are born with the same Morality Equation in their brains, with them since birth.

Is that a fact? It's eminently naturalistic, but the flip side to that is that it is, therefore, empirically refutable. If an individual's Morality Equation is just how their moral intuition works, then the evidence indicates that intuitions can vary enough to start a war or two. So the Morality Equation appears not to be conveniently the same in everybody.

How then all their disagreements? There are three ways for humans to disagree about morals, even though they’re all born with the same morality equation in their heads (1 Don't do it, 2 don't do it right, 3 don't want to do it)

What does it mean to do it wrong, if the moral equation is just a label for black-box intuitive reasoning? If you had an external standard, as utilitarians and others do, then you could determine whose use of intuition is right according to it. But in the absence of an external standard, you could have a situation where both parties intuit differently, and both swear they are taking all factors into account. Given such a stalemate, how do you tell who is right? It would be convenient if the only variations in the output of the Morality Equation were caused by variations in the input, but you cannot assume something is true just because it would be convenient.

If the Moral Equation is something ideal and abstract, why can't aliens partake? That model of ethics is just what's needed to explain how you can have multiple varieties of object-level morality that actually all are morality: different values fed into the same equation produce different results, so object-level morality varies although the underlying principle is the same.

Comment author: Bound_up 12 October 2016 06:02:16PM 0 points

Okay. By saying "If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point... morality means what you should care about, not what you happen to do."

it seems you have not understood the idea. Were there any parts of the post that seemed unclear that you think I might make clearer?

Because the whole point is that to say something is moral = you should do it = it is valued according to the morality equation.

For an Elf to agree something is moral is also to agree that they should do it. When I say they agree it's moral and don't care, that also means they agree they should do it and don't care.

Something being Christmas Spirit-ey = you spiritould do it. Humans might agree that something is Christmas Spirit-ey, and agree that they spiritould do it, they just don't care about what they spiritould do, they only care about what they should do.

moral is to Christmas spiritey what "should" is to (make up a word like) "spiritould"

Obligatory is just a kind of "should." Elves agree that some things are obligatory, and don't care, they care about what's ochristmastory.


Likewise, to say that today's morality equation is the "best" is to say that today's morality equation is the equation which is most like today's morality equation. Tautology.

Best = most good, and good = valued by the morality equation.

Comment author: entirelyuseless 13 October 2016 02:15:44AM 0 points

I think it is perfectly obvious that this usage of "should" and so on is wrong. A paperclipper believes that it should make paperclips, and it means exactly the same thing by "should" that I do when I say I should not murder.

And when I say it is obvious, I mean it is obvious in the same way that it is obvious that you are using the word "hat" wrong if you use it for a coat.

Comment author: Bound_up 15 October 2016 12:53:43PM 0 points

I think you're using "should" to mean "feels compelled to do."

Yes, a paperclipper feels compelled to make paperclips, and a human feels compelled to make sentient beings happy.

But when we say "should," we don't just mean "whatever anyone feels compelled to do." We say "you might drug me to make me want to kill people, but I still shouldn't do it."

"Should" does not refer to compelling feelings, but rather to a certain set of states of beings that we value. To say we "still shouldn't kill people," means it "still isn't in harmony with happy sentient beings (plus a million other values) to kill people."

A paperclipper wouldn't disagree that killing people isn't in harmony with happy sentient beings (along with a million other values), it just wouldn't care. In other words, it wouldn't disagree that it shouldn't kill people, it just doesn't care about "should;" it cares about "clipperould."

Likewise, we wouldn't disagree that keeping people around instead of making them into paperclips is not in harmony with maximizing paperclips, we just wouldn't care. We know we clipperould turn people into paperclips, we just don't care about clipperould, we care about should.

Comment author: entirelyuseless 15 October 2016 04:04:27PM 0 points

No, I am not using "should" to mean "feels..." anything (in other words, feelings have nothing to do with it.) But you are right about compulsion. The word "ought" is, in theory, just the past tense of "owe", and what is owed is something that needs to be paid. Saying that you ought to do something, just means that you need to do it. And should is the same; that you should do it just means that there is a need for it. And need is just necessity. So it does all have to do with compulsion.

But it is not compulsion of feelings, but of a goal. And to that degree, your idea is actually correct. But you are wrong to say that the specific goal sought affects the meaning of the word. "I should do it" means that I need to do it to attain my goal. It does not say what that goal is.

Comment author: TheAncientGeek 13 October 2016 06:30:32PM 0 points

it seems you have not understood the idea. Were there any parts of the post that seemed unclear that you think I might make clearer?

Almost everything. You explain morality by putting forward one theory. Under those circumstances, most people would expect to see some critique of other theories, and an explanation of why your theory is the One True Theory. You don't do the first, and it is not clear that you are even trying to do the second.

Because the whole point is that to say something is moral = you should do it = it is valued according to the morality equation.

And to say that only humans have morality. But if there is something the Elves should do, then morality applies to them, contradicting that claim.

For an Elf to agree something is moral is also to agree that they should do it. When I say they agree it's moral and don't care, that also means they agree they should do it and don't care.

That doesn't help. For one thing, humans don't exactly want to be moral... their moral fibre has to be buttressed by various punishments and rewards. For another, "should" and "want to" are not synonyms... but "moral" and "what you should do" are. So if there is something the Elves should do, at that point you have established that morality applies to the Elves, and the fact that they don't want to do it is a side issue. (And of course they could tweak their own motivations by constructing punishments and rewards.)

Something being Christmas Spiritey = you Spiritould do it. Humans might agree that something is Christmas Spirit-ey, and agree that they spiritould do it, they just don't care about what they spiritould do, they only care about what they should do.

OK. Now you seem to be saying... without quite making it explicit, of course... that morality is by definition unique to humans, because the word "moral" just labels what motivates humans, in the way that "Earth" or "Terra" labels the planet where humans live. That claim isn't completely incomprehensible, it's just strange and arbitrary, and what is considerably stranger is the way you feel no need to defend it against alternative theories. The main alternative is that morality is multiply instantiable, that other civilisations could have their own versions, in the way they could have their own versions of houses or money.

You state it as though it is obvious, yet it has gone unnoticed for thousands of years.

Suppose I were to announce that dark matter is angels' tears. Doesn't it need some expansion? That's how your claim reads; that's the outside view.

Obligatory is just a kind of "should." Elves agree that some things are obligatory, and don't care, they care about what's ochristmastory.

Obligatory is a kind of "should" that shouldn't be overridden by other considerations. (A failure to do what is obligatory is possible, of course, but it is important to remember that it is seen as a lapse, as something wrong, not a valid choice.) Yet the Elves are overriding it, casting doubt on whether they have actually understood the concept of "obligatory".

Likewise, to say that today's morality equation is the "best" is to say that today's morality equation is the equation which is most like today's morality equation. Tautology.

Since anyone can say that at any time, that breaks the meaning of "best", which is supposed to pick out something unique. That would be a reductio ad absurdum of your own theory.

Comment author: Bound_up 15 October 2016 12:47:01PM -1 points

No, no, no...

Every possible creature, and every process of physics SHOULD do XYZ. But practically nothing is moved by that fact.

This sentence means: It is highly valued in the morality equation for XYZ to be the state of affairs, independently of who/what causes it to be so.

Likewise, everything spiritould do ABC, but only Elves are moved by that fact.

These are objective equations which apply to everything. To say should, spiritould, clipperould, etc., is just to say about different things that they are valued by this equation or that one. It's an objective truth that they are valued by this equation or that one.

It's just that humans are not moved by almost any of the possible equations. They ARE moved by the morality equation.

Humans and Elves should AND spiritould do whatever. They are both equally obligated and ochristmasated. But one species finds one of those facts moving and not the other, and the other finds the other moving and not the one.

Perhaps now it is clear?

Comment author: TheAncientGeek 16 October 2016 12:49:07PM 1 point

It is not a clear expression of something that can be seen to work.

Version 1.

I am obligated to both do and not do any number of acts by any number of shouldness-equations

If that is the case, anything resembling objectivism is out of the window. If I am obligated to do X, and I do X, then my action is right. If I am obligated not to do X, and I do X, my action is wrong. If I am both obligated and not obligated to do X, then my action is somehow both right and wrong... that is, it has no definite moral status.

But that's not quite what you were saying.

Version 2.

There are lots of different kinds of morality, but I am only obligated by human morality.

That would work, but it's not what you mean. You are explicitly embracing...

Version 3.

There are lots of different kinds of morality, but I am only motivated by human morality

There's only one word of difference between that and version 2, which is the substitution of "motivated" for "obligated". As we saw under version 1, it's the existence of multiple conflicting obligations which stymies ethical objectivism. And motivation can't fix that problem, because it is a different thing to obligation. In fact it is orthogonal, because:

You can be motivated to do what you are not obligated to do. You can be obligated to do what you are not motivated to do. Or both. Or neither.

Because of that, version 3 implies version 1, and has the same problem.

Comment author: entirelyuseless 16 October 2016 03:39:44PM 0 points

All of this is why Eliezer's morality sequence is wrong. Version 2 is basically right. The Baby-Eaters were not immoral, but moral, according to different morals. That is not subjectivism, because it is an objective fact that Baby-Eaters are what they are, and are obligated by Baby-Eater morality, and humans are humans, and are obligated by human morality.

But Eliezer (and Bound-Up) do not admit this, nonsensically asserting that non-humans should be obligated by human morality.

Comment author: MrMind 17 October 2016 01:48:03PM 0 points

To be honest, Eliezer made a slightly different argument:
1) humans share (because of evolution) a psychological unity that is not affected by regional or temporal distinctions;
2) this unity entails a set of values that is inescapable for every human being, whose collective effect on human cognition and actions we dub "morality";
3) Clippy, Elves and Pebblesorters, being fundamentally different, share a different set of values that guide their actions and what they care about;
4) those are perfectly coherent and sound for those who entertain them; we should, though, not call them "Clippy's, Elves' or Pebblesorters' morality", because words should be used in such a way to maximize their usefulness in carving reality: since we cannot step outside our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term "morality" for them, and instead use "primality" or other words.
That's it: you can debate any single point, but I think the difference is only formal. The underlying understanding, that "motivating set of values" is a two-place predicate, is the same; Yudkowsky preferred, though, to use different words for different partially applied predicates, on the grounds of points 1 and 4.

Comment author: entirelyuseless 17 October 2016 02:10:18PM 0 points

"words should be used in such a way to maximize their usefulness in carving reality"

That does not mean that we should not use general words, but that we should have both general words and specific words. That is why it is right to speak of morality in general, and human morality in particular.

As I stated in other replies, it is not true that this disagreement is only about words. In general, when people disagree about how words should be used, that is because they disagree about what should be done. Because when you use words differently, you are likely to end up doing different things. And I gave concrete places where I disagree with Eliezer about what should be done, ways that correspond to how I disagree with him about morality.

In general I would describe the disagreement in the following way, although I agree that he would not accept this characterization: Eliezer believes that human values are intrinsically arbitrary. We just happen to value a certain set of things, and we might have happened to value some other random set. In whatever situation we found ourselves, we would have called those things "right," and that would have been a name for the concrete values we had.

In contrast, I think that we value the things that are good for us. What is "good for us" is not arbitrary, but an objective fact about relationships between human nature and the world. Now there might well be other rational creatures and they might value other things. That will be because other things are good for them.

Comment author: Bound_up 16 October 2016 10:17:48PM 0 points

They eat innocent, sentient beings who suffer and are terrified because of it. That's wrong, no matter who does it.

It may not be un-baby-eater-ey, but it's wrong.

Likewise, not eating babies is un-baby-eater-ey, no matter who does it. It might not be wrong, but it is un-baby-eater-ey.

We have two species who agree on the physical effects of certain actions. One species likes the effects of the action, and the other doesn't. The difference between them is what they value.

"Right" just means "in harmony with this set of values." Baby-eater-ey means "in harmony with this other set of values."

There's no contradiction in saying that something can be in harmony with one set of values and not in harmony with another set of values. Hence, there's no contradiction in saying that eating babies is wrong, and is also baby-eater-ey. You can also note that the action is found compelling by one species and not compelling by another, and there is no contradiction in this, either.

What could "right" mean if we have "right according to these morals" AND "right according to these other, contradictory morals?"

I see one possibility: "right" is taken to mean "in harmony with any set of values." Which, of course, makes it meaningless. Do you see another possibility?

Comment author: entirelyuseless 17 October 2016 06:37:46AM 0 points

I disagree that it is wrong for them to do that. And this is not just a disagreement about words: I disagree that Eliezer's preferred outcome for the story is better than the other outcome.

"Right" is just another way of saying "good", or anyway "reasonably judged to be good." And good is the kind of thing which naturally results in desire. Note that I did not say it is "what is desired", any more than you want to say that what someone values at a particular moment is necessarily right. I said it is what naturally results in desire. This definition is in fact very close to yours, except that I don't make the whole universe revolve around human beings by saying that nothing is good except what is good for humans. And since different kinds of things naturally result in desire for different kinds of beings (e.g. humans and babyeaters), those different things are right for different kinds of beings.

That does not make "right" or "good" meaningless. It makes it relative to something. And this is an obvious fact about the meaning of the words; to speak of good is to speak of what is good for someone. This is not subjectivism, since it is an objective fact that some things are good for humans, and other things are good for other things.

Nor does this mean that right means "in harmony with any set of values." It has to be in harmony with some real set of values, not an invented one, nor one that someone simply made up -- for the same reasons that you do not allow human morals to be simply invented by a random individual.

Returning to the larger point, as I said, this is not just a disagreement about words, but about what is good. People maintaining your theory (like Eliezer) hope to optimize the universe for human values. I have no such hope, and I think it is a perverse idea in the first place.

Comment author: Bound_up 16 October 2016 10:20:08PM -1 points

If you are interested, I might recommend trying to write up what you think this idea is, and see if you find any holes in your understanding that way. I'm not sure how to make it any clearer right now, but, for what it's worth, you have my word that you have not understood the idea.

We are not disagreeing about something we both understand; you are disagreeing with a series of ideas you think I hold, and I am trying to explain the original idea in a way that you find understandable and, apparently, not yet succeeding.