Strong moral realism, meta-ethics and pseudo-questions.

18 [deleted] 31 January 2010 08:20PM

On Wei_Dai's complexity of values post, Toby Ord writes:

There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.

There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the Phil Papers survey for example, 56.3% of philosophers lean towards or believe realism, while only 27.7% lean towards or accept anti-realism.

The kind of moral realist positions that apply Occam's razor to moral beliefs are a lot more extreme than most philosophers in the cited survey would sign up to, methinks. One such position that I used to have some degree of belief in is:

Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.

But most modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts", for varying definitions of "fact" that typically fade away into meaninglessness on closer examination, and that actually make the same empirical predictions as antirealism.

Suppose you take up Eliezer's "realist" position: arrangements of spacetime, matter and energy can be "good" in the sense that Eliezer has a "long-list" style definition of goodness up his sleeve, one that settles even contested object-level moral questions such as whether abortion should be allowed, tests any arrangement of spacetime, matter and energy against the criteria in the long list, and decrees it good or not (possibly with a scalar rather than a binary value).

This kind of "moral realism" behaves, to all intents and purposes, like antirealism.

  • You don't favor shorter long-list definitions of goodness over longer ones. The criteria for choosing the list have little to do with its length, and more with what a human brain emulation with such-and-such modifications to make it believe only and all relevant true empirical facts would decide once it had reached reflective moral equilibrium.
  • Agents who have a different "long list" definition cannot be moved by the fact that you've declared your particular long list "true goodness".
  • There would be no reason to expect alien races to have discovered the same long list defining "true goodness" as you.
  • An alien with a different "long list" than you, upon learning the causal reasons for the particular long list you have, is not going to change their long list to be more like yours.
  • You don't need to use probabilities and update your long list in response to evidence, quite the opposite, you want it to remain unchanged. 

I might compare the situation to Eliezer's blegg post: it may be that moral philosophers have a mental category for "fact" that seems to be allowed to have a value even once all of the empirically grounded surrounding concepts have been fixed. These might be concepts such as "would aliens also think this thing?", "Can it be discovered by an independent agent who hasn't communicated with you?", "Do we apply Occam's razor?", etc.

Moral beliefs might work better when they have a Grand Badge Of Authority attached to them. Once all the empirically falsifiable candidates for the Grand Badge Of Authority have been falsified, the only one left is the ungrounded category marker itself, and some people like to stick this on their object level morals and call themselves "realists".

Personally, I prefer to call a spade a spade, but I don't want to get into an argument about the value of an ungrounded category marker. Suffice it to say that for any practical matter, the only parts of the map we should argue about are parts that map-onto a part of the territory. 

Comments (172)

Comment author: ShardPhoenix 01 February 2010 10:28:29AM *  4 points [-]

This would all be a lot clearer if, in these sorts of discussions, we avoided using the dangling "should".

In other words, don't just say that "X should do Y", say that "X should do Y, in order for some specifiable condition to be fulfilled". That condition could be their preferences, your preferences, CEV's preferences if you believe in such a thing, or whatever. Oh yeah, and "...in order to be moral" is ambiguous and thus doesn't count.

Comment author: Eliezer_Yudkowsky 31 January 2010 08:31:18PM 5 points [-]

I think there's an ambiguity between "realism" in the sense of "these statements I'm making about 'what's right' are answers to a well-formed question and have a truth value" and "the subject matter of moral discourse is a transcendent ineffable stuff floating out there which compels all agents to obey and which could make murder right by having a different state". Thinking that moral statements have a truth value is cognitivism, which sounds much less ambiguous to me, and that's why I prefer to talk about moral cognitivism rather than moral realism.

As a moral cognitivist, I would look at your diagram and disagree that the Baby-Eating Aliens and humans have different views of the same subject matter, rather, we and they are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.

I have a pending post-to-write on how, to the best of my knowledge, there are only two sorts of things that can make a proposition "true", namely physical events and logical implications, and of course mixtures of the two. I mention this because we have a legitimate epistemic preference for simpler hypotheses about the causes of physical events, but no such thing as an epistemic preference for "simpler axioms" when we are talking about logical facts. We may have an aesthetic preference for simpler axioms in math, but that is not the same thing. If there's no preference for simpler assumptions, that doesn't mean the issue is not a factual one, but it may suggest that we are dealing with logical facts rather than physical facts (statements which are made true by which conclusions follow from which premises, rather than the state of a causal event).

Added: Since I have a definite criterion for something being a "fact", I defend the notion of fact-ness against the charge of being a floating extra.

Comment author: gregconen 01 February 2010 10:30:17PM 7 points [-]

Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.

This simply pushes the problem back one level, by making the word "morality" descriptive instead of normative. Morality is X and babyeating is Y. But how should one choose between morality and babyeating? Now, instead of a moral anti-realist, I'm a moral realist, a babyeating realist, and normative judgement anti-realist.

Comment author: Wei_Dai 02 February 2010 05:59:09AM *  6 points [-]

But how should one choose between morality and babyeating?

Channeling my inner Eliezer, the answer is obviously that you should choose morality (since "should" is just "morality" as a verb).

Now, instead of a moral anti-realist, I'm a moral realist, a babyeating realist, and normative judgement anti-realist.

No, because normative judgement = morality.

This is almost starting to make sense, except... Suppose I say this to a babyeater: "We should sign a treaty banning the development and use of antimatter weapons." What could that possibly mean? Or if one murderer says to another, "We should dump the body in the river," is he simply stating a factual falsehood?

I wonder if this is a good summary of our disagreement with Eliezer:

  1. His proposed definitions of "morality" and especially "should" and "ought" are objectionable. They are just not what we mean when we use those words.
  2. He classifies his metaethics as realism whereas we would classify it as anti-realism.

Out of these two, 1 is clearly both a bigger problem and where Eliezer is more obviously wrong. I really don't understand why he sticks to his position there.

Comment author: Vladimir_Nesov 02 February 2010 10:44:57PM *  3 points [-]

This is almost starting to make sense, except... Suppose I say this to a babyeater: "We should sign a treaty banning the development and use of antimatter weapons." What could that possibly mean?

Supposedly the acceptable plan would both be the right thing to do and the babyeating thing to do at the same time: right given the presence and influence of babyeaters, and babyeating given the presence and influence of humans. So, when it is said, "Let us sign this treaty.", humans sign it, because it should be done, and babyeaters also do so, because it's a babyeating thing to do. The contract is chosen to compel both parties.

Comment author: Wei_Dai 03 February 2010 04:25:41AM 2 points [-]

I agree with your explanation of the intended semantics of the sentence, which is also my explanation. What I disagree with is the suggestion that we denote that meaning using "Let us sign this treaty." instead of "We should sign this treaty." I believe the intended meaning is more naturally expressed using the second sentence, and trying to redefine the word "should" so that the second sentence means something else and we're forced to use the first sentence to express the same meaning, is wrong.

Also, since the first sentence is imperative instead of declarative, I'm not sure that it doesn't mean something else already, so that now you're hijacking two words instead of one.

Comment author: torekp 07 February 2010 09:19:48PM 0 points [-]

There can be a separable sense of "should" that indicates rationality. Thus, "we should sign the treaty" can be an interesting truth for both parties when the "should" is that of rationality, and true for both parties but only interesting from the human side when the "should" is a moral should.

This commits one to what philosophers call moral externalism, namely, the view that what is morally required is not necessarily rationally required. Which is not a reason to reject the view, but I expect it will be criticized.

Comment author: Douglas_Knight 02 February 2010 08:52:02AM 1 point [-]

He characterizes his metaethics as realism whereas we would characterize it as anti-realism.

Where does he characterize it as realism? When he chooses the word, he always chooses "cognitivism"; if someone else says "realism," he doesn't object, but he makes sure to define it to match cognitivism and indicates that there are other notions of realism that he doesn't endorse.

Comment author: Wei_Dai 02 February 2010 10:59:29AM 0 points [-]

Thanks for pointing out the error. I changed it to "classify".

Comment author: TheAncientGeek 29 May 2014 02:51:04PM *  0 points [-]

Should has many meanings. Which moral system I believe in is meta level, not object level and probably implies an epistemic-should or rational-should rather than moral-should.

Likewise, not all normative judgement is morality. What you should do to maximise personal pleasure, or make money, or "win" in some way, is generally not what you morally-should do.

Comment author: Furcas 31 January 2010 09:18:49PM *  11 points [-]

I think it would do us all a lot of good (and it would be a lot clearer) to use the word 'morality' to mean all the implications that follow from all terminal values, much as we use the word 'mathematics' to mean all the theorems that follow from all axioms. This would force us to specify which kind of morality we're talking about.

For example, it would be meaningless to ask if I should steal from the rich. It would only be meaningful to ask if I me-should steal from the rich (i.e. if it follows from my terminal values), or if I you-should steal from the rich (i.e. if it follows from your terminal values), or if I us-should steal from the rich (i.e. if it follows from the terminal values we share), or if I Americans-should steal from the rich (i.e. if it follows from the terminal values that Americans share), etc.

I know I'm not explaining anything you don't already know, Eliezer; my point is that your use of the words 'morality' and 'should' has been confusing quite a few people. Or perhaps it would be more accurate to say that your use of those words has failed to extricate certain people from their pre-existing confusion.

Comment author: Eliezer_Yudkowsky 31 January 2010 11:13:57PM 10 points [-]

But then morality does not have as its subject matter "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."

Instead, it has primarily as its subject matter a list of ways to transform the universe into paperclips, cheesecake, needles, orgasmium, and only finally, a long way down the list, into eudaimonium.

I think this is not the subject matter that most people are talking about when they talk about morality. We should have a different name for this new subject, like "decision theory".

Comment author: Matt_Simpson 01 February 2010 11:45:44PM *  2 points [-]

But then morality does not have as its subject matter....

I think you can keep that definition: define morality and morality-human. However, at least in the metaethics sequence, it would have done a lot of good to distinguish between morality-Joe and morality-Jane even if you were eventually going to argue that the two were equivalent. Once you're finished arguing that point, however, go on using the term "morality" the way you want to.

I only say this because of my own experience. I didn't really understand the metaethics sequence when I first read it. I was also struggling with Hume at the time, and it was actually that struggle that led me to make the connection between what an agent "should" do and decision theory. Only later I realized that was exactly what you were doing, and I chalk part of it up to confusing terminology. If you dig through some of the original posts, I was (one of many?) confusing your arguments for classical utilitarianism.

On the other hand, I may not be representative. I'm used to thinking of agent's utility functions through economics, so the leap to should-X/morality-X connected to X's utility function was a small one, relatively speaking.

Comment author: Furcas 31 January 2010 11:42:16PM *  8 points [-]

I think this is not the subject matter that most people are talking about when they talk about morality.

True, as long as they're talking about the stuff that is implied by their terminal values.

However, when they start talking about the stuff that is implied by other people's (or aliens', or AIs') terminal values, the meaning they attach to the word 'morality' is a lot closer to the one I'm proposing. They might say things like, "Well, female genital mutilation is moral to Sudanese people. Um, I mean, errr, uh...", and then they're really confused. This confusion would vanish (or at least, would be more likely to vanish) if they were forced to say, "Well, female genital mutilation is Sudanese-moral but me-immoral."

Ideally, to avoid all confusion we should get rid of the word morality completely, and have everyone speak in terms of goals and desires instead.

Comment author: Jordan 01 February 2010 06:39:11AM *  3 points [-]

Agreed. If it happened that there were only a few different sets of terminal values in existence, then I would be OK with assigning different words to the pursuit of those different sets. One of those words could be 'moral'. However, as is, the set of all terminal values represented by humans is too fractured and varied.

A large chunk of the list Eliezer provides in the above comment probably is nearly universal to humanity, but the entire list is not, and there are certainly many disputes on the relative ordering (especially as to what is on top).

Comment author: byrnema 31 January 2010 09:43:18PM *  4 points [-]

I thought there was no way I could ever understand what Eliezer had written, but you've provided a clue. Should I translate this:

Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.

as this?

Human-morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is human-moral, we would agree with them about what is babyeating-moral, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.

Also, what was especially perplexing, translate:

"What should be done with the universe" invokes a criterion of preference, "should", which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe [...] They do the babyeating thing, we do the right thing;

as:

"What should be done with the universe" invokes a criterion of preference, "human-should", which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe [...] They do the babyeating-right thing, we do the human-right thing; ?

Comment author: Furcas 31 January 2010 10:18:32PM *  3 points [-]

Should I translate this: [...] as this? [...]

Yes.

Also, what was especially perplexing, translate: "[...] as: [...] ?

Yes!

Comment author: Eliezer_Yudkowsky 31 January 2010 11:14:15PM 1 point [-]

No. See other replies.

Comment author: Rain 09 February 2010 07:53:28PM *  1 point [-]

You're wrong. Despite how much I'd like to have a universal, ultimate, true morality, you can't create it out of whole cloth by defining it as "what-humans-value". That's pretending there's no reason to look up, because, "Look! It's right there in front of you. So be sure not to look up."

Comment author: nolrai 02 February 2010 10:22:59PM 3 points [-]

See, I think you're misunderstanding his response. I mean, that is the only way I can interpret it to make sense.

Your insistence that it is not the right interpretation is very odd. I get that you don't want to trigger people's cooperation instincts, but that's the only framework in which talking about other beings makes sense.

The morality you are talking about is the human-now-extended morality (well, closer to the less-wrong-now-extended morality), in that it is the morality that results from extending from the values humans currently have. Now you seem to have a need to categorize your own morality as different from others' in order to feel right about imposing it? So you categorize it as simply morality, but your morality is not necessarily my morality, and so that categorization feels iffy to me. Now it's certainly closer to mine than to the babyeaters', but I have no proof it is the same. Calling it simply Morality papers over this.

Comment author: Furcas 01 February 2010 12:05:39AM *  4 points [-]

I understand and agree with your point that the long list of terminal values that most humans share aren't the 'right' ones because they're values that humans have. If Omega altered the brain of every human so that we had completely different values, 'morality' wouldn't change.

Therefore, to be perfectly precise, byrnema would have to edit her comment to substitute the long list of values that humans happen to share for the word 'human', and the long list of values that Babyeaters happen to share for the word 'babyeating'.

So yeah, I get why someone who doesn't want to create this kind of confusion in his interlocutors would avoid saying "human-right" and "human-moral". The problem is that you're creating another kind of confusion.

Comment author: byrnema 01 February 2010 12:37:40AM 1 point [-]

If Omega altered the brain of every human so that we had completely different values, 'morality' wouldn't change.

Is this because morality is reserved for a particular list - the list we currently have -- rather than a token for any list that could be had?

Comment author: Furcas 01 February 2010 12:49:32AM 2 points [-]

It's because [long list of terminal values that current humans happen to share]-morality is defined by the long list of terminal values that current humans happen to share. It's not defined by the list of terminal values that post-Omega humans would happen to have.

Is arithmetic "reserved for" a particular list of axioms or for a token for any list of axioms? Neither. Arithmetic is its axioms and all that can be computed from them.

Comment author: komponisto 31 January 2010 09:12:59PM 11 points [-]

I think there's an ambiguity between "realism" in the sense of "these statements I'm making are answers to a well-formed question and have a truth value" and "morality is a transcendent ineffable stuff floating out there which compels all agents to obey and could make murder right by having a different state".

Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It's what people are automatically going to think you're talking about if you go around shouting "Yes Virginia, there are moral facts after all!"

Meanwhile, the general public has a term for the view that you and I share: they call it "moral relativism".

I don't recall exactly, and I haven't yet bothered to look it up, but I believe when you first introduced your metaethics, there were people (myself among them, I think), who objected, not to your actual meta-ethical views, but to the way that you vigorously denied that you were a "relativist"; and you misunderstood them/us as objecting to your theory itself (I think you maybe even threw in an accusation of not comprehending the logical subtleties of Loeb's Theorem).

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans. Thus, it is automatically subject to the "chauvinism" objection with respect to e.g. Babyeaters: we prefer one thing, they prefer another -- why should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer". But people find that answer unpalatable -- and one reason they might is because it would seem to imply that different human cultures should similarly run right over each other if they don't think they share the same values. Now, we may not like the term "relativism", but it seems to me that this "chauvinism" objection is one that you (and I) need to take at least somewhat seriously.

Comment author: Eliezer_Yudkowsky 31 January 2010 10:52:38PM 4 points [-]

Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view.

No, it's not. The naive, common-sense human view is that sneaking into Jane's tent while she's not there and stealing her water-gourd is "wrong". People don't end up talking about transcendent ineffable stuff until they have pursued bad philosophy for a considerable length of time. And the conclusion - that you can make murder right without changing the murder itself but by changing a sort of ineffable stuff that makes the murder wrong - is one that, once the implications are put baldly, squarely disagrees with naive moralism. It is an attempt to rescue a naive misunderstanding of the subject matter of mind and ontology, at the expense of naive morality.

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans

I agree that this constitutes relativism, and deny that I am a relativist.

why should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer".

See above. The correct answer is "Because children shouldn't die, they should live and be happy and have fun." Note the lack of any reference to humans - this is the sort of logical fact that humans find compelling, but it is not a logical fact about humans. It is a physical fact that I find that logic compelling, but this physical fact is not, itself, the sort of fact that I find compelling.

This is the part of the problem which I find myself unable to explain well to the LessWrongians who self-identify as moral non-realists. It is, admittedly, more subtle than the point about there not being transcendent ineffable stuff, but still, there is a further point and y'all don't seem to be getting it...

Comment author: komponisto 01 February 2010 01:15:37AM *  8 points [-]

I agree that this constitutes relativism, and deny that I am a relativist.

It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.

I have the same feeling, from the other direction.

I feel like I completely understand the error you're warning against in No License To Be Human; if I'm making a mistake, it's not that one. I totally get that "right", as you use it, is a rigid designator; if you changed humans, that wouldn't change what's right. Fine. The fact remains, however, that "right" is a highly specific, information-theoretically complex computation. You have to look in a specific, narrow region of computation-space to find it. This is what makes you vulnerable to the chauvinism charge; there are lots of other computations that you didn't decide to single out and call "right", and the question is: why not? What makes this one so special? The answer is that you looked at human brains, as they happen to be constituted, and said, "This is a nice thing we've got going here; let's preserve it."

Yes, of course that doesn't constitute a general license to look at the brains of whatever species you happen to be a member of to decide what's "right"; if the Babyeaters or Pebblesorters did this, they'd get the wrong answer. But that doesn't change the fact that there's no way to convince Babyeaters or Pebblesorters to be interested in "rightness" rather than babyeating or primality. It is this lack of a totally-neutral, agent-independent persuasion route that is responsible for the fundamentally relative nature of morality.

And yes, of course, it's a mistake to expect to find any argument that would convince every mind, or an ideal philosopher of perfect emptiness -- that's why moral realism is a mistake!

Comment author: ciphergoth 31 January 2010 11:19:20PM *  1 point [-]

I promise to take it seriously if you need to refer to Löb's theorem in your response. I once understood your cartoon guide and could again if need be.

If we concede that when people say "wrong", they're referring to the output of a particular function to which we don't have direct access, doesn't the problem still arise when we ask how to identify what function that is? In order to pin down what it is that we're looking for, in order to get any information about it, we have to interview human subjects. Out of all the possible judgment-specifying functions out there, what's special about this one is precisely the relationship humans have with it.

Comment author: Nick_Tarleton 31 January 2010 09:48:51PM *  3 points [-]

Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It's what people are automatically going to think you're talking about if you go around shouting "Yes Virginia, there are moral facts after all!"

Agreed that this is important. (ETA: I now think Eliezer is right about this.)

Meanwhile, the general public has a term for the view that you and I share: they call it "moral relativism".

We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands "moral relativism" to exclude (b), and I don't think there's any short term in common (not philosophical) usage that includes the conjunction of (a) and (b).

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans.

Eliezer doesn't define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans. See No License to be Human.

Comment author: komponisto 31 January 2010 10:05:26PM 4 points [-]

We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands "moral relativism" to exclude (b)

I think that's uncharitable to the public: surely everyone should admit that people can be mistaken, on occasion, about what they themselves think. A view that holds that nothing that comes out of a person's mouth can ever be wrong is scarcely worth discussing.

Eliezer doesn't define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans.

The fact that this computation just so happens to be instantiated by humans and nothing else in the known universe cannot be a coincidence; surely there's a causal relation between humans' instantiating the computation and Eliezer's referring to it.

Comment author: Alicorn 31 January 2010 10:30:16PM 9 points [-]

surely everyone should admit that people can be mistaken, on occasion, about what they themselves think.

This is far from uncontroversial in the general population.

Comment author: Eliezer_Yudkowsky 31 January 2010 11:05:52PM 4 points [-]

surely there's a causal relation between humans' instantiating the computation and Eliezer's referring to it.

Of course there's a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it's not appealed to as the moral justification. We shouldn't save babies because-morally it's the human thing to do but because-morally it's the right thing to do. What physically causes us to save the babies is a combination of the logical fact that saving babies is the right thing to do, and the physical fact that we are compelled by those sorts of logical facts. What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness - in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness. The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like "If we wanted to eat babies, then that would be the right thing to do."

Comment author: Tyrrell_McAllister 31 January 2010 11:36:40PM *  15 points [-]

The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like "If we wanted to eat babies, then that would be the right thing to do."

The moral relativist who says that doesn't really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.

For the relativist, an algorithm is a morality when it is a logic that compels an agent (in the limit of reflection, etc.). For you, an algorithm is a morality when it is the logic that in fact compels human agents (in the limit of reflection, etc.). That is why your view is a kind of relativism. You just say "morality" where other relativists would say "the morality that humans in fact have".

You also seem more optimistic than most relativists that all non-mutant humans implement very nearly the same compulsive logic. But other relativists admit that this is a real possibility, and they wouldn't take it to mean that they were wrong to be relativists.

If there is an advantage to the relativists' use of "morality", it is that their use doesn't prejudge the question of whether all humans implement the same compulsive logic.

Comment author: Rain 09 February 2010 08:31:12PM *  1 point [-]

I agree with this comment and feel that it offers strong points against Eliezer's way of talking about this issue.

Comment author: komponisto 01 February 2010 01:37:58AM *  3 points [-]

Of course there's a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it's not appealed to as the moral justification

Of course it isn't, because we're doing meta-ethics here, and don't yet have access to the notion of "moral justification"; we're in the process of deciding which kinds of things will be used as "moral justification".

It's your metamorality that is human-dependent, not your morality; see my other comment.

Comment author: Eliezer_Yudkowsky 01 February 2010 01:44:33AM *  3 points [-]

Now I'm confused. I don't understand how you can have preferences that you use to decide what ought to count as a "moral justification" without already having a moral reference frame.

Since we don't have conscious access to our premises, and we haven't finished reflecting on them, we sometimes go around studying our own conclusions in an effort to discover what counts as a moral justification, but that's not like a philosopher of pure emptiness constructing justificationness from scratch by appeal to some mysterious higher criterion. (Bearing in mind that when someone offers me a higher criterion, it usually ends up looking pretty uninteresting.)

Comment author: TheAncientGeek 29 May 2014 03:01:09PM *  1 point [-]

That would be epistemic preferences. It's epistemology (and allied fields, like logic and rationality) that really runs into circularity problems.

Comment author: komponisto 01 February 2010 03:53:06AM *  6 points [-]

I don't understand how you can have preferences that you use to decide what ought to count as a "moral justification" without already having a moral reference frame.

Well, consider an analogy from mathematical logic: when you write out a formal proof that 2+2 = 4, at some point in the process, you'll end up concatenating two symbols here and two symbols there to produce four symbols; but this doesn't mean you're appealing to the conclusion you're trying to prove in your proof; it just so happens that your ability to produce the proof depends on the truth of the proposition.

Similarly, when an AI with Morality programmed into it computes the correct action, it just follows the Morality algorithm directly, which doesn't necessarily refer explicitly to "humans" as such. But human programmers had to program the Morality algorithm into the AI in the first place; and the reason they did so is because they themselves were running something related to the Morality algorithm in their own brains. That, as you know, doesn't imply that the AI itself is appealing to "human values" in its actual computation (the Morality program need not make such a reference); but it does imply that the meta-ethical theory used by the programmers compelled them to (in an appropriate sense) look at their own brains to decide what to program into the AI.

Comment author: byrnema 01 February 2010 12:01:29AM *  4 points [-]

I agree that it seems as though I just don't understand. Sometimes, I feel perched on the edge of understanding, feel a little dizzy, and decide I don't understand.

I don't claim to be representative in any way, but my stumbling block seems to be this idea about how saving babies is right. Since I don't feel strongly that saving babies is "right", whenever you write, "saving babies is the right thing to do", I translate this as, "X is the right thing to do" where X is something that is right, whatever that might mean. I leave that as a variable to see if it gets answered later.

Then you write, "What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness - in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness."

How is wrongness or rightness baked into a subject matter?

Comment author: ciphergoth 31 January 2010 11:24:23PM 3 points [-]

Right, so a moral relativist is a kind of moral absolutist who believes that the One True Moral Rule is that you must do what is the collective moral will of the species you're part of.

Comment author: Eliezer_Yudkowsky 31 January 2010 11:39:49PM 5 points [-]

Yup, and so long as I'm going to be a moral absolutist anyway, why be that sort of moral absolutist?

Comment deleted 31 January 2010 09:38:17PM *  [-]
Comment author: timtyler 05 February 2010 07:29:48PM *  1 point [-]

Eliezer uses the word "should" in what seems to me to be a weird and highly counter-intuitive way.

Multiple people have advised him about this - but he seems to like his usage.

Comment author: Eliezer_Yudkowsky 31 January 2010 11:02:26PM 1 point [-]

I truly and honestly say to you, Roko, that while you got most of my points, maybe even 75% of my points, there seems to be a remaining point that is genuinely completely lost on you. And a number of other people. It is a difficult point. People here are making fun of my attempt to explain it using an analogy to Löb's Theorem, as if that was the sort of thing I did on a whim, or because of being stupid. But... my dear audience... really, by this point, you ought to be giving me the benefit of the doubt about that sort of thing.

Also, it appears from the comment posted below and earlier that this mysterious missed point is accessible to, for example, Nick Tarleton.

It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.

Comment author: SilasBarta 31 January 2010 11:21:49PM *  13 points [-]

Well, you did make a claim about what is the right translation when speaking to babyeaters:

we and they are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating

But there has to be some standard by which you prefer the explanation "we mistranslated the term 'morality'" to "we disagree about morality", right? What is that? Presumably, one could make your argument about any two languages, not just ones with a species gap:

"We and Spaniards are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to protect freedoms, not restrict them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the Spaniards would agree with us about what is moral, we would agree with them about what is familydutyhonoring."

ETA: A lot of positive response to this, but let me add that I think a better term in the last place would be something like "morality-to-Spaniards". The intuition behind the original phrasing was to show how you can redefine Spanish standards of morality to be "not-morality", but rather, just "things that we place different priority on".

But it's clearly absurd there: the correct translation of ética is not "ethics-to-Spaniards", but rather, just plain old "ethics". And the same reasoning should apply to the babyeater case.

Comment author: gregconen 02 February 2010 01:38:02PM *  4 points [-]

To go a step further, moral disagreement doesn't require a language barrier at all.

"We and abolitionists are talking about a different subject matter and it is an error of the "computer translation programs" that the word comes out as "morality" in both cases. Morality is about how to create a proper relationship between races, everyone knows that and they happen to be right. If we could get past difficulties of the "translation", the abolitionists would agree with us about what is moral, we would agree with them about what is abolitionism."

Comment author: blacktrance 29 May 2014 04:26:46PM *  0 points [-]

As I understand it, relativism doesn't mean "refers explicitly to particular agents". Suppose there's a morality-determining function that takes an agent's terminal values and their psychology/physiology and spits out what that agent should do. It would spit different things out for different agents, and even more different things for different kinds of agents (humans vs babyeaters). Nevertheless, this would not quite be moral relativism because it would still be the case that there's an objective morality-determining function that is to be applied to determine what one should do. Moral relativism would not merely say that there's no one right way one should act, it would also say that there's no one right way to determine how one should act.
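blacktrance's hypothetical morality-determining function can be sketched as a toy program. Everything here is my own illustrative assumption (the option names, the value labels, the weights); nothing is taken from the thread. The point it shows: one objective procedure, applied identically to every agent, that still yields different recommendations for agents with different terminal values.

```python
# Toy sketch of a "morality-determining function": a single objective
# procedure that takes an agent's terminal values and a menu of options,
# and returns the option those values favor. Purely illustrative.

def morality_function(terminal_values, options):
    """Same calculation for every agent: score each option by the total
    value-weight it satisfies, and pick the highest-scoring one."""
    def score(option):
        _, satisfied_values = option
        return sum(terminal_values.get(v, 0) for v in satisfied_values)
    return max(options, key=score)[0]

options = [
    ("save the baby", {"care", "survival"}),
    ("eat the baby", {"babyeating"}),
]

human_values = {"care": 10, "survival": 5}
babyeater_values = {"babyeating": 10}

# One function, two agents, two different outputs:
print(morality_function(human_values, options))      # save the baby
print(morality_function(babyeater_values, options))  # eat the baby
```

Whether calling such a function "objective morality" is apt is exactly what the ensuing exchange disputes: the procedure is agent-independent, but every output is indexed to some agent's values.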

Comment author: TheAncientGeek 29 May 2014 04:30:00PM *  0 points [-]

It's not objective, because its results differ with differing terminal values. An objective morality machine would tell you what you should do, not tell you how to satisfy your values. In other words, morality isn't decision theory.

Comment author: blacktrance 29 May 2014 04:42:01PM 0 points [-]

An objective morality machine would tell you what you should do, not tell you how to satisfy your values

Why must the two be mutually exclusive? Why can't morality be about satisfying your values? One could say that morality properly understood is nothing more than the output of decision theory, or that outputs of decision theory that fall in a certain area labeled "moral questions" are morality.

Comment author: komponisto 29 May 2014 07:24:30PM 1 point [-]

Why can't morality be about satisfying your values?

Because that isn't how the term "morality" is typically used by humans. The "morality police" found in certain Islamic countries aren't life coaches. The Ten Commandments aren't conditional statements. When people complain about the decaying moral fabric of society, they're not talking about a decline in introspective ability.

Inherent to the concept of morality is the external imposition of values. (Not just decisions, because they also want you to obey the rules when they're not looking, you see?) Sociologically speaking, morality is a system for getting people to do unfun things by threatening ostracization.

Decision theory (and meta-decision-theory etc.) does not exist to analyze this concept (which is not designed for agents); it exists to replace it.

Comment author: bogus 31 May 2014 02:59:32PM *  0 points [-]

Because that isn't how the term "morality" is typically used by humans. The "morality police" found in certain Islamic countries aren't life coaches. The Ten Commandments aren't conditional statements. ... Inherent to the concept of morality is the external imposition of values.

Morality is about all of these things, and more besides. Although "outer" morality as embodied in moral codes and moral exemplars is definitely important, if there were no inner values for humans to care about in the first place, no one would be going around imposing them on others, or even debating them in any way.

And it is a fact about the world that most basic moral values are shared among human societies. Morality may or may not be objective, but it is definitely intersubjective in a way that looks 'objective' to the casual observer.

Comment author: blacktrance 29 May 2014 08:39:47PM *  0 points [-]

"Morality" is used by humans in unclear ways and I don't know how much can be gained from looking at common usage. It's more sensible to look at philosophical ethical theories rather than folk morality - and there you'll find that moral internalism and ethical egoism are within the realm of possible moralities.

Comment author: TheAncientGeek 29 May 2014 08:34:18PM *  0 points [-]

Morality done right is about the voluntary and mutual adjustment of values ( or rather actions expressing them).

Morality done wrong can go two ways. One failure mode is hedonism, where the individual takes no notice of the preferences of others; the other is authoritarianism, where "society" (rather, its representatives) imposes values that no one likes or has a say in.

Comment author: TheAncientGeek 29 May 2014 04:53:09PM *  -1 points [-]

Note the word objective.

Comment author: blacktrance 29 May 2014 05:05:21PM *  0 points [-]

An objective morality machine would tell you the One True Objective Thing TheAncientGeek Should Do, given your values, but this thing need not be the same as The One True Objective Thing Blacktrance Should Do. The calculations it performs are the same in both cases (which is what makes it objective), but the outputs are different.

Comment author: TheAncientGeek 29 May 2014 06:22:11PM 0 points [-]

You are misusing "objective". How does your usage differ from telling me what I should do subjectively? How can true-for-me-but-not-for-you clauses fail to indicate subjectivity? How can it be coherent to say there is one truth, only it is different for everybody?

Comment author: Vaniver 29 May 2014 06:44:56PM 3 points [-]

A person's height is objectively measurable; that does not mean all people have the same height.

Comment author: TheAncientGeek 29 May 2014 07:01:33PM 0 points [-]

"True about person P" is objective.

"True for person P about X" is subjective.

Subjectivity is multiple truths about one thing, ie multiple claims about one thing, which are indexed to individuals, and which would be contradictory without the indexing.

Comment author: blacktrance 29 May 2014 06:44:43PM 0 points [-]

Saying it's true-for-me-but-not-for-you conflates two very different things: truth being agent-relative and descriptive statements about agents being true or false depending on the agent they're referring to. "X is 6 feet tall" is true when X is someone who's 6 feet tall and false when X is someone who's 4 feet tall, and in neither case is it subjective, even though the truth-value depends on who X is. Morality is similar - "X is the right thing for TheAncientGeek to do" is an objectively true (or false) statement, regardless of who's evaluating you. Encountering "X is the right thing to do if you're Person A and the wrong thing to do if you're Person B" and thinking morality subjective is the same sort of mistake as if you encountered the statement "Person A is 6 feet tall and Person B is not 6 feet tall" and concluded that height is subjective.

Comment author: TheAncientGeek 29 May 2014 07:12:13PM 0 points [-]

See my other reply.

Indexing statements about individuals to individuals is harmless. Subjectivity comes in when you index statements about something else to individuals.

Morally relevant actions are actions which potentially affect others.

Your morality machine is subjective because I don't need to feed in anyone else's preferences, even though my actions will affect them.

Comment author: komponisto 29 May 2014 07:10:41PM *  0 points [-]

Morality is similar - "X is the right thing for TheAncientGeek to do" is an objectively true (or false) statement, regardless of who's evaluating you.

Not so! Rather, "X is the right thing for TheAncientGeek to do given TheAncientGeek's values" is an objectively true (or false) statement. But "X is the right thing for TheAncientGeek to do" tout court is not; it depends on a specific value system being implicitly understood.

Comment author: Vladimir_Nesov 01 February 2010 09:02:31AM *  0 points [-]

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans.

Unfortunately, it's not that easy. An agent, given by itself, doesn't determine preference. It probably does so to a large extent, but not entirely. There is no subject matter of "preference" in general. "Human preference" is already a specific question that someone has to state, that doesn't magically appear from a given "human". A "human" might only help (I hope) to pinpoint the question precisely, if you start in the general ballpark of what you'd want to ask.

I suspect that "Vague statement of human preference"+"human" is enough to get a question of "human preference", and the method of using the agent's algorithm is general enough for e.g. "Vague statement of human preference"+"babyeater" to get a precise question of "babyeater preference", but it's not a given, and isn't even expected to "work" for more alien agents, who are compelled by completely different kinds of questions (not that you'd have a way of recognizing such "error").

The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself. What humans are is not the info that compels you to define human preference in a particular way, although what humans are may be used as a tool in the definition of human preference, simply because you can pull the right levers and point to the chunks of info that go into the definition you choose.

[W]hy should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer"

That's not a justification. They may turn out to do something right, where you were mistaken, and you'll be compelled to correct.

Comment author: komponisto 01 February 2010 11:17:00AM 0 points [-]

The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself.

Yes.

Comment author: Unknowns 01 February 2010 04:17:42AM *  -1 points [-]

As it is commonly understood, Eliezer is definitely NOT a moral relativist.

Comment author: komponisto 01 February 2010 04:22:14AM *  2 points [-]

(Downvoted for denying my claim without addressing my argument. That's very annoying.)

Comment author: MichaelBishop 03 February 2010 05:38:42PM *  0 points [-]

Re: denying a claim without addressing the argument: IMO, such comments are acceptable when the commenter is of high enough status in the community. Obviously I'd prefer they address the argument, but I consider myself better off just knowing that certain people agree or disagree.

ADDED: Note, I am merely stating my personal preference, not insisting that my personal preference become normatively binding on LW. I also happen to agree with Komponisto's judgment that Unknowns previous comment was unhelpful.

Comment author: komponisto 03 February 2010 05:50:36PM *  4 points [-]

I disagree.

ETA: Note that an implication of what you said is that replying in that manner constitutes an assertion of higher status than the other person; this is exactly why it is irritating.

Comment author: MichaelBishop 03 February 2010 06:31:13PM -1 points [-]

I think assertions of higher status can sometimes be characterized as justifiable or even desirable. Eliezer does this all the time. The alternative to "stating disagreement while failing to address the details of the argument," is often to ignore the comment altogether. (Also, see edit to my previous comment before replying further.)

Comment author: komponisto 03 February 2010 06:37:57PM 0 points [-]

Well, if you agree with me about that particular comment, maybe it would have been preferable to wait for an occasion where you actually disagreed with my judgment to make this point?

(This would help cut down on "fake disagreements", i.e. disagreements arising out of misunderstanding.)

Comment author: MichaelBishop 03 February 2010 06:49:53PM 1 point [-]

Agreed.

Comment author: MrHen 03 February 2010 06:06:27PM 0 points [-]

I think the manner in which komponisto was calling Eliezer a moral relativist deserves a more thorough answer. If I make an off-handed remark and someone disagrees with me, I find an off-handed remark fair. If I spend three paragraphs and get, "No," as a response I will be annoyed.

In this case, I side with komponisto.

Comment author: TheAncientGeek 29 May 2014 03:12:29PM *  0 points [-]

Not individual-level relativism, or not group-level relativism?

Comment author: Kevin 01 February 2010 04:56:11AM *  -1 points [-]

As I understand the common understanding, moral relativist commonly means not believing in absolute morality, which I think is pretty much all of us.

Comment deleted 31 January 2010 09:07:13PM *  [-]
Comment author: ciphergoth 31 January 2010 11:08:59PM *  1 point [-]

No, as far as I can tell he's using "moral" to refer to CEV.

Which I think underestimates how parochial people are when they typically use the word "moral".

Comment deleted 01 February 2010 11:42:34AM *  [-]
Comment author: wnoise 01 February 2010 06:22:50PM 0 points [-]

I thought CEV was not what defined "right"-eliezer, but a useful heuristic for approximating "right"-eliezer, with a built-in hedge that straight majoritarianism doesn't have.

Comment author: ShardPhoenix 01 February 2010 10:14:42AM *  1 point [-]

I'd assume he imagines CEV as being pretty similar to his own particular preferences, though - otherwise, shouldn't he adjust his preferences already?

The main reason why I don't like the way Eliezer uses terms like "morality" is because it feels like he's trying to redefine "morality" to mean "what I, Eliezer Yudkowsky, personally want", which doesn't make for enlightening discussion.

Comment deleted 31 January 2010 09:04:25PM [-]
Comment author: Eliezer_Yudkowsky 31 January 2010 09:06:59PM 1 point [-]

But I just described two kinds of subject matter that are the only two kinds of subject matter I know about: physical facts and mathematical facts. "What should be done with the universe" invokes a criterion of preference, "should", which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe, and the fact that the humans are out trying to make the universe look the way it should, and you call these two facts a "disagreement", I don't understand what physical fact or logical fact is supposed to be the common subject matter which is being referred-to. They do the babyeating thing, we do the right thing; that's not a subject matter.

Comment author: ata 01 February 2010 10:47:36AM *  4 points [-]

I haven't finished reading your meta-ethics sequence, so I apologize in advance if this is something that you've already addressed, but just from this exchange, I'm wondering:

Suppose that instead of talking about humans and Babyeaters, we talk about groups of humans with equally strong feelings of morality but opposite ideas about it. Suppose we take one person who feels moral when saving a little girl from being murdered, and another person who feels moral when murdering a little girl as punishment for having been raped. This seems closely analogous to your "Morality is about how to save babies, not eat them, everyone knows that and they happen to be right." It would sound just as reasonable to say that everybody knows that morality is about saving children rather than murdering them, but sadly, it's not the case that "everybody knows" this: as you know, there are cultures existing right now where a girl would be put to death by honestly morally-outraged elders for the abominable sin of being raped, horrifying though this fact is.

So let's take two people (or two larger groups of people, if you prefer) from each of these cultures. We could have them imagine these actions as intensely as possible, and scan their brains for relevant electrical and chemical information, find out what parts of the brain are being used and what kinds of emotions are active. (If a control is needed, we could scan the brain of someone intensely imagining some action everyone would consider irrelevant to morality, such as brushing one's teeth. I don't think there are any cultures that deem that evil, are there?) If the child-rescuer and child-murderer seem to be feeling the same emotions, having the same experience of righteousness, when imagining their opposite acts, would you still conclude that it is a mistranslation/misuse to identify our word "morality" with whatever word the righteous-feeling child-murderer is using for what appears to be the same feeling? Or would you conclude that this is a situation where two people are talking about the same subject matter but have drastically opposing ideas about it?

If the latter is the case, then I do think I get the point of the Babyeater thought experiments: although they appear to us to have some mechanism of making moral judgments (judgments that we find horrible), this mechanism serves different cognitive functions for them than our moral intuition does for us, and it originated in them for different reasons. Therefore, they cannot be reasonably considered to be differently-calibrated versions of the same feature. Is that right?

Comment author: Eliezer_Yudkowsky 01 February 2010 06:14:23PM *  4 points [-]

If the child-rescuer and child-murderer seem to be feeling the same emotions, having the same experience of righteousness, when imagining their opposite acts, would you still conclude that it is a mistranslation/misuse to identify our word "morality" with whatever word the righteous-feeling child-murderer is using for what appears to be the same feeling?

Depends. If the child-murderer knew everything about the true state of affairs and everything about the workings of their own inner mind, would they still disagree with the child-rescuer? If so, then it's pretty futile to pretend that they're talking about the same subject matter when they talk about that-which-makes-me-experience-a-feeling-of-being-justified. It would be like if one species of aliens saw green when contemplating real numbers and another species of aliens saw green when contemplating ordinals; attempts to discuss that-which-makes-me-see-green as if it were the same mathematical subject matter are doomed to chaos. By the way, it looks to me like a strong possibility is that reasonable methods of extrapolating volitions will give you a spread of extrapolated-child-murderers some of which are perfectly selfish hedonists, some of which are child-rescuers, and some of which are Babyeaters.

And yes, this was the approximate point of the Babyeater thought experiment.

Comment author: loqi 01 February 2010 10:01:13AM 3 points [-]

The problem I have with this use of the words "should" and "good" is that it treats them like semantic primitives, rather than functions of context. We use them in explicitly delimited contexts all the time:

  • "If you want to see why the server crashed, you should check the logs."
  • "You should play Braid, if platformers are your thing."
  • "You should invest in a quality fork, if you plan on eating many babies."
  • "They should glue their pebble heaps together, if they want them to retain their primality."

Since I'm having a hard time parting with the "should" of type "Goal context -> Action on causal path to goal", the only sense I can make out of your position is that "if your goal is [extensional reference to the stuff that compels humans]" is a desirable default context.

If you agree that "What should be done with the universe" is a different question than "What should be done with the universe if we want to maximize entropy as quickly as possible", then either you're agreeing that what we want causally affects should-ness, or you're agreeing that the issue isn't really "should"'s meaning, it's what the goal context should be when not explicitly supplied. And you seem to be saying that it should be an extensional reference to commonplace human morality.
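loqi's type "Goal context -> Action on causal path to goal" can be rendered as a small sketch. The data here (the actions and the goals they serve) is my own illustration, loosely echoing loqi's bullet examples; the point is just that "should" takes an explicit goal context and returns actions leading to it, so the remaining question is only which context gets plugged in by default.

```python
# Sketch of loqi's reading of "should" as a two-place function:
# Goal context -> Action on causal path to goal. Data is illustrative.

def should(goal_context, actions):
    """Return the actions on a causal path to the given goal."""
    return [act for act, goals_served in actions if goal_context in goals_served]

actions = [
    ("check the logs", {"diagnose the crash"}),
    ("play Braid", {"enjoy platformers"}),
    ("glue the pebble heaps together", {"retain primality"}),
]

print(should("diagnose the crash", actions))  # ['check the logs']
```

On this reading, an unqualified "should" amounts to partially applying the function to a fixed default context, and the dispute is over whether that default context is an extensional reference to commonplace human morality.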

Comment author: Alicorn 01 February 2010 01:09:04AM 5 points [-]

The rampant dismissal of so many restatements of your position has tempted me to try my own. Tell me if I've got it right or not:

There is a topic, which covers such subtopics as those listed here, which is the only thing in fact referred to by the English word "morality" and associated terms like "should" and "right". It is an error to refer to other things, like eating babies, as "moral" in the same way it would be an error to refer to black-and-white Asian-native ursine creatures as "lobsters": people who do it simply aren't talking about morality. Once the subject matter of morality is properly nailed down, and all other facts are known, there's no room for disagreement about morality, what ought to be done, what actions are wrong, etc. any more than there is about the bachelorhood of unmarried men. However, it happens that the vast majority of kinds of possible minds don't give a crap about morality, and while they might agree with us about what they should do, they wouldn't find that motivating. Humans, as a matter of a rather lucky causal history, do care about morality, in much the same way that pebblesorters care about primes - it's just one of the things we're built to find worth thinking about and working towards. By a similar token, we are responsive to arguments about features of situations that give them moral character of one sort or another.

Comment author: aausch 01 February 2010 01:42:39AM 1 point [-]

This is the interpretation I also have of Eliezer's view, and it confuses me, as it applies to the story.

For example, I would expect aliens which do not value morality would be significantly more difficult to communicate with.

Also, the back story for the aliens gives a plausible argument for their actions as arising from a different path towards the same ultimate morality.

I interpreted the story as showing aliens which, as a quirk of their history and culture, have significant holes in their morality - holes which, given enough time, I would expect will disappear.

Comment author: orthonormal 01 February 2010 02:48:49AM 2 points [-]

Also, the back story for the aliens gives a plausible argument for their actions as arising from a different path towards the same ultimate morality.

Really? Although babyeater_should coincides with akon_should on the notion of "toleration of reasonable mistakes" and on the Prisoner's Dilemma, it seems clear from the story that these functions wouldn't converge on the topic of "eating babies". (If the Superhappies had their way, both functions would just be replaced by a new "compromise" function, but neither the Babyeaters nor the humans want that, and it appears to be the wrong choice according to both babyeater_should and akon_should.)

Comment author: Eliezer_Yudkowsky 01 February 2010 01:26:08AM *  1 point [-]

...sounds mostly good so far. Except that there's plenty of justification for thinking about morality besides "it's something we happen to think about". They're just... well... there's no other way to put this... perfectly valid, moving, compelling, heartwarming, moral justifications. They're actually better justifications than being compelled by some sort of ineffable transcendent compellingness stuff - if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to! (I think this may be the part Roko still doesn't get.) Also, the "lucky causal history" isn't luck at all, of course.

It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended, to the extent of hearing out each other's arguments and proceeding on the assumption that we actually are disagreeing about something.

Comment author: Alicorn 02 February 2010 06:00:31AM *  6 points [-]

I'm curious about how your idea handles an edge case. (I am merely curious - not to downplay curiosity, but you shouldn't consider it a reason to devote considerable brain-cycles on its own if it'd take considerable brain-cycles to answer, because I think your appropriation of moral terminology is silly and I won't find the answer useful for any specific purpose.)

The edge case: I have invented an alien species called the Zaee (for freeform roleplaying game purposes; it only recently occurred to me that they have bearing on this topic). The Zaee have wings, and can fly starting in early childhood. They consider it "loiyen" (the Zaee word that most nearly translates as "morally wrong") for a child's birth mother to continue raising her offspring (call it a son) once he is ready to take off for the first time; they deal with this by having her entrust her son to a friend, or a friend of the father, or, in an emergency, somebody who's in a similar bind and can just swap children with her. Someone who has a child without a plan for how to foster him out at the proper time (even if it's "find a stranger to swap with") is seen as being just as irresponsible as a human mother who had a child without a clue how she planned to feed him would be (even if it's "rely on government assistance").

There is no particular reason why a Zaee child raised to adulthood by his biological mother could not wind up within the Zaee-normal range of psychology (not that they'd ever let this be tested experimentally); however, they'd find this statement about as compelling as the fact that there's no reason a human child, kidnapped as a two-year-old from his natural parents and adopted by a duped but competent couple overseas, couldn't grow up to be a normal human: it still seems a dreadful thing to do, and to the child, not just to the parents.

When Zaee interact with humans they readily concede that this precept of their <moral system> has no bearing on any human action whatever: human children cannot fly. And in the majority of other respects, Zaee are like humans in their <morality> - if you plopped a baby Zaee brain in a baby human body (and resolved the body dysphoria and aging rate issues) and he grew up on Earth, he'd be darned quirky, but wouldn't be diagnosed with a mental illness or anything.

Other possibly relevant information: when Zaee programmers program AIs (not the recursively self-improving kind; much more standard-issue sci-fi types), they apply the same principle, and don't "keep" the AIs in their own employ past a certain point. (A particular tradition of programming frequently has its graduates arrange beforehand to swap their AIs.) The AIs normally don't run on mobile hardware, which is irrelevant anyway, because the point in question for them isn't flight. However, Zaee are not particularly offended by the practice of human programmers keeping their own AIs indefinitely. The Zaee would be very upset if humans genetically engineered themselves to have wings from birth which became usable before adulthood and this didn't yield a change in human fostering habits. (I have yet to have cause to get a Zaee interacting with another alien species that can also fly in the game for which they were designed, but anticipate that if I did so, "grimly distasteful bare-tolerance" would be the most appropriate attitude for the Zaee in the interaction. They're not very violent.)

And the question: Are the Zaee "interested in morality"? Are we interested in <Zaee word that most nearly translates as "morality">? Do the two referents mean distinct concepts that just happen to overlap some or be compatible in a special way? How do you talk about this situation, using the words you have appropriated?

Comment author: Unknowns 01 February 2010 07:28:36AM 7 points [-]

Eliezer, I don't understand how you can say that the "lucky causal history" wasn't luck, unless you also say "if humans had evolved to eat babies, babyeating would have been right."

If it wouldn't have been right even in that event, then it took a stupendous amount of luck for us to evolve in just such a way that we care about things that are right, instead of other things.

Either that or there is a shadowy figure.

Comment author: aleksiL 01 February 2010 04:43:14PM 2 points [-]

As I understand Eliezer's position, when babyeater-humans say "right", they actually mean babyeating. They'd need a word like "babysaving" to refer to what's right.

Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we'd have a word for its output instead.

I think Eliezer sees translating the babyeater word for babyeating as "right" as an error similar to translating their word for babyeaters as "human".

Comment author: Unknowns 01 February 2010 05:04:36PM 3 points [-]

Precisely. So it was luck that we instantiate this algorithm, instead of a different one.

Comment author: LauraABJ 01 February 2010 03:32:52AM 6 points [-]

Ah, so moral justifications are better justifications because they feel good to think about. Ah, happy children playing... Ah, lovers reuniting... Ah, the Magababga's chief warrior being roasted as dinner by our chief warrior who slew him nobly in combat...

I really don't see why we should expect 'morality' to extrapolate to the same mathematical axioms if we applied CEV to different subsets of the population. Sure, you can just define the word morality to include the sum total of all human brains/minds/wills/opinions, but that wouldn't change the fact that these people, given their druthers and their own algorithms, would morally disagree. Evolutionary psychology is a very fine just-so story for many things that people do, but people's, dare I say, aesthetic sense of right and wrong is largely driven by culture and circumstance. What would you say if Omega looked at the people of Earth and said, "Yes, there is enough agreement on what 'morality' is that we need only define 80,000 separate logically consistent moral algorithms to cover everybody!"

Comment author: Zack_M_Davis 01 February 2010 03:59:33AM 5 points [-]

this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended

Yes, but do you see why people get annoyed when you build that courtesy into your terminology?

Comment author: Alicorn 01 February 2010 02:28:12AM 3 points [-]

They're actually better justifications

"Better" by the moral standard of betterness, or by a standard unconnected to morality itself?

if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to!

Want to respond to because you happen to be the sort of creature that likes and is interested in these facts, or for some reason external to morality and your interest therein?

It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance

Why does this seem like a "drastic" assumption, even given your definition of "morality"?

Comment author: Eliezer_Yudkowsky 01 February 2010 02:31:53AM *  0 points [-]

I don't see why I'd want to use an immoral standard. I don't see why I ought to care about a standard unconnected to morality. And yes, I'm compelled by the sort of logical facts we name "moral justifications" physically-because I'm the sort of physical creature I am.

It's drastic because it closes down the possibility of further discourse.

Comment author: Alicorn 01 February 2010 02:32:59AM 7 points [-]

Is there some way in which this is not all fantastically circular?

Comment author: Psy-Kosh 01 February 2010 03:31:30AM 11 points [-]

How about something like this: there's a certain set of semi-abstract criteria that we call 'morality'. And we happen to be the sorts of beings that (for various reasons) happen to care about this morality stuff as opposed to caring about something else. Should we care about morality? Well, what is meant by "should"? It sure seems like that's a term we use to simply point to the same morality criteria/computation. In other words, "should we care about morality" seems to translate to "is it moral to care about morality", or "apply the morality function to 'care about morality' and check the output".

It would seem also that the answer is yes, it is moral to care about morality.

Some other creatures might somewhere care about something other than morality. That's not a disagreement about any facts or theory or anything, it's simply that we care about morality and they may care about something like "maximize paperclip production" or whatever.

But, of course, morality is better than paper-clip-ality. (And, of course, when we say "better", we mean "in terms of those criteria we care about"... ie, morality again.)

It's not quite circular. Us and the paperclipper creatures wouldn't really disagree about anything. They'd say "turning all the matter in the solar system into paperclips is paperclipish", and we'd agree. We'd say "it's more moral not to do so", and they'd agree.

The catch is that they don't give a dingdong about morality, and we don't give a dingdong about paperclipishness. And indeed that does make us better. And if they scanned our minds to see what we mean by "better", they'd agree. But then, the criterion we were referring to by the term "better" is simply not something the paperclippers care about.

"We happen to care about it" is not the justification. "It's moral" is the justification. It's just that our criterion for valid moral justification is, well... morality. Which is as it should be, etc.

Morality seems to be an objective criterion. Actions can be judged good or bad in terms of morality. We simply happen to care about morality instead of something else. And this is indeed a good thing.

Comment author: RomanDavis 24 May 2010 04:22:55PM 2 points [-]

Oh shit. I get it. Morality exists outside of ourselves in the same way that paperclips exist outside clippies.

Babyeating is justified by some of the same impulses as babysaving: protecting one's own genetic line.

It's not necessarily as well motivated by the criterion of saving sentient creatures from pain, but you might be able to make an argument for it. Maybe if you took the opposite path and said not that pain was bad, but that sentience / long life / grandchildren were good, then babyeating was a "moral decision" made for the sake of having grandchildren.

Comment author: byrnema 01 February 2010 04:02:53AM *  9 points [-]

I don't understand two sentences in a row. Not here, not in the meta-ethics sequence, not anywhere where you guys talk about morality.

I don't understand why I seem to be cognitively fine on other topics on Less Wrong, but then all of a sudden am Flowers for Algernon here.

I'm not going to comment anymore on this topic; it just so happens meta-morality or meta-ethics isn't something I worry about anyway. But I would like to part with the admonition that I don't see any reason why LW should be separating so many words from their original meanings -- "good", "better", "should", etc. It doesn't seem to be clarifying things even for you guys.

I think that when something is understood -- really understood -- you can write it down in words. If you can't describe an understanding, you don't own it.

Comment author: Alicorn 01 February 2010 03:36:41AM 4 points [-]

It looks circular to me. Of course, if you look hard enough at any views like this, the only choices are circles and terminating lines, and it seems almost an aesthetic matter which someone goes with, but this is such a small circle. It's right to care about morality and to be moral because morality says so and morality possesses the sole capacity to identify "rightness", including the rightness of caring about morality.

Comment author: Eliezer_Yudkowsky 01 February 2010 03:35:12AM 3 points [-]

Only in the sense that "2 + 2 = 4" is not fantastically circular.

Comment author: prase 03 February 2010 01:04:31PM *  0 points [-]

In some sense, the analogy between morality and arithmetic is right. On the other hand, the meaning of arithmetic can be described precisely enough that everybody means the same thing by the word. Here, I don't know exactly what you mean by morality. Yes, saving babies, not committing murder and all that stuff, but when it comes to details, I am pretty sure that you will often find yourself disagreeing with others about what is moral. Of course, in your language, any such disagreement means that somebody is wrong about the facts. What I am uncomfortable with is the lack of an unambiguous definition.

So, there is a computation named "morality", but nobody knows exactly what it is, and nobody gives methods for discovering new details of the yet-incomplete definition. Fair, but I don't see any compelling argument for attaching words to only partly defined objects, or for caring much about them. It seems to me that this approach pictures morality as ineffable stuff, although of a different kind than the standard bad philosophy does.

Comment author: Rain 09 February 2010 08:44:39PM *  0 points [-]

It seems you've encountered a curiosity-stopper, and are no longer willing to consider changes to your thoughts on morality, since that would be immoral. Is this the case?

Comment author: Eliezer_Yudkowsky 10 February 2010 12:53:33AM 2 points [-]

Wha? No. But you'd have to offer me a moral reason, as opposed to an immoral one.

Comment author: Alicorn 10 February 2010 01:00:52AM 3 points [-]

How about amoral reasons? Are those okay?

Comment author: byrnema 01 February 2010 01:43:09AM *  0 points [-]

However, it happens that the vast majority kinds of possible minds don't give a crap about morality, and while they might agree with us about what they should do, they wouldn't find that motivating.

What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should? Would your position hold that it is unlikely for them to have a different list or that they must be mistaken about the list -- that caring about what you "should" do means having the list we have?

Comment author: Eliezer_Yudkowsky 01 February 2010 01:48:21AM *  1 point [-]

What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should?

How'd they end up with the same premises and different conclusions? Broken reasoning about implications, like the human practice of rationalization? Bad empirical pictures of the physical universe leading to poor policy? If so, that all sounds like a perfectly ordinary situation.

Comment author: byrnema 01 February 2010 02:03:59AM *  0 points [-]

How'd they end up with the same premises and different conclusions?

They care about doing what is morally right, but they have different values. The baby-eaters, for example, thought it was morally right to optimize whatever they were optimizing with eating the babies, but didn't particularly value their babies' well-being.

Comment author: orthonormal 01 February 2010 02:40:53AM *  4 points [-]

Er, you might have missed the ancestor of this thread. In the conflict between fundamentally different systems of preference and value (more different than those of any two humans), it's probably more confusing than helpful to use the word "should" with the other one. Thus we might introduce another word, should2, which stands in relation to the aliens' mental constitution (etc) as should stands to ours.

This distinction is very helpful, because we might (for example) conclude from our moral reasoning that we should respect their moral values, and then be surprised that they don't reciprocate, if we don't realize that that aspect of should needn't have any counterpart in should2. If you use the same word, you might waste time trying to argue that the aliens should do this or respect that, applying the kind of moral reasoning that is valid in extrapolating should; when they don't give a crap for what they should do, they're working out what they should2 do.

(This is more or less the same argument as in Moral Error and Moral Disagreement, I think.)

Comment author: byrnema 01 February 2010 02:52:07AM 3 points [-]

I'm not sure. How can there be any confusion when I say they "do care about doing what they think they should?" I clearly mean should2 here.

Comment author: TheAncientGeek 29 May 2014 03:35:29PM -1 points [-]

There remains a third option in addition to evolutionarily hardwired stuff and ineffable, transcendent stuff.

Comment author: Wei_Dai 31 January 2010 09:25:33PM *  3 points [-]

But I just described two kinds of subject matter that are the only two kinds of subject matter I know about: physical facts and mathematical facts.

Suppose I ask

  • What is rationality?
  • Is UDT the right decision theory?
  • What is the right philosophy of mathematics?

Am I asking about physical facts or logical/mathematical facts? It seems like I'm asking about a third category of "philosophical facts".

We could say that the answer to "what is rationality" is whatever my meta-rationality computes, and hence reduce it to a physical+logical fact, but that really doesn't seem to help at all.

Comment author: Eliezer_Yudkowsky 31 January 2010 10:49:02PM 1 point [-]

These all sound to me like logical questions where you don't have conscious access to the premises you're using, and can only try to figure out the premises by looking at what seem like good or bad conclusions. But with respect to the general question of whether we are talking about (a) the way events are or (b) which conclusions follow from which premises, it sounds like we're doing the latter. Other "philosophical" questions (like 'What's up with the Born probabilities?' or 'How should I compute anthropic probabilities?') may actually be about (a).

Comment author: Wei_Dai 01 February 2010 09:24:46AM *  3 points [-]

Your answer seemed wrong to me, but it took me a long time to verbalize why. In the end, I think it's a map/territory confusion.

For comparison, suppose I'm trying to find the shortest way from home to work by visualizing a map of the city. I'm doing a computation in my mind, which can also be viewed as deriving implications from a set of premises. But that computation is about something external; and the answer isn't just a logical fact about what conclusions follow from certain premises.

When I ask myself "what is rationality?" I think the computation I'm doing in my head is also about something external to me, and it's not just a logical question where I don't have conscious access to the premises that I'm using, even though that's also the case.

So my definition of moral realism would be that when I do the meta-moral computation of asking "what moral premises should I accept?", that computation is about something that is not just inside my head. I think this is closer to what most people mean by the phrase.

Given the above, I think your meta-ethics is basically a denial of moral realism, but in such a way that it causes more confusion than clarity. Your position, if translated into the "shortest way to work" example, would be if someone told you that there is no fact of the matter about the shortest way to work because the whole city is just a figment of your imagination, and you reply that there is a fact of the matter about the computation in your mind, and that's good enough for you to call yourself a realist.

Comment author: Eliezer_Yudkowsky 01 February 2010 09:47:22AM 2 points [-]

When I ask myself "what is rationality?" I think the computation I'm doing in my head is also about something external to me

Well, if you're asking about human rationality, then the prudent-way-to-think involves lots of empirical info about the actual flaws in human cognition, and so on. If you're asking about rationality in the sense of probability theory, then the only reference to the actual that I can discern is about anthropics and possibly prudent priors - things like the Dutch Book Argument are math, which we find compelling because of our values.

If you think that we're referring to something else - what is it, where is it stored? Is there a stone tablet somewhere on which these things are written, on which I can scrawl graffiti to alter the very fabric of rationality? Probably not - so where are the facts that the discourse is about, in your view?

Comment author: Wei_Dai 01 February 2010 10:32:11AM 0 points [-]

I think "what is rationality" (and by that I mean ideal rationality) is like "does P=NP". There is some fact of the matter about it that is independent of what premises we choose to, or happen to, accept. I wish I knew where these facts live, or exactly how it is that we have any ability to determine them, but I don't. Fortunately, I don't think that really weakens my argument much.

Comment author: Eliezer_Yudkowsky 01 February 2010 10:42:58AM 4 points [-]

This is exactly what I refer to as a "logical fact" or "which conclusions follow from which premises". Wasn't that clear?

Actually, I guess it could be a bit less clear if you're not already used to thinking of all math as being about theorems derived from axioms which are premise-conclusion links, i.e., if the axioms are true of a model then the theorem is true of that model. Which is, I think, conventional in mathematics, but I suppose it could be less obvious.

In the case of P!=NP, you'll still need some axioms to prove it, and the axioms will identify the subject matter - they will let you talk about computations and running time, just as the Peano axioms identify the subject matter of the integers. It's not that you can make 2 + 2 = 5 by believing differently about the same subject matter, but that different axioms would cause you to be talking about a different subject matter than what we name the "integers".

Is this starting to sound a little familiar?
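The axioms-identify-the-subject-matter point can be made concrete in a proof assistant, where an axiom set fixes what you are talking about and a theorem is exactly a premise-conclusion link derived from it. A minimal sketch in Lean (illustrative only, not part of the original thread's argument):

```lean
-- "2 + 2 = 4" as a premise-conclusion link: once the Peano-style
-- definitions of Nat and + fix the subject matter, the conclusion
-- follows by computation alone, with no further input.
example : 2 + 2 = 4 := rfl
```

Believing differently would not make 2 + 2 = 5 true of the naturals; different axioms would simply define a different structure, which is the point being made about "morality" picking out one particular subject matter.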

Comment author: Wei_Dai 01 February 2010 12:22:10PM *  1 point [-]

Actually, I guess it could be a bit less clear if you're not already used to thinking of all math as being about theorems derived from axioms which are premise-conclusion links

But that's not all that math is. Suppose we eventually prove that P!=NP. How did we pick the axioms that we used to prove it? (And suppose we pick the wrong axioms. Would that change the fact that P!=NP?) Why are we pretty sure today that P!=NP without having a chain of premise-conclusion links? These are all parts of math; they're just parts of math that we don't understand.

ETA: To put it another way, if you ask someone who is working on the P!=NP question what he's doing, he is not going to answer that he is trying to determine whether a specific set of axioms proves or disproves P!=NP. He's going to answer that he's trying to determine whether P!=NP. If those axioms don't work out, he'll just pick another set. There is a sense that the problem is about something that is not identified by any specific set of axioms that he happens to hold in his brain, that any set of axioms he does pick is just a map to a territory that's "out there". But according to your meta-ethics, there is no "out there" for morality. So why does it deserve to be called realism?

Perhaps more to the point, do you agree that there is a coherent meta-ethical position that does deserve to be called moral realism, which asserts that moral and meta-moral computations are about something outside of individual humans or humanity as a whole (even if we're not sure how that works)?

Comment deleted 31 January 2010 09:35:36PM *  [-]
Comment author: Eliezer_Yudkowsky 31 January 2010 10:46:53PM 1 point [-]

Mm... I can agree that a treaty has subject matter and is talked about by both parties, and refers to subsequent physical events. It has a treaty-kept-condition which is not quite the same thing as its being "true". (Note: in the original story, no treaty was actually discussed with the Babyeaters.) Where does that put it on a fact/opinion chart?

Comment author: TheAncientGeek 29 May 2014 03:22:45PM 0 points [-]

It looks like you can disagree about values as well as facts.

Comment author: timtyler 31 January 2010 09:09:07PM 0 points [-]

Are there any "Strong Moral Realists"?

Comment author: Clippy 31 January 2010 11:30:40PM *  9 points [-]

Upon reading the definition, I count as a strong moral realist. (Most of you people here just need more convincing about what state the universe should be transformed to.)

Comment author: gregconen 31 January 2010 11:09:28PM 1 point [-]

Many theists would take that position. At least if they admitted the possibility of sentient AIs and aliens.

Comment author: Unknowns 01 February 2010 07:22:14AM 0 points [-]

I would say there's at least a 5% chance of that theory being true.

Comment author: Vladimir_Nesov 31 January 2010 08:48:13PM 0 points [-]

Now that was confusing...

Comment author: ciphergoth 31 January 2010 11:05:30PM 0 points [-]

I'm very surprised to hear you say that - the subject matter seems clearer to me now than it was before.

Comment author: [deleted] 02 February 2010 06:17:44PM -2 points [-]

If morality is encapsulated by a formal system, by Gödel's second theorem there will exist statements--moral statements--which are simultaneously true and not true. Can such a system reject either moral relativism or moral absolutism without contradicting itself?

Comment author: ata 06 February 2010 03:00:05AM *  0 points [-]

If a formal system has a single statement that is simultaneously true and not true, then you can prove any statement (and its opposite) in that system, and it is therefore useless. This was known before Gödel. His insight was that in a system that is not inconsistent (and that is complex enough to represent arithmetic), there will be some situations where given some proposition x, you can neither prove x nor ~x. That's not "simultaneously true and not true" ("true" is in the territory, a formal system is the map), it just means the truth value is unknowable within the system.

In any case, I think this is fairly irrelevant to moral philosophy, because Gödel's theorems are about formal systems representing number theory. I suppose you could somehow represent empirical statements (including moral statements, if we agreed on exactly what facts about reality they signify) in that form — take a structure representing the entire universe as an axiom, and deduce theorems from there — but that's rather impractical for obvious reasons, and there's nothing that really suggests that this provides any analogous insights about simpler and more possible modes of reasoning. In fact, you could change your statement to talk about any area of knowledge (say, "If science is encapsulated by a formal system..." "If aesthetics is encapsulated...") and it would make just as much sense (or just as little, rather).
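The first point above, that a single contradiction lets a system prove anything and everything, is the classical principle of explosion (ex falso quodlibet). A minimal sketch in Lean (illustrative only):

```lean
-- Principle of explosion: from a proof of p and a proof of ¬p,
-- any proposition q whatsoever follows.
example (p q : Prop) (h : p ∧ ¬p) : q :=
  absurd h.1 h.2
```

This is why an inconsistent formal system is useless as a map: it "proves" every statement and its negation alike.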

Comment author: TheAncientGeek 29 May 2014 03:49:25PM -1 points [-]

"Should" has many meanings. Which moral system I believe in is a meta-level question, not an object-level one, and probably implies an epistemic-should or rational-should rather than a moral-should.

Likewise, not all normative judgement is morality. What you should do to maximise personal pleasure, or make money, or "win" in some way, is generally not what you morally-should do.

Comment author: TheAncientGeek 29 May 2014 04:11:55PM -1 points [-]

Metaethical realism/objectivism makes the prediction that under some conditions, agents will converge on ethical beliefs. The OP by [deleted] seems to be arguing that realism doesn't have any object-level consequences. Which is half true. Absent a method of arriving at object-level truth, it doesn't. With one, it does.