komponisto comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong

18 [deleted] 31 January 2010 08:20PM


Comment author: Eliezer_Yudkowsky 31 January 2010 08:31:18PM 5 points [-]

I think there's an ambiguity between "realism" in the sense of "these statements I'm making about 'what's right' are answers to a well-formed question and have a truth value" and "the subject matter of moral discourse is a transcendent ineffable stuff floating out there which compels all agents to obey and which could make murder right by having a different state". Thinking that moral statements have a truth value is cognitivism, which sounds much less ambiguous to me, and that's why I prefer to talk about moral cognitivism rather than moral realism.

As a moral cognitivist, I would look at your diagram and disagree that the Baby-Eating Aliens and humans have different views of the same subject matter, rather, we and they are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.

I have a pending post-to-write on how, to the best of my knowledge, there are only two sorts of things that can make a proposition "true", namely physical events and logical implications, and of course mixtures of the two. I mention this because we have a legitimate epistemic preference for simpler hypotheses about the causes of physical events, but no such thing as an epistemic preference for "simpler axioms" when we are talking about logical facts. We may have an aesthetic preference for simpler axioms in math, but that is not the same thing. If there's no preference for simpler assumptions, that doesn't mean the issue is not a factual one, but it may suggest that we are dealing with logical facts rather than physical facts (statements which are made true by which conclusions follow from which premises, rather than the state of a causal event).

Added: Since I have a definite criterion for something being a "fact", I defend the notion of fact-ness against the charge of being a floating extra.

Comment author: komponisto 31 January 2010 09:12:59PM 11 points [-]

I think there's an ambiguity between "realism" in the sense of "these statements I'm making are answers to a well-formed question and have a truth value" and "morality is a transcendent ineffable stuff floating out there which compels all agents to obey and could make murder right by having a different state".

Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It's what people are automatically going to think you're talking about if you go around shouting "Yes Virginia, there are moral facts after all!"

Meanwhile, the general public has a term for the view that you and I share: they call it "moral relativism".

I don't recall exactly, and I haven't yet bothered to look it up, but I believe when you first introduced your metaethics, there were people (myself among them, I think) who objected, not to your actual meta-ethical views, but to the way that you vigorously denied that you were a "relativist"; and you misunderstood them/us as objecting to your theory itself (I think you maybe even threw in an accusation of not comprehending the logical subtleties of Löb's Theorem).

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans. Thus, it is automatically subject to the "chauvinism" objection with respect to e.g. Babyeaters: we prefer one thing, they prefer another -- why should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer". But people find that answer unpalatable -- and one reason they might is because it would seem to imply that different human cultures should similarly run right over each other if they don't think they share the same values. Now, we may not like the term "relativism", but it seems to me that this "chauvinism" objection is one that you (and I) need to take at least somewhat seriously.

Comment author: Nick_Tarleton 31 January 2010 09:48:51PM *  3 points [-]

Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It's what people are automatically going to think you're talking about if you go around shouting "Yes Virginia, there are moral facts after all!"

Agreed that this is important. (ETA: I now think Eliezer is right about this.)

Meanwhile, the general public has a term for the view that you and I share: they call it "moral relativism".

We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands "moral relativism" to exclude (b), and I don't think there's any short term in common (not philosophical) usage that includes the conjunction of (a) and (b).

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans.

Eliezer doesn't define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans. See No License to be Human.

Comment author: komponisto 31 January 2010 10:05:26PM 4 points [-]

We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands "moral relativism" to exclude (b)

I think that's uncharitable to the public: surely everyone should admit that people can be mistaken, on occasion, about what they themselves think. A view that holds that nothing that comes out of a person's mouth can ever be wrong is scarcely worth discussing.

Eliezer doesn't define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans.

The fact that this computation just so happens to be instantiated by humans and nothing else in the known universe cannot be a coincidence; surely there's a causal relation between humans' instantiating the computation and Eliezer's referring to it.

Comment author: Alicorn 31 January 2010 10:30:16PM 9 points [-]

surely everyone should admit that people can be mistaken, on occasion, about what they themselves think.

This is far from uncontroversial in the general population.

Comment author: Eliezer_Yudkowsky 31 January 2010 11:05:52PM 4 points [-]

surely there's a causal relation between humans' instantiating the computation and Eliezer's referring to it.

Of course there's a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it's not appealed to as the moral justification. We shouldn't save babies because-morally it's the human thing to do but because-morally it's the right thing to do. What physically causes us to save the babies is a combination of the logical fact that saving babies is the right thing to do, and the physical fact that we are compelled by those sorts of logical facts. What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness - in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness. The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like "If we wanted to eat babies, then that would be the right thing to do."

Comment author: Tyrrell_McAllister 31 January 2010 11:36:40PM *  15 points [-]

The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like "If we wanted to eat babies, then that would be the right thing to do."

The moral relativist who says that doesn't really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.

For the relativist, an algorithm is a morality when it is a logic that compels an agent (in the limit of reflection, etc.). For you, an algorithm is a morality when it is the logic that in fact compels human agents (in the limit of reflection, etc.). That is why your view is a kind of relativism. You just say "morality" where other relativists would say "the morality that humans in fact have".

You also seem more optimistic than most relativists that all non-mutant humans implement very nearly the same compulsive logic. But other relativists admit that this is a real possibility, and they wouldn't take it to mean that they were wrong to be relativists.

If there is an advantage to the relativists' use of "morality", it is that their use doesn't prejudge the question of whether all humans implement the same compulsive logic.

Comment author: Rain 09 February 2010 08:31:12PM *  1 point [-]

I agree with this comment and feel that it offers strong points against Eliezer's way of talking about this issue.

Comment author: byrnema 01 February 2010 12:01:29AM *  4 points [-]

I agree that it seems as though I just don't understand. Sometimes, I feel perched on the edge of understanding, feel a little dizzy, and decide I don't understand.

I don't claim to be representative in any way, but my stumbling block seems to be this idea about how saving babies is right. Since I don't feel strongly that saving babies is "right", whenever you write, "saving babies is the right thing to do", I translate this as, "X is the right thing to do" where X is something that is right, whatever that might mean. I leave that as a variable to see if it gets answered later.

Then you write, "What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness - in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness."

How is wrongness or rightness baked into a subject matter?

Comment author: komponisto 01 February 2010 01:37:58AM *  3 points [-]

Of course there's a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it's not appealed to as the moral justification

Of course it isn't, because we're doing meta-ethics here, and don't yet have access to the notion of "moral justification"; we're in the process of deciding which kinds of things will be used as "moral justification".

It's your metamorality that is human-dependent, not your morality; see my other comment.

Comment author: Eliezer_Yudkowsky 01 February 2010 01:44:33AM *  3 points [-]

Now I'm confused. I don't understand how you can have preferences that you use to decide what ought to count as a "moral justification" without already having a moral reference frame.

Since we don't have conscious access to our premises, and we haven't finished reflecting on them, we sometimes go around studying our own conclusions in an effort to discover what counts as a moral justification, but that's not like a philosopher of pure emptiness constructing justificationness from scratch and appeal to some mysterious higher criterion. (Bearing in mind that when someone offers me a higher criterion, it usually ends up looking pretty uninteresting.)

Comment author: komponisto 01 February 2010 03:53:06AM *  6 points [-]

I don't understand how you can have preferences that you use to decide what ought to count as a "moral justification" without already having a moral reference frame.

Well, consider an analogy from mathematical logic: when you write out a formal proof that 2+2 = 4, at some point in the process, you'll end up concatenating two symbols here and two symbols there to produce four symbols; but this doesn't mean you're appealing to the conclusion you're trying to prove in your proof; it just so happens that your ability to produce the proof depends on the truth of the proposition.

Similarly, when an AI with Morality programmed into it computes the correct action, it just follows the Morality algorithm directly, which doesn't necessarily refer explicitly to "humans" as such. But human programmers had to program the Morality algorithm into the AI in the first place; and the reason they did so is because they themselves were running something related to the Morality algorithm in their own brains. That, as you know, doesn't imply that the AI itself is appealing to "human values" in its actual computation (the Morality program need not make such a reference); but it does imply that the meta-ethical theory used by the programmers compelled them to (in an appropriate sense) look at their own brains to decide what to program into the AI.

Comment author: TheAncientGeek 29 May 2014 03:01:09PM *  1 point [-]

That would be epistemic preferences. It's epistemology (and allied fields, like logic and rationality) that really runs into circularity problems.

Comment author: ciphergoth 31 January 2010 11:24:23PM 3 points [-]

Right, so a moral relativist is a kind of moral absolutist who believes that the One True Moral Rule is that you must do what is the collective moral will of the species you're part of.

Comment author: Eliezer_Yudkowsky 31 January 2010 11:39:49PM 5 points [-]

Yup, and so long as I'm going to be a moral absolutist anyway, why be that sort of moral absolutist?

Comment author: Eliezer_Yudkowsky 31 January 2010 10:52:38PM 4 points [-]

Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view.

No, it's not. The naive, common-sense human view is that sneaking into Jane's tent while she's not there and stealing her water-gourd is "wrong". People don't end up talking about transcendent ineffable stuff until they have pursued bad philosophy for a considerable length of time. And the conclusion - that you can make murder right without changing the murder itself but by changing a sort of ineffable stuff that makes the murder wrong - is one that, once the implications are put baldly, squarely disagrees with naive moralism. It is an attempt to rescue a naive misunderstanding of the subject matter of mind and ontology, at the expense of naive morality.

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans

I agree that this constitutes relativism, and deny that I am a relativist.

why should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer".

See above. The correct answer is "Because children shouldn't die, they should live and be happy and have fun." Note the lack of any reference to humans - this is the sort of logical fact that humans find compelling, but it is not a logical fact about humans. It is a physical fact that I find that logic compelling, but this physical fact is not, itself, the sort of fact that I find compelling.

This is the part of the problem which I find myself unable to explain well to the LessWrongians who self-identify as moral non-realists. It is, admittedly, more subtle than the point about there not being transcendent ineffable stuff, but still, there is a further point and y'all don't seem to be getting it...

Comment author: komponisto 01 February 2010 01:15:37AM *  8 points [-]

I agree that this constitutes relativism, and deny that I am a relativist.

It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.

I have the same feeling, from the other direction.

I feel like I completely understand the error you're warning against in No License To Be Human; if I'm making a mistake, it's not that one. I totally get that "right", as you use it, is a rigid designator; if you changed humans, that wouldn't change what's right. Fine. The fact remains, however, that "right" is a highly specific, information-theoretically complex computation. You have to look in a specific, narrow region of computation-space to find it. This is what makes you vulnerable to the chauvinism charge; there are lots of other computations that you didn't decide to single out and call "right", and the question is: why not? What makes this one so special? The answer is that you looked at human brains, as they happen to be constituted, and said, "This is a nice thing we've got going here; let's preserve it."

Yes, of course that doesn't constitute a general license to look at the brains of whatever species you happen to be a member of to decide what's "right"; if the Babyeaters or Pebblesorters did this, they'd get the wrong answer. But that doesn't change the fact that there's no way to convince Babyeaters or Pebblesorters to be interested in "rightness" rather than babyeating or primaility. It is this lack of a totally-neutral, agent-independent persuasion route that is responsible for the fundamentally relative nature of morality.

And yes, of course, it's a mistake to expect to find any argument that would convince every mind, or an ideal philosopher of perfect emptiness -- that's why moral realism is a mistake!

Comment author: ciphergoth 31 January 2010 11:19:20PM *  1 point [-]

I promise to take it seriously if you need to refer to Löb's theorem in your response. I once understood your cartoon guide and could again if need be.

If we concede that when people say "wrong", they're referring to the output of a particular function to which we don't have direct access, doesn't the problem still arise when we ask how to identify what function that is? In order to pin down what it is that we're looking for, in order to get any information about it, we have to interview human subjects. Out of all the possible judgment-specifying functions out there, what's special about this one is precisely the relationship humans have with it.

Comment deleted 31 January 2010 09:38:17PM *  [-]
Comment author: Eliezer_Yudkowsky 31 January 2010 11:02:26PM 1 point [-]

I truly and honestly say to you, Roko, that while you got most of my points, maybe even 75% of my points, there seems to be a remaining point that is genuinely completely lost on you. And a number of other people. It is a difficult point. People here are making fun of my attempt to explain it using an analogy to Löb's Theorem, as if that was the sort of thing I did on a whim, or because of being stupid. But... my dear audience... really, by this point, you ought to be giving me the benefit of the doubt about that sort of thing.

Also, it appears from the comment posted below and earlier that this mysterious missed point is accessible to, for example, Nick Tarleton.

It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.

Comment author: SilasBarta 31 January 2010 11:21:49PM *  13 points [-]

Well, you did make a claim about what is the right translation when speaking to babyeaters:

we and they are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating

But there has to be some standard by which you prefer the explanation "we mistranslated the term 'morality'" to "we disagree about morality", right? What is that? Presumably, one could make your argument about any two languages, not just ones with a species gap:

"We and Spaniards are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to protect freedoms, not restrict them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the Spaniards would agree with us about what is moral, we would agree with them about what is familydutyhonoring."

ETA: A lot of positive response to this, but let me add that I think a better term in the last place would be something like "morality-to-Spaniards". The intuition behind the original phrasing was to show how you can redefine Spanish standards of morality to be "not-morality", but rather, just "things that we place different priority on".

But it's clearly absurd there: the correct translation of ética is not "ethics-to-Spaniards", but rather, just plain old "ethics". And the same reasoning should apply to the babyeater case.

Comment author: gregconen 02 February 2010 01:38:02PM *  4 points [-]

To go a step further, moral disagreement doesn't require a language barrier at all.

"We and abolitionists are talking about a different subject matter and it is an error of the "computer translation programs" that the word comes out as "morality" in both cases. Morality is about how to create a proper relationship between races, everyone knows that and they happen to be right. If we could get past difficulties of the "translation", the abolitionists would agree with us about what is moral, we would agree with them about what is abolitionism."

Comment author: timtyler 05 February 2010 07:29:48PM *  1 point [-]

Eliezer uses the word "should" in what seems to me to be a weird and highly counter-intuitive way.

Multiple people have advised him about this - but he seems to like his usage.

Comment author: Unknowns 01 February 2010 04:17:42AM *  -1 points [-]

As it is commonly understood, Eliezer is definitely NOT a moral relativist.

Comment author: komponisto 01 February 2010 04:22:14AM *  2 points [-]

(Downvoted for denying my claim without addressing my argument. That's very annoying.)

Comment author: MichaelBishop 03 February 2010 05:38:42PM *  0 points [-]

Re: denying a claim without addressing the argument: IMO, such comments are acceptable when the commenter is of high enough status in the community. Obviously I'd prefer they address the argument, but I consider myself better off just knowing that certain people agree or disagree.

ADDED: Note, I am merely stating my personal preference, not insisting that my personal preference become normatively binding on LW. I also happen to agree with Komponisto's judgment that Unknowns previous comment was unhelpful.

Comment author: komponisto 03 February 2010 05:50:36PM *  4 points [-]

I disagree.

ETA: Note that an implication of what you said is that replying in that manner constitutes an assertion of higher status than the other person; this is exactly why it is irritating.

Comment author: MichaelBishop 03 February 2010 06:31:13PM -1 points [-]

I think assertions of higher status can sometimes be characterized as justifiable or even desirable. Eliezer does this all the time. The alternative to "stating disagreement while failing to address the details of the argument," is often to ignore the comment altogether. (Also, see edit to my previous comment before replying further.)

Comment author: komponisto 03 February 2010 06:37:57PM 0 points [-]

Well, if you agree with me about that particular comment, maybe it would have been preferable to wait for an occasion where you actually disagreed with my judgment to make this point?

(This would help cut down on "fake disagreements", i.e. disagreements arising out of misunderstanding.)

Comment author: MichaelBishop 03 February 2010 06:49:53PM 1 point [-]

Agreed.

Comment author: MrHen 03 February 2010 06:06:27PM 0 points [-]

I think the manner in which komponisto was calling Eliezer a moral relativist deserves a more thorough answer. If I make an off-handed remark and someone disagrees with me, I find an off-handed remark fair. If I spend three paragraphs and get, "No," as a response I will be annoyed.

In this case, I side with komponisto.

Comment author: TheAncientGeek 29 May 2014 03:12:29PM *  0 points [-]

Not individual-level relativism, or not group-level relativism?

Comment author: Kevin 01 February 2010 04:56:11AM *  -1 points [-]

As I understand the common understanding, "moral relativist" commonly means someone who doesn't believe in absolute morality, which I think describes pretty much all of us.

Comment author: blacktrance 29 May 2014 04:26:46PM *  0 points [-]

As I understand it, relativism doesn't mean "refers explicitly to particular agents". Suppose there's a morality-determining function that takes an agent's terminal values and their psychology/physiology and spits out what that agent should do. It would spit different things out for different agents, and even more different things for different kinds of agents (humans vs babyeaters). Nevertheless, this would not quite be moral relativism because it would still be the case that there's an objective morality-determining function that is to be applied to determine what one should do. Moral relativism would not merely say that there's no one right way one should act, it would also say that there's no one right way to determine how one should act.

Comment author: TheAncientGeek 29 May 2014 04:30:00PM *  0 points [-]

It's not objective, because its results differ with differing terminal values. An objective morality machine would tell you what you should do, not tell you how to satisfy your values. In other words, morality isn't decision theory.

Comment author: blacktrance 29 May 2014 04:42:01PM 0 points [-]

An objective morality machine would tell you what you should do, not tell you how to satisfy your values

Why must the two be mutually exclusive? Why can't morality be about satisfying your values? One could say that morality properly understood is nothing more than the output of decision theory, or that outputs of decision theory that fall in a certain area labeled "moral questions" are morality.

Comment author: komponisto 29 May 2014 07:24:30PM 1 point [-]

Why can't morality be about satisfying your values?

Because that isn't how the term "morality" is typically used by humans. The "morality police" found in certain Islamic countries aren't life coaches. The Ten Commandments aren't conditional statements. When people complain about the decaying moral fabric of society, they're not talking about a decline in introspective ability.

Inherent to the concept of morality is the external imposition of values. (Not just decisions, because they also want you to obey the rules when they're not looking, you see?) Sociologically speaking, morality is a system for getting people to do unfun things by threatening ostracization.

Decision theory (and meta-decision-theory etc.) does not exist to analyze this concept (which is not designed for agents); it exists to replace it.

Comment author: bogus 31 May 2014 02:59:32PM *  0 points [-]

Because that isn't how the term "morality" is typically used by humans. The "morality police" found in certain Islamic countries aren't life coaches. The Ten Commandments aren't conditional statements. ... Inherent to the concept of morality is the external imposition of values.

Morality is about all of these things, and more besides. Although "outer" morality as embodied in moral codes and moral exemplars is definitely important, if there were no inner values for humans to care about in the first place, no one would be going around imposing them on others, or even debating them in any way.

And it is a fact about the world that most basic moral values are shared among human societies. Morality may or may not be objective, but it is definitely intersubjective in a way that looks 'objective' to the casual observer.

Comment author: blacktrance 29 May 2014 08:39:47PM *  0 points [-]

"Morality" is used by humans in unclear ways and I don't know how much can be gained from looking at common usage. It's more sensible to look at philosophical ethical theories rather than folk morality - and there you'll find that moral internalism and ethical egoism are within the realm of possible moralities.

Comment author: TheAncientGeek 29 May 2014 08:34:18PM *  0 points [-]

Morality done right is about the voluntary and mutual adjustment of values (or rather, of the actions expressing them).

Morality done wrong can go two ways. One failure mode is hedonism, where the individual takes no notice of the preferences of others; the other is authoritarianism, where "society" (rather, its representatives) imposes values that no one likes or has a say in.

Comment author: TheAncientGeek 29 May 2014 04:53:09PM *  -1 points [-]

Note the word objective.

Comment author: blacktrance 29 May 2014 05:05:21PM *  0 points [-]

An objective morality machine would tell you the One True Objective Thing TheAncientGeek Should Do, given your values, but this thing need not be the same as The One True Objective Thing Blacktrance Should Do. The calculations it performs are the same in both cases (which is what makes it objective), but the outputs are different.

Comment author: TheAncientGeek 29 May 2014 06:22:11PM 0 points [-]

You are misusing "objective". How does your usage differ from telling me what I should do subjectively? How can true-for-me-but-not-for-you clauses fail to indicate subjectivity? How can it be coherent to say there is one truth, only it is different for everybody?

Comment author: Vaniver 29 May 2014 06:44:56PM 3 points [-]

A person's height is objectively measurable; that does not mean all people have the same height.

Comment author: TheAncientGeek 29 May 2014 07:01:33PM 0 points [-]

"True about person P" is objective.

"True for person P about X" is subjective.

Subjectivity is multiple truths about one thing, i.e. multiple claims about one thing, which are indexed to individuals, and which would be contradictory without the indexing.

Comment author: blacktrance 29 May 2014 06:44:43PM 0 points [-]

Saying it's true-for-me-but-not-for-you conflates two very different things: truth being agent-relative, and descriptive statements about agents being true or false depending on the agent they're referring to. "X is 6 feet tall" is true when X is someone who's 6 feet tall and false when X is someone who's 4 feet tall, and in neither case is it subjective, even though the truth-value depends on who X is. Morality is similar: "X is the right thing for TheAncientGeek to do" is an objectively true (or false) statement, regardless of who's evaluating you. Encountering "X is the right thing to do if you're Person A and the wrong thing to do if you're Person B" and concluding that morality is subjective is the same sort of mistake as encountering the statement "Person A is 6 feet tall and Person B is not 6 feet tall" and concluding that height is subjective.

Comment author: TheAncientGeek 29 May 2014 07:12:13PM 0 points [-]

See my other reply.

Indexing statements about individuals to individuals is harmless. Subjectivity comes in when you index statements about something else to individuals.

Morally relevant actions are actions which potentially affect others.

Your morality machine is subjective because I don't need to feed in anyone else's preferences, even though my actions will affect them.

Comment author: komponisto 29 May 2014 07:10:41PM *  0 points [-]

Morality is similar - "X is the right thing for TheAncientGeek to do" is an objectively true (or false) statement, regardless of who's evaluating you.

Not so! Rather, "X is the right thing for TheAncientGeek to do given TheAncientGeek's values" is an objectively true (or false) statement. But "X is the right thing for TheAncientGeek to do" tout court is not; it depends on a specific value system being implicitly understood.

Comment author: Vladimir_Nesov 01 February 2010 09:02:31AM *  0 points [-]

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans.

Unfortunately, it's not that easy. An agent, given by itself, doesn't determine preference. It probably does so to a large extent, but not entirely. There is no subject matter of "preference" in general. "Human preference" is already a specific question that someone has to state, that doesn't magically appear from a given "human". A "human" might only help (I hope) to pinpoint the question precisely, if you start in the general ballpark of what you'd want to ask.

I suspect that "Vague statement of human preference"+"human" is enough to get a question of "human preference", and the method of using the agent's algorithm is general enough for e.g. "Vague statement of human preference"+"babyeater" to get a precise question of "babyeater preference", but it's not a given, and isn't even expected to "work" for more alien agents, who are compelled by completely different kinds of questions (not that you'd have a way of recognizing such "error").

The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself. What humans are is not the info that compels you to define human preference in a particular way, although what humans are may be used as a tool in the definition of human preference, simply because you can pull the right levers and point to the chunks of info that go into the definition you choose.

[W]hy should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer"

That's not a justification. They may turn out to do something right, where you were mistaken, and you'll be compelled to correct.

Comment author: komponisto 01 February 2010 11:17:00AM 0 points [-]

The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself.

Yes.