komponisto comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong

18 [deleted] 31 January 2010 08:20PM




Comment author: Eliezer_Yudkowsky 31 January 2010 11:05:52PM 4 points

surely there's a causal relation between humans' instantiating the computation and Eliezer's referring to it.

Of course there's a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it's not appealed to as the moral justification. We shouldn't save babies because-morally it's the human thing to do but because-morally it's the right thing to do. What physically causes us to save the babies is a combination of the logical fact that saving babies is the right thing to do, and the physical fact that we are compelled by those sorts of logical facts. What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness - in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness. The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like "If we wanted to eat babies, then that would be the right thing to do."

Comment author: komponisto 01 February 2010 01:37:58AM * 3 points

Of course there's a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it's not appealed to as the moral justification.

Of course it isn't, because we're doing meta-ethics here, and don't yet have access to the notion of "moral justification"; we're in the process of deciding which kinds of things will be used as "moral justification".

It's your metamorality that is human-dependent, not your morality; see my other comment.

Comment author: Eliezer_Yudkowsky 01 February 2010 01:44:33AM * 3 points

Now I'm confused. I don't understand how you can have preferences that you use to decide what ought to count as a "moral justification" without already having a moral reference frame.

Since we don't have conscious access to our premises, and we haven't finished reflecting on them, we sometimes go around studying our own conclusions in an effort to discover what counts as a moral justification, but that's not like a philosopher of pure emptiness constructing justificationness from scratch by appeal to some mysterious higher criterion. (Bearing in mind that when someone offers me a higher criterion, it usually ends up looking pretty uninteresting.)

Comment author: komponisto 01 February 2010 03:53:06AM * 6 points

I don't understand how you can have preferences that you use to decide what ought to count as a "moral justification" without already having a moral reference frame.

Well, consider an analogy from mathematical logic: when you write out a formal proof that 2+2 = 4, at some point in the process, you'll end up concatenating two symbols here and two symbols there to produce four symbols; but this doesn't mean you're appealing to the conclusion you're trying to prove in your proof; it just so happens that your ability to produce the proof depends on the truth of the proposition.
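The analogy can be made concrete (a minimal illustration, not part of the original comment): in a proof assistant such as Lean, the proof of 2 + 2 = 4 is carried out by the very symbol-manipulation whose correctness the theorem describes, yet the proof does not cite the theorem as a premise.

```lean
-- 2 + 2 = 4 holds by definitional unfolding of the successor function:
-- 2 = Nat.succ (Nat.succ 0), so 2 + 2 reduces step by step to 4.
-- The checker *performs* the addition; it does not assume the result.
example : 2 + 2 = 4 := rfl
```

Here `rfl` (reflexivity) succeeds precisely because the computation comes out as claimed, which is komponisto's point: the proof's production depends on the truth of the proposition without the proposition appearing among the premises.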

Similarly, when an AI with Morality programmed into it computes the correct action, it just follows the Morality algorithm directly, which doesn't necessarily refer explicitly to "humans" as such. But human programmers had to program the Morality algorithm into the AI in the first place; and the reason they did so is that they themselves were running something related to the Morality algorithm in their own brains. That, as you know, doesn't imply that the AI itself is appealing to "human values" in its actual computation (the Morality program need not make such a reference); but it does imply that the meta-ethical theory used by the programmers compelled them to (in an appropriate sense) look at their own brains to decide what to program into the AI.
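The structural point here can be sketched in code (a purely illustrative toy, with invented names; nothing in the thread specifies such an implementation): the deployed decision procedure contains no reference to humans, even though its parameters were fixed offline by consulting human judgments.

```python
# Toy sketch: a "Morality algorithm" whose runtime computation never
# inspects humans. The weights below stand in for whatever the
# programmers extracted from their own (human) moral judgments.
MORALITY_WEIGHTS = {"save_baby": 1.0, "eat_baby": -1.0}

def right_action(options):
    """Return the highest-scoring option.

    Note that this function mentions only actions and scores;
    'humans' appear nowhere in the computation itself.
    """
    return max(options, key=lambda a: MORALITY_WEIGHTS.get(a, 0.0))

print(right_action(["save_baby", "eat_baby"]))  # -> save_baby
```

The causal history of `MORALITY_WEIGHTS` runs through human brains; the function `right_action` does not, which is the distinction being drawn between the meta-ethical derivation and the object-level algorithm.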

Comment author: TheAncientGeek 29 May 2014 03:01:09PM * 1 point

Those would be epistemic preferences. It's epistemology (and allied fields, like logic and rationality) that really runs into circularity problems.