TheOtherDave comments on Why didn't people (apparently?) understand the metaethics sequence? - Less Wrong

Post author: ChrisHallquist, 29 October 2013 11:04PM (12 points)




Comment author: lukeprog, 30 October 2013 12:37:44AM (17 points)

I remain confused by Eliezer's metaethics sequence.

Both there and in By Which It May Be Judged, I see Eliezer successfully arguing that (something like) moral realism is possible in a reductionist universe (I agree), but he also seems to want to say that in fact (something like) moral realism actually obtains, and I don't understand what the argument for that is. In particular, one way (the way?) his metaethics might spit out something that looks a lot like moral realism is if there is strong convergence of values among (human-ish?) agents upon receiving better information, having time enough to work out contradictions in their values, etc. But the "strong convergence of values" thesis hasn't really been argued, so I remain unclear as to why Eliezer finds it plausible.

Basically, I read the metaethics sequence as asserting both things but arguing only for the first.

But I'm not sure about this. Perhaps because I was already familiar with the professional metaethics vocabulary when I read the sequence, I found Eliezer's vocabulary for talking about positions in metaethics confusing.

I meant to explore these issues in a vocabulary I find more clear, in my own metaethics sequence, but I still haven't got around to it. :(

Comment author: komponisto, 01 November 2013 11:55:20AM (4 points)

(I'm putting this as a reply to your comment because your comment is what made me think of it.)

In my view, Eliezer's "metaethics" sequence, despite its name, argues for his ethical theory, roughly

(1) morality[humans] = CEV[humans]

(N.B.: this is my terminology; Eliezer would write "morality" where I write "morality[humans]"), without ever arguing for his (implied) metaethical theory, which is something like

(2) for all X, morality[X] = CEV[X].

Worse, much of his effort is spent arguing against propositions like

(3) (1) => for all X, morality[X] = CEV[humans] (The Bedrock of Morality: Arbitrary?)

and

(4) (1) => morality[humans] = CEV["humans"] (No License To Be Human)

which, I feel, are beside the point.

Comment author: TheOtherDave, 01 November 2013 03:02:58PM (2 points)

I would be surprised if Eliezer believed (1) or (2), as distinct from believing that CEV[X] is the most viably actionable approximation of morality[X] (using your terminology) we've come up with thus far.

This reminds me somewhat of the difference between believing, on the one hand, that 2013 cryonics technology reliably preserves the information content of a brain, and believing, on the other, that 2013 cryonics technology has a higher chance of preserving that information than burial or cremation does.

I agree that he devotes a lot of time to arguing against (3), though I've always understood that as a reaction to the "but a superintelligent system would be smart enough to just figure out how to behave ethically and then do it!" crowd.

I'm not really sure what you mean by (4).

Comment author: komponisto, 02 November 2013 02:24:24AM (3 points)

I would be surprised if Eliezer believed (1) or (2), as distinct from believing that CEV[X] is the most viably actionable approximation of morality[X] (using your terminology) we've come up with thus far.

I didn't intend to distinguish that finely.

I'm not really sure what you mean by (4).

(4) is intended to mean that if we alter humans to have a different value system tomorrow, we would also be changing what we mean (today) by "morality". It's the negation of the assertion that moral terms are rigid designators, and is what Eliezer is arguing against in No License To Be Human.

Comment author: TheOtherDave, 02 November 2013 01:08:41PM (1 point)

Ah, gotcha. OK, thanks for clarifying.