
jimrandomh comments on Another Argument Against Eliezer's Meta-Ethics - Less Wrong Discussion

9 Post author: Wei_Dai 05 February 2011 12:54AM


Comments (35)


Comment author: jimrandomh 05 February 2011 04:42:58AM 1 point

But how do you specify an idealized version of yourself that reasons about morality without using words like "moral", "right" and "should"?

You don't use those words; instead, you refer to your brain as a whole, which already contains those concepts, and you specify extrapolation operations, such as the passage of time, that it might go through. (Note that no one has nailed down exactly what the ideal extrapolation procedure would be, although there is some intuition about what is and isn't allowed. There is an implied claim that different extrapolation procedures will tend to converge on similar results, although this is unlikely to hold for every moral question, or for quantitative moral questions at high precision.)
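As a loose sketch (all names and the toy data here are mine, not from the comment), the convergence claim can be phrased as: run several different extrapolation procedures on the same starting values and check whether they agree on a given question. On coarse questions they should; on fine quantitative questions they may not.

```python
def converges(procedures, start_values, question):
    """Return True if every extrapolation procedure yields the same
    verdict on `question` when applied to the same starting values."""
    verdicts = {proc(start_values)[question] for proc in procedures}
    return len(verdicts) == 1

# Two toy "extrapolation procedures": one leaves the values alone,
# the other perturbs a quantitative judgment slightly.
identity = lambda values: dict(values)

def slightly_different_reflection(values):
    out = dict(values)
    out["fair tax rate"] = 0.31  # small quantitative disagreement
    return out

values = {"is cruelty wrong?": True, "fair tax rate": 0.30}
procedures = [identity, slightly_different_reflection]
```

Here `converges(procedures, values, "is cruelty wrong?")` holds, while `converges(procedures, values, "fair tax rate")` fails, mirroring the caveat that convergence is expected for coarse questions but not at high precision.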

Comment author: Wei_Dai 05 February 2011 04:49:55AM 4 points

I meant:

how do you specify an (idealized version of yourself that reasons about morality without using words like "moral", "right" and "should")?

But I think you interpreted me as:

how do you specify an (idealized version of yourself that reasons about morality) without using words like "moral", "right" and "should"?

Comment author: jimrandomh 05 February 2011 05:00:44AM 2 points

Indeed, I did misinterpret it that way. To answer the other interpretation of that question,

how do you specify an (idealized version of yourself that reasons about morality without using words like "moral", "right" and "should")?

The answer is that I don't think there's any problem with your idealized self using those words. Sure, it's self-referential, but self-referential in a way that makes stating that X is moral equivalent to returning, and asking whether Y is moral equivalent to recursing on Y. This is no different from an ordinary person thinking about a decision they're going to make: the statements "I decide X" and "I decide not-X" are both tautologically true, but this is not a contradiction, because these are performatives, not declaratives.
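The returning/recursing analogy above can be made concrete. In this hypothetical sketch (the function, its arguments, and the toy data are mine, not from the comment), asserting "X is moral" corresponds to a call that returns a verdict, while asking "is Y moral?" corresponds to a recursive call on the sub-question; the self-reference is benign as long as the chain of questions eventually grounds out.

```python
def is_moral(question, judgments, depth=0, max_depth=10):
    """Recursively evaluate a moral question.

    `judgments` maps each question either to a verdict (True/False) --
    the base case, analogous to "returning" -- or to a sub-question to
    consider first, analogous to "recursing on Y".
    """
    if depth > max_depth:  # guard against ungrounded self-reference
        raise RecursionError("no grounding judgment found")
    answer = judgments[question]
    if isinstance(answer, bool):  # stating "X is moral": return the verdict
        return answer
    return is_moral(answer, judgments, depth + 1, max_depth)  # recurse

# Toy example: one question defers to a sub-question, which the
# agent judges directly.
toy_judgments = {
    "is keeping promises moral?": "would my idealized self endorse it?",
    "would my idealized self endorse it?": True,
}
```

The `max_depth` guard reflects the one way such self-reference could go wrong: a chain of questions that never reaches a direct judgment, i.e. recursion with no base case.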