
Manfred comments on Questions for Moral Realists

Post author: peter_hurford, 13 February 2013 05:44AM




Comment author: Manfred, 13 February 2013 07:19:18PM

I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.

I think you're mixing up CEV with morality. CEV is an instance of the strategy "cooperate with humans" in some sort of AI-building prisoner's dilemma. It gives the AI some preferences, and the only guarantee that those preferences will be good is that humans are similar.

There is "only one" "morality" (kinda) because when I say "this is right" I am executing a function, and functions are unique-ish. But Me.right can be different from You.right. You just happen to be wrong sometimes, because You.right isn't right, because when I say right I mean Me.right.

So that "good" from the first paragraph would be Me.good, not CEV.good.

Comment author: Qiaochu_Yuan, 13 February 2013 07:43:07PM

You don't think morality should just be CEV?

Comment author: Manfred, 13 February 2013 09:04:09PM

It is a factual statement that when I say something is "right," I don't mean CEV.right, I mean Me.right, and I'm not even particularly trying to approximate CEV.

Comment author: RomeoStevens, 14 February 2013 05:01:06AM

A "winning" CEV should result in people with wildly divergent moralities all being deliriously happy.