Manfred comments on Questions for Moral Realists - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think you're mixing up CEV with morality. CEV is an instance of the strategy "cooperate with humans" in some sort of AI-building prisoner's dilemma. It gives the AI some set of preferences, and the only guarantee that those preferences will be good is that humans are similar to one another.
There is "only one" "morality" (kinda) because when I say "this is right" I am executing a function, and functions are unique-ish. But Me.right can be different from You.right. You just happen to be wrong sometimes, because You.right isn't right — because when I say "right" I mean Me.right.
So that "good" from the first paragraph would be Me.good, not CEV.good.
You don't think morality should just be CEV?
It is a factual statement that when I say something is "right," I don't mean CEV.right; I mean Me.right, and I'm not even particularly trying to approximate CEV.
A "winning" CEV should result in people with wildly divergent moralities all being deliriously happy.