fubarobfusco comments on CEV: a utilitarian critique - Less Wrong
Of course we can morally compare types A and B, just as we can morally compare an AI whose goal is to turn the world into paperclips and one whose goal is to make people happy.
However, rather than "objectively better", it would be clearer to say "more in line with our morals" or some such. It's not as if our morals came from nowhere, after all.
See also: "The Bedrock of Morality: Arbitrary?"
I don't think we'd be more clear by saying this, I think we'd be (at least partially) wrong.
Let's compare two worlds: World1 contains a population of pigs that are all constantly superhappy. World2 contains a population of pigs that are all constantly supermiserable. Clearly, World1 is objectively better than World2. If some morals deny this, they are wrong.
Things can only be good or bad for conscious beings (not for rocks, for example). So insofar as the world takes the form of consciousness that gets what is good for it, it is objectively the case that something good has occurred in, and for, the world.