Kutta comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (104)
Well, making people's preferences coincide with the universe by adjusting those preferences is not possible if people prefer that their preferences not be adjusted to the universe — or it is possible only to the extent that people currently prefer to be changed.
Changing people, or caring about future humans or other entities, is basically second-guessing what current people care about. You do not need to manually add external factors to the utility function out of worry that these things "might be left out" of it. Anything that should be considered is already in the current CEV: people already care deeply about their future selves and about future people, and they care about some non-human beings, such as animals.
Adding anything else to the equation seems to me just as arbitrary as picking the utility function of a random paperclip-maximizing AI and trying to maximize it.