Kutta comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong

25 Post author: Wei_Dai 15 January 2010 12:26AM

Comment author: Kutta 15 January 2010 01:45:00PM *  2 points [-]

> creating a bunch of new people whose preferences are more easily satisfied, or just use its super intelligence to persuade us to be more satisfied with the universe as it is.

> Should the whole future of the universe be shaped only by the preferences of those who happen to be alive at some arbitrary point in time?

Well, making people's preferences coincide with the universe by adjusting those preferences is not possible if people prefer that their preferences not be adjusted to fit the universe; it is possible only to the extent that people currently prefer to be changed.

Changing people, or caring about future humans or other entities, is basically second-guessing what current people care about. You do not need to manually add external factors to the utility function out of worry that these things "might be left out" of it. Anything that should be considered is already in the current CEV: people already care deeply about their future selves and about future people, and they care about some non-human beings, such as animals.

Adding anything else to the equation seems to me just as arbitrary as picking the utility function of a random paperclip maximizer and maximizing that.