jimmy comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong

Post author: Wei_Dai 15 January 2010 12:26AM




Comment author: jimmy 15 January 2010 04:56:58AM 6 points

> The AI is now reflectively consistent, but is this the right outcome?

I'd say so.

I want the AI to maximize my utility, and not dilute its optimization power with anyone else's preferences (by definition). Of course, to the extent that I care about others, they will get some weight under my utility function, but any more than that is not something I'd want.

Anything else is just cooperation, which is great, since it greatly increases the chance of the AI working, and even more so the chance of it working for you. The group of all people the designers can easily trade with is the right group to average over.

The group of people alive at the time is the easiest group to trade with, but there are ways of trading with the dead, and there has been talk about trading with other possible worlds.