AlexMennen comments on Hacking the CEV for Fun and Profit - Less Wrong

52 points | Post author: Wei_Dai | 03 June 2010 08:30PM




Comment author: AlexMennen | 08 June 2010 02:04:46AM | 7 points

Simple solution: Build an FAI to optimize the universe according to your own utility function instead of humanity's average utility function. The two will be nearly the same anyway (remember, you were tempted to have the FAI use the average human utility function, so clearly you sincerely care about other people's wishes). And in the rare situations where the two diverge radically (like this one), your own utility function more closely tracks the intended purpose of an FAI.

Comment author: AlexMennen | 09 June 2010 10:30:23PM | 4 points

Here's what I've been trying to say: what you want an FAI to do is optimize the universe according to your utility function; that is the definition of your utility function. This will be very close to the average human utility function, because you care about what other people want. If you do not want the FAI to do things like punish the people you hate (and I assume you don't), then your utility function assigns great weight to the desires of other people, and an FAI with your utility function that did such a thing must have been misprogrammed. The only reason to use the average human utility function instead is TDT: if that is what you are going to work toward, people are more likely to support your work. But if you can convince them that, in expectation, your utility function will be closer to theirs than the average human's is, precisely because of situations like this one, then that should not be an issue.
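
To make the aggregation claim concrete, here is a minimal numerical sketch (mine, not anything from the thread): it models each utility function as a vector of values over possible outcomes and checks that an agent who puts most of its weight on others' desires ends up ranking outcomes almost exactly the way the population average does. The population size, outcome count, and 0.9 altruism weight are all illustrative assumptions.

    import random

    random.seed(0)

    N_PEOPLE, N_OUTCOMES = 1000, 200

    # Each person's "raw" (purely selfish) utility function, modeled as
    # one utility value per possible outcome.
    raw = [[random.gauss(0, 1) for _ in range(N_OUTCOMES)]
           for _ in range(N_PEOPLE)]

    def standardize(v):
        """Rescale a utility vector to mean 0 and standard deviation 1,
        so the mixing weights below are comparable across vectors."""
        n = len(v)
        m = sum(v) / n
        s = (sum((x - m) ** 2 for x in v) / n) ** 0.5
        return [(x - m) / s for x in v]

    # "Humanity's average utility function": the pointwise mean of
    # everyone's raw preferences over the outcomes.
    avg_human = standardize([sum(col) / N_PEOPLE for col in zip(*raw)])

    # "Your" utility function, per the comment's premise: mostly the
    # desires of other people, with some weight on your own raw
    # preferences. The 0.9 weight is an assumption, not a measurement.
    ALTRUISM = 0.9
    you = [(1 - ALTRUISM) * mine + ALTRUISM * others
           for mine, others in zip(standardize(raw[0]), avg_human)]

    def correlation(xs, ys):
        """Pearson correlation between two utility vectors."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    # With a high altruism weight, "your" utility function agrees with
    # the population average almost everywhere (prints a value near 1).
    print(correlation(you, avg_human))

The flip side, which the argument above leans on, is that on the few outcomes where the two vectors disagree sharply, the selfish term decides, and those are exactly the "weird situations" the parent comment is worried about.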