Wei_Dai comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong

Post author: Wei_Dai 15 January 2010 12:26AM · 25 points




Comment author: Wei_Dai 25 January 2010 08:56:56PM · 1 point

I think an FAI's values would reflect the programmers' values (unless it turns out there is Objective Morality or something else unexpected). My understanding now is that if Robin were the FAI's programmer, the weights he would give to other people in its utility function would depend on how much they helped him create the FAI (and, for people who didn't help, on how much the helpers care about them).

Comment author: denisbider 25 January 2010 09:04:45PM · 1 point

Sounds plenty selfish to me. Indeed, no different from might-is-right.

Comment author: Wei_Dai 27 January 2010 01:11:46AM · 3 points

> Sounds plenty selfish to me. Indeed, no different from might-is-right.

Instead of might-is-right, I'd summarize it as "might-and-the-ability-to-provide-services-to-others-in-exchange-for-what-you-want-is-right", and Robin would presumably emphasize the second part of that.

Comment author: Vladimir_Nesov 26 January 2010 10:17:36PM · 3 points

You can care a lot about other people no matter how much they help you, but for game-theoretic reasons you should help those who help you even more. This doesn't at all imply "selfishness".