Manfred comments on Where do selfish values come from? - Less Wrong

Post author: Wei_Dai, 18 November 2011 11:52PM




Comment author: Vladimir_Nesov, 19 November 2011 12:47:17AM, 3 points

It seems that in this post, by "selfish" you mean something like "not updateless" or "not caring about counterfactuals". A meaning closer to the usual sense of the word would be "caring about the welfare of a particular individual" (including counterfactual instances of that individual, etc.), which seems perfectly amenable to being packaged as a reflectively consistent agent (one that is not the individual in question) with a world-determined utility function.

(A reference to usage in Stuart's paper maybe? I didn't follow it.)

Comment author: Manfred, 19 November 2011 06:20:10AM, 2 points

The usage in Stuart's posts on here just meant a certain way of calculating expected utilities: selfish agents used only their own future utility when calculating expected utility, while unselfish agents mixed in other people's utilities. To make this a bit more robust to redefinitions of what's in your utility function, we could say that a purely selfish agent's expected utility doesn't change if actions stay the same but other people's utilities change.

But this is all basically within option (2).
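The definition above can be sketched in a few lines of Python. This is a toy model under stated assumptions: the world model, the utilities, and the mixing weights are all made up for illustration, not taken from Stuart's posts.

```python
# Sketch: a "selfish" agent (in Stuart's sense) computes expected utility
# from its own utility function only; an unselfish agent mixes in other
# people's utilities. All names and numbers here are illustrative.

def selfish_expected_utility(action, my_utility, world_model):
    # Uses only the agent's own utility over the possible outcomes.
    return sum(p * my_utility(o) for o, p in world_model(action))

def unselfish_expected_utility(action, my_utility, others, weights, world_model):
    # Mixes other people's utilities into the valuation of each outcome.
    def mixed(o):
        return my_utility(o) + sum(w * u(o) for w, u in zip(weights, others))
    return sum(p * mixed(o) for o, p in world_model(action))

# Toy setup (assumptions): a fair coin flip; my utility favors heads.
world = lambda action: [("heads", 0.5), ("tails", 0.5)]
mine = lambda o: 1.0 if o == "heads" else 0.0
other_v1 = lambda o: 5.0    # one version of another person's utility
other_v2 = lambda o: -5.0   # the same person's utility, changed

s = selfish_expected_utility("bet", mine, world)
u1 = unselfish_expected_utility("bet", mine, [other_v1], [0.5], world)
u2 = unselfish_expected_utility("bet", mine, [other_v2], [0.5], world)
# The robustness test from above: the action is the same, the other
# person's utility changed, and s is unaffected while u1 and u2 differ.
```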

Comment author: buybuydandavis, 20 November 2011 10:55:45AM, 0 points

No one can mix another person's actual utility function into their own. You can mix in your estimate of it. You can mix in your estimate of what you think it should be. But the actual utility function of another person is in that other person, and not in you.

Comment author: Eugine_Nier, 20 November 2011 08:16:18PM, 1 point

> No one can mix another person's actual utility function into their own.

You can mix a pointer to it into your own. To see that this is different from mixing in your estimate, consider what you would do if you found out your estimate was mistaken.
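The pointer-versus-estimate distinction can be made concrete with a small sketch. Everything here is a hypothetical illustration (the names `Bob`, the fruit utilities, and so on are assumptions, not anything from the thread): a frozen estimate stays wrong when your beliefs were wrong, while a pointer always evaluates the other person's actual utility function, wherever it lives.

```python
# Sketch: "mixing in a pointer" to another agent's utility function vs.
# "mixing in an estimate" of it. All names here are illustrative.

class World:
    def __init__(self, bobs_true_utility):
        # Bob's actual utility function lives in the world (in Bob),
        # not inside the agent doing the valuing.
        self.bobs_true_utility = bobs_true_utility

def estimate_based_utility(outcome, my_estimate_of_bob):
    # A frozen estimate: if it was mistaken, this value never corrects
    # itself; you would have to swap in a new estimate by hand.
    return my_estimate_of_bob(outcome)

def pointer_based_utility(outcome, world):
    # A pointer: evaluates Bob's *actual* utility function, whatever
    # the valuing agent currently believes it is.
    return world.bobs_true_utility(outcome)

# Suppose I initially believe Bob likes apples...
my_estimate = lambda o: 1.0 if o == "apple" else 0.0
# ...but in fact Bob likes oranges.
world = World(lambda o: 1.0 if o == "orange" else 0.0)
# On learning the estimate was mistaken, the pointer-based value is
# already correct, while the estimate-based value is still wrong.
```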

Comment author: Manfred, 20 November 2011 06:27:18PM, 1 point

Good point, if not totally right.

In general, you can have anything you please in your utility function. I could care about the number of ducks in the pond near where I grew up, even though I can't see it. And by caring about the number of ducks in the pond, I don't just mean my perception of it: I don't want to maximize how many ducks I think are in the pond, or I would just drug myself. However, you're right that when calculating an "expected utility" (that is, your best guess at the time), you don't usually have perfect information about other people's utility functions, just as I wouldn't have perfect information about the number of ducks in the pond, and so you have to use an estimate.

The reason this distinction didn't matter in Stuart's articles on the Sleeping Beauty problem is that the "other people" were actually copies of Sleeping Beauty, so you knew their utility functions were the same.