Stuart_Armstrong comments on Anthropic decision theory for selfish agents - Less Wrong

8 Post author: Beluga 21 October 2014 03:56PM




Comment author: Beluga 22 October 2014 03:47:02PM * 1 point

First scenario: there is no such gnome. The number of gnomes is also determined by the coin flip, so every gnome will see a human. If we then apply the reasoning from http://lesswrong.com/r/discussion/lw/l58/anthropic_decision_theory_for_selfish_agents/bhj7 , the result is that a gnome with a selfish human will agree to x < $1/2.
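A quick numerical sanity check of that break-even price (this is my reconstruction of the setup, not taken from the linked comment: heads creates one human, tails creates two, a ticket costs $x and pays $1 iff the coin landed tails, and the gnome, created only alongside a human, puts even odds on the coin):

```python
# Reconstruction (assumed setup): heads -> 1 human, tails -> 2 humans,
# gnomes are created only where humans are, so every gnome sees a human.
# The ticket costs x and pays $1 iff the coin landed tails.

def expected_gain_selfish(x):
    # The gnome assigns P(heads) = P(tails) = 1/2 and evaluates its
    # human's expected gain from buying the ticket.
    return 0.5 * (-x) + 0.5 * (1.0 - x)

# Break-even at x = 1/2: buying is advantageous exactly when x < $1/2.
```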

If the gnomes are created only after the coin flip, they are in exactly the same situation as the humans, and we cannot learn anything by considering them that we could not learn by considering the humans alone.

Instead, let's now make the gnome in the heads world hate the other human, if they don't have a human themselves. The result is that they will agree to any x < $1, since they are (initially) indifferent to what happens in the heads world: the potential gains, if they are the gnome with a human, are cancelled out by the potential losses, if they are the gnome without one.
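The cancellation argument can be checked numerically (again my reconstruction of the setup: two gnomes always exist, one per cell; heads puts a human in one cell, tails in both; the human-less gnome in the heads world values the human's loss as its own gain; the ticket costs $x and pays $1 iff tails):

```python
# Reconstruction (assumed setup): two gnomes, one per cell.
# Heads -> one human; the human-less gnome *hates* that human, so it
# counts the human's loss -x as a gain +x for itself.
# Tails -> both cells get a human; each gnome gains 1 - x if its human buys.

def expected_gain_hating_gnome(x):
    # Heads world: with probability 1/2 this gnome has the human (-x),
    # with probability 1/2 it is the hating, human-less gnome (+x).
    heads = 0.5 * (0.5 * (-x) + 0.5 * (+x))  # cancels to zero
    # Tails world: the gnome's own human gains 1 - x.
    tails = 0.5 * (1.0 - x)
    return heads + tails

# Positive for every x < $1, so the gnome agrees to any such price.
```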

What this shows is that "Conditional on me existing, the gnome's utility function coincides with mine" is not a sufficient condition for "I should follow the advice that the gnome would have precommitted to give".

What I propose instead is: "If, conditional on me existing, the gnome's utility function coincides with mine, and, conditional on me not existing, the gnome's utility function is constant, then I should follow the advice that the gnome would have precommitted to."

ETA: I'm speaking of indexical utility functions here. For indexicality-independent utility functions, such as total or average utilitarianism, the principle simplifies to: "If the gnome's utility function coincides with mine, then I should follow the advice that the gnome would have precommitted to."

Comment author: Stuart_Armstrong 22 October 2014 05:30:11PM 1 point

I'm still not clear why indexicality-independent utility functions are different from their equivalent indexical versions.

Comment author: Beluga 22 October 2014 08:02:29PM 1 point

I elaborated on this difference here. However, I don't think this difference is relevant for my parent comment. By indexical utility functions I simply mean selfishness, or "selfishness plus hating the other person if another person exists", while by indexicality-independent utility functions I mean total and average utilitarianism.