Beluga comments on Anthropic decision theory for selfish agents - Less Wrong
If the gnomes are created only after the coin flip, they are in exactly the same situation as the humans, and we cannot learn anything by considering them that we cannot learn from considering the humans alone.
What this shows is that "Conditional on me existing, the gnome's utility function coincides with mine" is not a sufficient condition for "I should follow the advice that the gnome would have precommitted to give".
What I propose instead is: "If, conditional on me existing, the gnome's utility function coincides with mine, and, conditional on me not existing, the gnome's utility function is constant, then I should follow the advice that the gnome would have precommitted to."
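To spell out why the constancy clause does the work (a sketch in notation I'm introducing here, not from the original post): write $a$ for the advice, $p > 0$ for the gnome's credence that I exist, $c$ for the constant, and $U_{\text{me}}$ for my utility function. The gnome precommits to

$$a^* = \arg\max_a \Big( p \, \mathbb{E}\big[U_{\text{me}}(a) \mid \text{I exist}\big] + (1-p)\, c \Big).$$

Since the second term does not depend on $a$, the gnome's precommitted advice $a^*$ also maximizes my expected utility conditional on my existing, so it is in my interest to follow it. If instead the gnome's utility in the worlds where I don't exist varied with $a$, the gnome's optimum and mine could come apart, which is why the first condition alone is not sufficient.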
ETA: I'm speaking of indexicality-dependent utility functions here. For indexicality-independent utility functions, such as total or average utilitarianism, the principle simplifies to: "If the gnome's utility function coincides with mine, then I should follow the advice that the gnome would have precommitted to."
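In the same illustrative notation as above, the simplification is immediate: if $U_{\text{gnome}} = U_{\text{me}} = U$ holds unconditionally, because $U$ makes no reference to whether I exist, then

$$\arg\max_a \, \mathbb{E}\big[U_{\text{gnome}}(a)\big] = \arg\max_a \, \mathbb{E}\big[U_{\text{me}}(a)\big],$$

and the existence-conditional clauses in the principle do no further work.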
I'm still not clear why indexicality-independent utility functions are different from their equivalent indexical versions.
I elaborated on this difference here. However, I don't think this difference is relevant to my parent comment. By indexical utility functions I simply mean selfishness, or "selfishness plus hating the other person if another person exists", while by indexicality-independent utility functions I mean total and average utilitarianism.
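As a toy illustration of the distinction (hypothetical notation, not from the linked comment): let $w$ be a world containing persons $1, \dots, n(w)$ with welfares $u_1(w), \dots, u_{n(w)}(w)$. The indexicality-independent utility functions mentioned are

$$U_{\text{total}}(w) = \sum_{i=1}^{n(w)} u_i(w), \qquad U_{\text{avg}}(w) = \frac{1}{n(w)} \sum_{i=1}^{n(w)} u_i(w),$$

which assign a value to $w$ without reference to who is evaluating it. A selfish agent's utility function, by contrast, is $U_{\text{selfish}}(w) = u_{\text{me}}(w)$: its value depends on the indexical fact of which person in $w$ is "me", and it is not even defined in worlds where I don't exist, which is exactly why the constancy clause in the parent comment is needed for the selfish case.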