Manfred comments on "Solving" selfishness for UDT - LessWrong

18 Post author: Stuart_Armstrong 27 October 2014 05:51PM




Comment author: Manfred 28 October 2014 02:12:50AM  1 point

1 - I don't have a general solution; there are plenty of things I'm confused about, and cases where anthropic probability depends on your action are at the top of the list. There is a sense in which a certain extension of UDT can handle these cases if you "pre-chew" indexical utility functions into world-state utility functions for it (like a more sophisticated version of what's described in this post, actually), but I'm not convinced that this is the last word.
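To make the "pre-chewing" idea concrete, here is a toy sketch of my own (the function name and setup are illustrative, not from the post): one simple way to turn an indexical utility (a payoff for *being* a particular copy) into a world-state utility is to sum the per-copy payoffs over all of your instances in each world, so UDT can then optimize over worlds without any indexical reference.

```python
# Toy illustration of "pre-chewing" an indexical utility into a
# world-state utility by summing payoffs over the agent's copies.
# This is a hypothetical sketch, not the construction from the post.

def prechew(worlds, indexical_utility):
    """Map each world to the total utility over the agent's copies in it.

    worlds: dict of world name -> list of copies (observer labels)
    indexical_utility: function (world, copy) -> payoff for being that copy
    """
    return {
        world: sum(indexical_utility(world, copy) for copy in copies)
        for world, copies in worlds.items()
    }

# A Sleeping-Beauty-like setup: heads has one awakening, tails has two.
worlds = {"heads": ["monday"], "tails": ["monday", "tuesday"]}

# $1 of indexical value per awakening, regardless of world or day.
world_utility = prechew(worlds, lambda w, c: 1)
print(world_utility)  # {'heads': 1, 'tails': 2}
```

Summing (rather than, say, averaging) over copies is itself a substantive anthropic choice, which is part of why pre-chewing feels like a hack rather than a final answer.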

Absurdity and confusion have a long (if slightly spotty) track record of indicating a lack in our understanding, rather than a lack of anything to understand.

2 - The same way that CDT gets the right answer on how much to pay for a 50% chance of winning $1, even though CDT isn't correct in general. The Sleeping Beauty problem is literally so simple that it's within the zone of validity of CDT.

Comment author: lackofcheese 28 October 2014 02:47:02AM  1 point

On 1), I agree that "pre-chewing" anthropic utility functions appears to be something of a hack. My current intuition in that regard is to reject the notion of anthropic utility (although not anthropic probability), but a solid formulation of anthropics could easily convince me otherwise.

On 2), if it's within the zone of validity, then I guess that's sufficient to call something "a correct way" of solving the problem. But if there is an equally simple or simpler approach with a strictly broader zone of validity, I don't think you can be justified in calling it "the right way".