
Wei_Dai comments on Anthropic decision theory I: Sleeping beauty and selflessness - Less Wrong Discussion

Post author: Stuart_Armstrong, 01 November 2011 11:41AM


Comment author: Wei_Dai, 03 November 2011 08:31:54PM

I think the paper's treatment (section 3.3.3) of "selfish" (i.e., indexically expressed) preferences is wrong, unless I'm misunderstanding it. Assume the incubator variant, and suppose we tell a Beauty that she is in Room 1 and then ask what price she would pay for a lottery ticket that pays $1 on Heads. Applying section 3.3.3 seems to suggest that she should again pay $0.50, the same as for the original ticket, where we didn't tell her her room number. But that is clearly wrong. Or rather, at most one of the two prices can be right, because learning her room number is evidence about the coin; if she pays $0.50 in both situations, we can money-pump her and make her lose money with probability 1.
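To make the tension concrete, here's a quick Monte Carlo sketch (my own illustration, not anything from the paper) of the incubator variant as I understand it: Heads creates one Beauty in Room 1, Tails creates two Beauties in Rooms 1 and 2. It tallies the frequency of Heads before and after conditioning on "you are in Room 1", under both a per-Beauty (SIA-style) counting rule and a per-world (SSA-style) rule where one Beauty per world is sampled as "you":

```python
import random

def simulate(trials=200_000, seed=0):
    rng = random.Random(seed)
    b_all = b_heads = b_room1 = b_heads_room1 = 0  # per-Beauty tallies
    s_room1 = s_heads_room1 = 0                    # per-world "you" tallies
    for _ in range(trials):
        heads = rng.random() < 0.5
        rooms = [1] if heads else [1, 2]           # rooms occupied this trial
        for r in rooms:                            # count every created Beauty
            b_all += 1
            b_heads += heads
            if r == 1:
                b_room1 += 1
                b_heads_room1 += heads
        you = rng.choice(rooms)                    # one random Beauty per world
        if you == 1:
            s_room1 += 1
            s_heads_room1 += heads
    print(f"per-Beauty: P(H) = {b_heads / b_all:.3f}, "
          f"P(H | Room 1) = {b_heads_room1 / b_room1:.3f}")   # ~0.333, ~0.500
    print(f"per-world:  P(H) = 0.500 (fair coin), "
          f"P(H | Room 1) = {s_heads_room1 / s_room1:.3f}")   # ~0.667

simulate()
```

Under either counting rule the announcement shifts the relevant frequency (from 1/3 to 1/2, or from 1/2 to 2/3), so whatever the right price is before she learns her room number, $0.50 can't be correct both before and after.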

"Selfish" preferences are still very confusing to me, especially if copying or death is a future possibility. Are they even legitimate preferences, or just insanity that should be discarded (as steven0461 suggested)? If the former, should we convert them into non-indexically expressed preferences (i.e., instead of "Give me that chocolate bar", "Give that chocolate bar to X" where X is a detailed description of my body), or should our decision theory handle such preferences natively? (Note that UDT can't handle such preferences without prior conversion.) I don't know how to do either, and this paper doesn't seem to be supplying the solution that I've been looking for.