Manfred comments on Anthropic decision theory I: Sleeping beauty and selflessness - Less Wrong Discussion

Post author: Stuart_Armstrong | 01 November 2011 11:41AM | 10 points

Comment author: Manfred | 02 November 2011 12:49:54PM | 0 points

Given only the decisions, you can't disentangle the probability from the utility function anyhow. You'd have to do something like ask the agent nicely about its utility or its probability, or calculate one of them from first principles, to pin down the other. So I don't feel the situation is qualitatively different. If everything but the probabilities can be seen as a fixed property of the agent, then the agent has some properties, and for each outcome it assigns some probabilities.
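
A minimal numerical sketch of that point (my own illustration, with placeholder payoffs, not anything from the comment): reweighting the probabilities and compensating in the utilities leaves every expected-utility comparison, and hence every observable decision, unchanged.

```python
def expected_utility(probs, utils, action):
    return sum(p * u[action] for p, u in zip(probs, utils))

# Two worlds, two actions; the numbers are arbitrary placeholders.
probs = [0.5, 0.5]
utils = [{"a": 1.0, "b": 0.0},   # payoffs in world 0
         {"a": 0.0, "b": 3.0}]   # payoffs in world 1

# Treat world 1 as twice as likely, and compensate by halving its utilities.
weights = [1.0, 2.0]
z = sum(p * w for p, w in zip(probs, weights))
probs2 = [p * w / z for p, w in zip(probs, weights)]
utils2 = [{k: v / w for k, v in u.items()} for u, w in zip(utils, weights)]

for action in ("a", "b"):
    print(action,
          expected_utility(probs, utils, action),
          expected_utility(probs2, utils2, action))
# The second column is the first divided by the positive constant z,
# so every pairwise preference between actions (every decision) is identical,
# and decisions alone cannot tell the two probability/utility pairs apart.
```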

Comment author: Stuart_Armstrong | 02 November 2011 12:56:44PM | 0 points

A simplification: SIA + individual impact = SSA + total impact

i.e. if I think that worlds with more copies are more likely (but my copies' decisions are independent of mine), this gives the same behaviour as if I believe my decision determines those of my copies (but worlds with many copies are no more likely).
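
A minimal sketch of that equivalence in the Sleeping Beauty setting (my own illustration; the bets and payoff values are placeholders, not from the comment): SIA probabilities with a single-copy payoff, and SSA probabilities with the payoff collected once per copy, give expected utilities that differ only by a positive constant, so they rank every action the same way.

```python
# Tails creates two copies, heads creates one.
copies = {"heads": 1, "tails": 2}

def eu_sia_individual(payoff):
    # SIA: worlds weighted by their number of copies; my decision pays off once.
    total = sum(copies.values())
    return sum((copies[w] / total) * payoff[w] for w in copies)

def eu_ssa_total(payoff):
    # SSA: the two worlds are equiprobable; all copies decide alike,
    # so the payoff is collected once per copy.
    return sum(0.5 * copies[w] * payoff[w] for w in copies)

# Example bets with placeholder stakes.
bet_tails = {"heads": -1.0, "tails": +1.0}
bet_heads = {"heads": +1.0, "tails": -1.0}

for name, bet in [("bet_tails", bet_tails), ("bet_heads", bet_heads)]:
    print(name, eu_sia_individual(bet), eu_ssa_total(bet))
# Each SSA+total value is exactly 1.5 times the SIA+individual value here
# (total copies divided by number of worlds), so whichever action one
# combination prefers, the other prefers it too.
```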