Stuart_Armstrong comments on Anthropic decision theory I: Sleeping beauty and selflessness - Less Wrong Discussion

10 Post author: Stuart_Armstrong 01 November 2011 11:41AM


Comment author: Stuart_Armstrong 02 November 2011 10:00:20AM 0 points [-]

You also need to know the impact of the agent's decision: "If I do this, do I cause identical copies to do the same thing, or not?" See my next post for this.

Comment author: Manfred 02 November 2011 10:51:51AM 0 points [-]

And so if you know that, you could get the probability the agent assigns to various outcomes?

Comment author: Stuart_Armstrong 02 November 2011 12:10:16PM 0 points [-]

But notice that you need all three elements (utility function, probabilities, and impact of decision) in order to figure out the decision. So if you observe only the decision, you can't get at any of the three directly.

With some assumptions and a lot of observation, you can disentangle the utility function from the other two, but in anthropic situations, you can't generally disentangle the anthropic probabilities from the impact of decision.

Comment author: Manfred 02 November 2011 12:49:54PM *  0 points [-]

Given only the decisions, you can't disentangle the probability from the utility function anyhow. You'd have to do something like ask nicely about the agent's utility or probability, or calculate from first principles, to get the other. So I don't feel like the situation is qualitatively different. If everything but the probabilities can be seen as a fixed property of the agent, the agent has some properties, and for each outcome it assigns some probabilities.

Comment author: Stuart_Armstrong 02 November 2011 12:56:44PM *  0 points [-]

A simplification: SIA + individual impact = SSA + total impact

i.e. if I think that worlds with more copies are more likely (but their decisions are independent of mine), this gives the same behaviour as if I believe my decision affects those of my copies (but worlds with many copies are no more likely).
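The equivalence above can be checked numerically in a Sleeping Beauty setup. The payoff numbers and action names below are illustrative assumptions, not from the post; the point is only that SIA probabilities with individual impact and SSA probabilities with total impact give expected utilities that differ by a constant factor, and so rank actions identically.

```python
# Hypothetical per-copy payoffs for each action in each world
# (numbers chosen for illustration only).
payoffs = {"heads": {"bet": 10.0, "pass": 5.0},
           "tails": {"bet": 2.0, "pass": 5.0}}

n_copies = {"heads": 1, "tails": 2}   # awakenings (copies) per world
prior = {"heads": 0.5, "tails": 0.5}  # objective coin probabilities

def sia_individual(action):
    # SIA: worlds weighted by their number of copies (normalised);
    # individual impact: my decision pays off only for me.
    norm = sum(prior[w] * n_copies[w] for w in prior)
    return sum(prior[w] * n_copies[w] / norm * payoffs[w][action]
               for w in prior)

def ssa_total(action):
    # SSA: plain world probabilities; total impact: every copy makes
    # the same choice, so the per-copy payoff counts once per copy.
    return sum(prior[w] * n_copies[w] * payoffs[w][action]
               for w in prior)

for a in ("bet", "pass"):
    print(a, sia_individual(a), ssa_total(a))
```

The two expected utilities always stand in the fixed ratio given by the SIA normalisation constant, so observing the agent's choices alone cannot tell the two combinations apart.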