DanielLC comments on Anthropic Decision Theory III: Solving Selfless and Total Utilitarian Sleeping Beauty - Less Wrong Discussion

3 Post author: Stuart_Armstrong 03 November 2011 10:04AM

Comment author: DanielLC 03 November 2011 09:47:47PM 0 points [-]

Then by the temporal consistency axiom, this is indeed what her future copies will do.

They have information she doesn't.

Suppose there are one trillion person-days if she wakes up once, and one trillion and one if she wakes up twice. Specifically, her present self, her self after heads, and her self after tails each have a different piece of information.

The probability of being Sleeping Beauty on the day before the experiment is one in one trillion if the coin lands on heads, and one in one trillion and one if it lands on tails. This gives an odds ratio of 1.000000000001:1.

The probability of being Sleeping Beauty during the experiment is one in one trillion if the coin lands on heads, and two in one trillion and one if it lands on tails (since there are then two days it could be). This gives an odds ratio of 1.000000000001:2.
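The arithmetic above can be checked with a short Python sketch using exact fractions (the trillion-person-day setup is just the comment's hypothetical, not a standard quantity):

```python
from fractions import Fraction

N = 10**12  # person-days if she wakes up once; N + 1 if she wakes up twice

# Day before the experiment: one such day under either outcome.
# heads:tails odds = (1/N) : (1/(N+1)) = (N+1) : N
odds_before = Fraction(1, N) / Fraction(1, N + 1)

# During the experiment: one possible day on heads, two on tails.
# heads:tails odds = (1/N) : (2/(N+1)) = (N+1) : 2N
odds_during = Fraction(1, N) / Fraction(2, N + 1)
```

`odds_before` works out to (N+1)/N = 1.000000000001, and `odds_during` to half that, matching the 1.000000000001:1 and 1.000000000001:2 ratios given above.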

Since her future self has different information, it makes perfect sense for her to make a different choice.

There is some disagreement on whether or not probability works that way. (This is technically not an understatement. Some people agree with me.) Suppose it doesn't.

Assuming Sleeping Beauty experiences exactly the same thing both days, she will get no additional relevant information.

If the experiences aren't exactly identical, she's twice as likely to have a given experience if she wakes up twice. For example, if she rolls a die each time she wakes up, there's a one in six chance of rolling a six at least once if she wakes up once, but an 11 in 36 chance if she wakes up twice.
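The die-roll figures can be verified with exact fractions; rolling at least one six in two wakings is the complement of rolling no sixes in either:

```python
from fractions import Fraction

p_six = Fraction(1, 6)  # chance of a six on one fair roll

# One waking, one roll: P(at least one six) = 1/6
p_one_waking = p_six

# Two wakings, two rolls: P(at least one six) = 1 - (5/6)^2 = 11/36
p_two_wakings = 1 - (1 - p_six) ** 2
```

Note that 11/36 is slightly less than twice 1/6 = 12/36, since the two rolls can both come up six.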

Comment author: Stuart_Armstrong 04 November 2011 10:49:47AM 1 point [-]

Then by the temporal consistency axiom, this is indeed what her future copies will do.

They have information she doesn't.

Then let's improve the axiom to get rid of that potential issue. Change it to something like:

"If an agent at two different times has the same preferences, then the past version will never give up anything of value in order to change the conditional decision of its future version. Here, conditional decision means the mapping from information to decision."