
Stuart_Armstrong comments on Anthropic Decision Theory IV: Solving Selfish and Average-Utilitarian Sleeping Beauty - Less Wrong Discussion

Post author: Stuart_Armstrong 04 November 2011 10:55AM




Comment author: Stuart_Armstrong 05 November 2011 01:30:06PM *  0 points

> The intuitive answer is <$0.99, but section 3.3.3 says the answer should be <$0.50

? I don't see this at all.

By section 3.3.3, I assume you mean the isomorphism between selfish and average-utilitarian agents? From an average-utilitarian perspective (which is the same as a total-utilitarian one for fixed populations), buying that ticket for x after hearing "heads" loses one person x in the tails world and gains 99 people 1-x in the heads world. So the expected utility is (1/2)(1/100)(-x + 99(1-x)), which is positive for x < 99/100.
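The average-utilitarian calculation above can be checked numerically; this is just a sketch of the arithmetic in the preceding paragraph, with the 1/100 averaging and the 1/2 coin weight made explicit.

```python
def eu_average_utilitarian(x):
    # Average utilitarian over all 100 people, each world weighted 1/2:
    # heads world: 99 people each gain (1 - x); tails world: one person loses x.
    return 0.5 * (1.0 / 100.0) * (-x + 99.0 * (1.0 - x))

# Break-even: -x + 99(1 - x) = 0, i.e. x = 99/100.
assert eu_average_utilitarian(0.98) > 0
assert eu_average_utilitarian(0.995) < 0
```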

ADT is supposed to reduce to a simplified version of UDT in non-anthropic situations; I didn't emphasise this aspect, as I know you don't want UDT published.

Comment author: Wei_Dai 06 November 2011 07:21:56AM 1 point

> ? I don't see this at all.

Section 3.3.3 says that a selfish agent should make the same decisions as an average-utilitarian who averages over just the set of people who may be "me", right? That's why it says that in the incubator experiment, a selfish agent who has been told she is in Room 1 should pay 1/2 for the ticket. An average-utilitarian who averages over everyone who exists in a world would pay 2/3 instead.
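The 1/2 and 2/3 figures for the incubator experiment can be recovered with a short calculation. As a sketch, assume the ticket costs x and pays $1 in the tails world (an assumption on my part, but it is the payoff structure consistent with both figures quoted above):

```python
# Incubator: heads creates observers in Rooms 1 and 2; tails creates only Room 1.
# The agent in Room 1 is offered a ticket costing x that pays $1 on tails.

def eu_selfish(x):
    # Selfish agent in Room 1: loses x on heads, nets (1 - x) on tails.
    return 0.5 * (-x) + 0.5 * (1.0 - x)

def eu_average_over_everyone(x):
    # Average utilitarian over everyone who exists in each world:
    # heads world has two people but only Room 1 pays x, so the average loss is x/2;
    # tails world has one person, who nets (1 - x).
    return 0.5 * (-x / 2.0) + 0.5 * (1.0 - x)

assert eu_selfish(0.49) > 0 and eu_selfish(0.51) < 0                    # break-even 1/2
assert eu_average_over_everyone(0.66) > 0 and eu_average_over_everyone(0.67) < 0  # break-even 2/3
```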

So in my example, consider an average-utilitarian whose attention is restricted to just the people who have heard "heads". Then buying a ticket loses an average of x in the tails world and gains an average of 1-x in the heads world, so such a restricted average-utilitarian would pay any x < 1/2.

(If this is still not making sense, please contact me on Google Chat where we can probably hash it out much more quickly.)

Comment author: Stuart_Armstrong 06 November 2011 11:48:49AM 1 point

We'll talk on Google Chat. But my preliminary thought is that if you are indeed restricting to those who have heard "heads", then you need to make use of the fact that this is objectively much more likely to happen in the heads world than in the tails one.