
Stuart_Armstrong comments on Anthropic Decision Theory IV: Solving Selfish and Average-Utilitarian Sleeping Beauty - Less Wrong Discussion

0 Post author: Stuart_Armstrong 04 November 2011 10:55AM



Comment author: Stuart_Armstrong 08 November 2011 05:11:43PM 0 points

They're selfless and have coordinated their decisions with precommitments, so ADT recreates the UDT formulation, since there are no anthropic issues left to worry about. ADT plus selflessness tends toward SIA-like behaviour in the Sleeping Beauty problem, which is not the same as saying that ADT tells selfless agents to follow SIA.
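The "SIA-like behaviour" can be made concrete with a small sketch (my own illustration, not from the post): a selfless agent that sums utility over all its awakenings ends up betting at "thirder" (SIA) odds in Sleeping Beauty, even though the coin is fair. The `price` parameter and the ticket setup below are hypothetical framing devices, not anything from the original discussion.

```python
# Sleeping Beauty setup: fair coin; heads -> one awakening, tails -> two.
# At each awakening the agent may buy, for `price`, a ticket paying 1
# if the coin landed tails. A selfless (total-utility) agent sums the
# payoffs over all awakenings.

def expected_total_utility(price: float) -> float:
    """Expected total utility of always buying the tails ticket."""
    heads = 0.5 * 1 * (0 - price)   # heads: 1 awakening, ticket pays 0
    tails = 0.5 * 2 * (1 - price)   # tails: 2 awakenings, each pays 1
    return heads + tails

# Break-even: solve 0.5*(-p) + 0.5*2*(1-p) = 0, giving p = 2/3.
# So the agent bets as if P(tails) = 2/3 -- the SIA ("thirder") odds --
# despite the fair-coin prior of 1/2.
break_even = 2 / 3
assert abs(expected_total_utility(break_even)) < 1e-12
```

Note that nothing anthropic was assumed: the 2/3 falls straight out of summing payoffs over copies, which is the sense in which ADT with selfless preferences merely *mimics* SIA.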

Comment author: Manfred 08 November 2011 05:47:02PM 0 points

Well, yes, it recreates the UDT solution (or at least it does if it works correctly; I didn't actually check). But the problem was never just about recreating the UDT solution; it's about understanding why the non-UDT solution fails.

Comment author: Stuart_Armstrong 08 November 2011 07:07:26PM 0 points

Because standard decision theory doesn't know how to deal properly with identical agents and common policies?