
Manfred comments on Anthropic Decision Theory IV: Solving Selfish and Average-Utilitarian Sleeping Beauty

Post author: Stuart_Armstrong, 04 November 2011 10:55AM


Comment author: Manfred, 04 November 2011 07:03:44PM (1 point)

Ah, good point. I made a mistake in translating the problem into selfish terms. In fact, that might actually solve the non-anthropic problem...

EDIT: Nope.

Comment author: Stuart_Armstrong, 08 November 2011 10:48:22AM (0 points)

Why nope? ADT (with precommitments) simplifies to a version of UDT in non-anthropic situations.

Comment author: Manfred, 08 November 2011 03:21:16PM (0 points)

The reason it doesn't solve the problem is that the people who want to donate to charity aren't doing it so that the other participants in the game will get utility - that is, they're altruists, but not average utilitarians towards the other players. So the formulation is a little more complicated.

Comment author: Stuart_Armstrong, 08 November 2011 05:11:43PM (0 points)

They're selfless, and have coordinated decisions with precommitments - ADT will then recreate the UDT formulation, since there are no anthropic issues to worry about. ADT + selflessness tends towards SIA-like behaviour in the Sleeping Beauty problem, which isn't the same as saying ADT says selfless agents should follow SIA.
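
As a minimal sketch of what "SIA-like behaviour" means here (the betting setup, numbers, and function name below are illustrative assumptions, not from the post): a selfless agent who sums utility over all awakenings in the standard Sleeping Beauty betting game, with precommitment linking the decisions, ends up accepting tails-bets at thirder odds without ever assigning anthropic probabilities.

```python
# Hypothetical sketch: why selfless (total-utility) accounting in
# Sleeping Beauty produces SIA-like ("thirder") betting behaviour.
# Setup: fair coin; heads -> 1 awakening, tails -> 2 awakenings.
# At each awakening the agent may buy, for price p, a ticket paying 1
# if the coin landed tails. Precommitment links the decisions, so the
# agent buys at every awakening or at none.

def expected_total_utility(p: float, buy: bool) -> float:
    """Expected sum of utility over all awakenings, before the flip."""
    if not buy:
        return 0.0
    heads = 0.5 * 1 * (0 - p)   # prob 1/2, 1 awakening, ticket loses
    tails = 0.5 * 2 * (1 - p)   # prob 1/2, 2 awakenings, each pays 1-p
    return heads + tails

# Indifference point: 0.5*(-p) + 0.5*2*(1-p) = 1 - 1.5p = 0, so p = 2/3.
# Buying tickets up to price 2/3 is betting as if P(tails) = 2/3,
# which is exactly the SIA (thirder) answer.
for p in (0.5, 2 / 3, 0.8):
    print(f"p = {p:.3f}: EU(buy) = {expected_total_utility(p, True):.3f}")
```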

Comment author: Manfred, 08 November 2011 05:47:02PM (0 points)

Well, yes, it recreates the UDT solution (or at least it does if it works correctly - I didn't actually check). But the problem was never about just recreating the UDT solution - it's about understanding why the non-UDT solution doesn't work.

Comment author: Stuart_Armstrong, 08 November 2011 07:07:26PM (0 points)

Because standard decision theory doesn't know how to deal properly with identical agents and common policies?