
cousin_it comments on SUDT: A toy decision theory for updateless anthropics - Less Wrong Discussion

15 points | Post author: Benja | 23 February 2014 11:50PM


Comment author: cousin_it 24 February 2014 09:31:16AM | 3 points

Can you explain in more detail what you mean by "possible worlds"? I assume that the agent's counterfactual actions don't lead to new possible worlds in your model, e.g. "what would happen if I didn't pay up" isn't a possible world. So you're kinda assuming that all coinflips happen before all actions. But what if Omega decides to flip a coin based on the agent's action, or something like that?

ETA: would a single-player extensive-form game (with incomplete information and imperfect information/recall) be a good model of SUDT?
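For concreteness, here is one way the "all coinflips happen before all actions" structure could be sketched: the agent commits to a single policy up front and evaluates it against the prior over worlds, rather than updating on which branch it finds itself in. The 50/50 coin and the dollar payoffs below are my own illustrative assumptions, loosely following the standard counterfactual-mugging setup, not anything from the post.

```python
# Hypothetical sketch of an updateless policy choice where the coinflip
# precedes the action. Payoffs and the 50/50 prior are illustrative
# assumptions, not taken from the post.

PRIOR = {"heads": 0.5, "tails": 0.5}

def utility(world, policy):
    """Payoff given the coin outcome and the agent's one-bit policy.

    policy == "pay": the agent pays $100 when asked (heads), and Omega,
    having predicted this, pays out $10000 on tails.
    """
    if world == "heads":
        return -100 if policy == "pay" else 0
    else:  # tails
        return 10000 if policy == "pay" else 0

def expected_utility(policy):
    # Updateless evaluation: score the whole policy against the prior,
    # summing over possible worlds instead of conditioning on one.
    return sum(p * utility(w, policy) for w, p in PRIOR.items())

best = max(["pay", "refuse"], key=expected_utility)
print(best, expected_utility(best))  # the updateless policy pays: EU = 4950.0
```

The point of the sketch is only that the set of "possible worlds" is fixed by the prior before any action is taken; it doesn't address the harder case in the comment above, where Omega flips the coin *in response* to the agent's action.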

Comment author: cousin_it 25 February 2014 11:56:08AM | 0 points

After chatting with Benja about my comment and thinking some more, I wrote a reply to this post.