Eliezer_Yudkowsky comments on SUDT: A toy decision theory for updateless anthropics - Less Wrong Discussion

Post author: Benja, 23 February 2014 11:50PM

Comment author: Eliezer_Yudkowsky 14 March 2014 05:29:34AM 2 points

How is it coherent for an agent at time T1 to 'want' copy A at T2 to care only about A and copy B at T2 to care only about B? There's no non-meta way to express this - you would have to care more strongly about the agents having a certain exact decision function than about any of the object-level entities at stake. When it comes to object-level things, whatever the agent at T1 coherently cares about is what it will want A and B to care about.
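A toy illustration of the object-level point (my sketch, not from the comment; the game and payoff numbers are assumed for the example): suppose that after splitting, copies A and B face a Prisoner's Dilemma against each other. If the T1 agent's utility is over object-level outcomes - here, both copies' payoffs - it prefers that both copies cooperate. But copies that each care only about themselves defect, since defection dominates for each. So T1's preference for "selfish copies" can't fall out of any object-level utility T1 holds; it would have to be a preference about the copies' decision functions themselves.

```python
from itertools import product

# Hypothetical payoff table: (A's action, B's action) -> (payoff to A, payoff to B).
# A standard Prisoner's Dilemma played between the two copies after the split.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
ACTIONS = ("cooperate", "defect")

def t1_utility(outcome):
    """One object-level utility the T1 agent might hold: it cares about
    all object-level entities at stake, i.e. both copies' payoffs."""
    payoff_a, payoff_b = outcome
    return payoff_a + payoff_b

# What T1 wants the copies to do: the joint action profile that
# maximizes T1's object-level utility.
best_for_t1 = max(PAYOFFS, key=lambda acts: t1_utility(PAYOFFS[acts]))

# What a copy that "cares only about itself" does: defection strictly
# dominates, whatever the other copy chooses.
a_defects = all(PAYOFFS[("defect", b)][0] > PAYOFFS[("cooperate", b)][0]
                for b in ACTIONS)
b_defects = all(PAYOFFS[(a, "defect")][1] > PAYOFFS[(a, "cooperate")][1]
                for a in ACTIONS)

print(best_for_t1)            # T1's preferred profile: both cooperate
print(a_defects, b_defects)   # but selfish copies both defect
```

Running this shows the tension: T1's object-level utility endorses (cooperate, cooperate), while purely self-regarding copies land on (defect, defect). Wanting the copies to be selfish anyway would require T1 to value the copies' decision functions over the object-level outcomes - the "non-meta" gap the comment describes.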