Stuart_Armstrong comments on Anthropic Decision Theory III: Solving Selfless and Total Utilitarian Sleeping Beauty - Less Wrong Discussion
Then let's improve the axiom to get rid of that potential issue. Change it to something like:
"If an agent at two different times has the same preferences, then the past version will never give up anything of value in order to change the conditional decision of its future version. Here, conditional decision means the mapping from information to decision."