lackofcheese comments on Simulation argument meets decision theory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The idea is that they have the same utility function, but the utility function is defined over anthropic states (different values of "I").
U(I am X and X chooses sim) = 1
U(I am X_i and X_i chooses sim) = 0.2, etc.
I don't like it, but I also don't see an obvious way to reject the idea.
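
For concreteness, here's a minimal sketch (in Python) of what a utility function over anthropic states might look like. The X / X_i labels and the two utility values are taken from the lines above; the credences over "who I am" are assumptions purely for illustration.

```python
# Minimal sketch: a utility function whose domain is anthropic states --
# pairs of "who I am" and "what that agent does".
# The credences below are illustrative assumptions, not from the discussion.

# Utility over (identity, action) pairs, mirroring the values quoted above.
utility = {
    ("X", "sim"): 1.0,    # U(I am X and X chooses sim) = 1
    ("X_i", "sim"): 0.2,  # U(I am X_i and X_i chooses sim) = 0.2
}

# Hypothetical credences over which agent "I" actually am (anthropic uncertainty).
credence = {"X": 0.5, "X_i": 0.5}

def expected_utility(action: str) -> float:
    """Average the utility of `action` over the possible identities."""
    return sum(credence[who] * utility.get((who, action), 0.0)
               for who in credence)

print(expected_utility("sim"))  # 0.6 under the assumed 50/50 credences
```

Under this reading, the agents disagree about nothing except which anthropic state they occupy; the expected-utility calculation itself is shared.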