Vladimir_Nesov comments on The Anthropic Trilemma - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (218)
Do that. It isn't as straightforward as it perhaps looks; I still have no idea how to approach the problem of anticipation. (Also, what do you mean by "total quality of simulated observer moments"?)
Do you mean try to reverse engineer a notion of anticipation, or try to dissolve the question?
For the first, I mean to define anticipation in terms of what wagers you would make. In this case, how you treat a wager depends on whether winning it produces the good outcome (as your utility function counts it) in one simulated copy or in a million of them. Is that fair enough? I don't see why we care about anticipation at all, except insofar as it bears on our decision making.
I don't really understand how the second question is difficult. Whatever strategy you choose, you can predict exactly what will happen. So as long as you can compare the outcomes, you know what you should do. If you care about the number of simulated paperclips that are ever created, then you should take an even paperclip bet on whether you won the lottery if the paperclips would be created before the extra simulations are destroyed. Otherwise, you shouldn't.
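The "compare the outcomes" move above can be written down as a toy calculation. This is only a sketch: all function names and numbers here are hypothetical, and the linear copy-weighting is one possible choice, not a settled answer to the thread's question.

```python
# Sketch: treat "anticipation" as nothing more than which wager a fixed
# utility function endorses. All names/numbers are hypothetical.

def copy_weighted_utility(outcome_value, num_copies):
    """One candidate utility: scales linearly with the number of
    simulated copies experiencing the outcome. Whether to weight this
    way is exactly the open question in the thread."""
    return outcome_value * num_copies

def evaluate_wager(p_win, payoff_win, payoff_lose,
                   copies_if_win, copies_if_lose):
    """Expected copy-weighted utility of accepting the wager."""
    return (p_win * copy_weighted_utility(payoff_win, copies_if_win)
            + (1 - p_win) * copy_weighted_utility(payoff_lose, copies_if_lose))

# An "even bet" on a lottery you boosted with a million temporary copies:
# if the payoff lands before the extra simulations are destroyed, the
# copy-weighted value of accepting exceeds declining (which is worth 0).
accept = evaluate_wager(p_win=1e-6, payoff_win=+1.0, payoff_lose=-1.0,
                        copies_if_win=1_000_000, copies_if_lose=1)
decline = 0.0
print(accept > decline)  # → True
```

Under these assumed numbers the strategy comparison is mechanical: whichever action has the larger predicted value is the one you "should" take, with no separate notion of anticipation needed.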
How do you describe a utility function that cares twice as much what happens to a consciousness which is being simulated twice?
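One naive way to write down what this asks for, purely as a sketch (the hard part, of course, is defining the simulation count and individuating "a consciousness" at all, which this notation simply assumes):

```latex
% Sketch: n(c) is the (assumed well-defined) number of running
% simulations of consciousness c, and u(c) its per-instance value.
U = \sum_{c} n(c) \, u(c)
```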