PhilGoetz comments on indexical uncertainty and the Axiom of Independence - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It seems that my communication attempt failed badly last time, so let me try again. The "standard" approach to indexicals is to treat indexical uncertainty the same as any other kind of uncertainty. You compute a probability of being at each location, and then maximize expected utility. I tried to point out in this post that because decisions made at each location can interact non-linearly, this doesn't work.
You transformed my example into a game theory example, and the paradox disappeared, because game theory does take into account interactions between different players. Notice that in your game theory example, the computation that arrives at the solution looks nothing like an expected utility maximization involving probabilities of being at different locations. The probability of being at a location doesn't enter into the decision algorithm at all, so do such probabilities mean anything?
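To make the non-linear interaction concrete, here is a toy sketch of my own construction (not the post's exact payoff): two identical copies necessarily execute the same, possibly mixed, strategy, and the world pays 1 util iff exactly one copy chooses A. The expected utility is then quadratic in the single shared strategy parameter, which a per-location, probability-weighted sum (linear in "my" action) cannot represent — so the right computation optimizes over the shared strategy directly.

```python
# Toy illustration (my assumption, not the original post's example):
# two identical copies share one mixed strategy p = Pr(choose A).
# Suppose the world pays 1 util iff exactly one copy chooses A.
# Then EU(p) = 2 * p * (1 - p): quadratic in p, i.e. the copies'
# decisions interact non-linearly, so no probability-of-being-here
# weighting of per-location utilities reproduces it.

def expected_utility(p: float) -> float:
    """Expected utility of the shared mixed strategy p under the toy payoff."""
    return 2 * p * (1 - p)

# Optimize over the shared strategy directly (coarse grid search).
grid = [i / 1000 for i in range(1001)]
best_p = max(grid, key=expected_utility)

print(best_p, expected_utility(best_p))  # 0.5 0.5
```

Note that the probability of "being" copy 1 versus copy 2 never appears in the computation: the decision variable is the one strategy both copies share, which is the point of the objection above.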
How does it not work?
If you are at a different location, that's a different world state. You compute the utility for each world state separately. Problem solved.
And to the folks who keep voting me down when I point out basically the same solution: state why you disagree. You've already taken 3 karma from me. Don't just keep taking karma for the same thing over and over without explaining why.
If the same world contains two copies of you, you can be either copy within the same world.
The same world does not contain two copies of you. You are confused about the meaning of "you".
Treat each of these two entities just the same way you treat every other agent in the world. If they are truly identical, it doesn't matter which one is "you".