Vladimir_Nesov comments on Timeless Decision Theory and Meta-Circular Decision Theory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Eliezer, one of your more recent comments finally prodded me into reading http://bayes.cs.ucla.edu/IJCAI99/ijcai-99.pdf (don't know why I waited so long), and I can now understand this comment much better. Except this part:
Under UDT1, when I'm trying to predict the consequences of choosing A6, I do want to assume that it has higher expected utility than A7. Because if my prediction subroutine sees that there will be another agent, very similar to me and about to make the same decision, it should predict that that agent will also choose A6, right?
Now when the prediction subroutine returns, that assumption pops off the stack and goes away. I then call my utility evaluation routine to compute a utility for those predictions. There is no place for me to conclude "if I choose A6, it must have had higher utility than A7" in a form that would cause any problems.
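The scoped-assumption mechanism described above can be sketched roughly as follows (an illustrative Python toy, not anyone's actual formalism; all names are hypothetical):

```python
# Illustrative sketch of UDT1-style counterfactual evaluation (hypothetical names).
# While predicting the consequences of action a, we temporarily assume "I choose a";
# the assumption is scoped to the prediction call and pops off when it returns.

def predict_world(action, assumptions):
    # A similar agent facing the same decision is predicted to make the
    # same choice, because the assumption "i_choose" is in scope here.
    copy_choice = action if "i_choose" in assumptions else None
    return {"my_action": action, "copy_action": copy_choice}

def utility(world):
    # Toy payoff: coordinating with the copy is worth more.
    return 2 if world["my_action"] == world["copy_action"] else 1

def decide(actions):
    best, best_u = None, float("-inf")
    for a in actions:
        world = predict_world(a, assumptions={"i_choose": a})  # assumption pushed
        # ...the assumption is out of scope here; we never conclude
        # "a must have had the highest utility" merely from considering it.
        u = utility(world)
        if u > best_u:
            best, best_u = a, u
    return best

print(decide(["A6", "A7"]))
```

The point of the sketch is that the assumption lives only inside the `predict_world` call; the utility comparison afterwards sees only the predicted worlds, so no circular "my choice must have been best" conclusion is available.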
Am I missing something here?
Remember the counterfactual zombie principle: you are only an implication; your decision or your knowledge only says what it would be if you exist, but you can't assume that you do exist.
When you counterfactually consider A6, you consider how the world-with-A6 would be, but you don't assume that it exists, and so you can't infer that it's of highest utility. You are right that your copy in the world-with-A6 would also choose A6, but that still doesn't have to be an action of maximum utility, since it's not guaranteed that the situation will exist. For the action that you do choose, you may know that you've chosen it; but for an action you counterfactually consider, you don't assume that you choose it. (In causal networks, this seems to correspond to cutting the action node off from yourself before setting it to a value.)
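The parenthetical about cutting the action node off before setting it resembles the graph surgery behind Pearl's do-operator. A minimal sketch, with an assumed parent-list representation of the network:

```python
# Minimal sketch of "cut the action node off, then set its value" (Pearl-style
# graph surgery). The graph maps each node to the list of its parents; the
# surgery deletes the action node's incoming edges before fixing its value.

def do(graph, node, value):
    surgered = {n: ([] if n == node else list(parents))
                for n, parents in graph.items()}
    return surgered, {node: value}

# Toy network: the agent's disposition causes the action, which causes the outcome.
graph = {"disposition": [], "action": ["disposition"], "outcome": ["action"]}
surgered, fixed = do(graph, "action", "A6")
print(surgered["action"], fixed)
```

After the surgery, "action" has no parents, so setting it to A6 licenses no inference back to the disposition that would normally have produced it.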