Desrtopa comments on Can anyone explain to me why CDT two-boxes? - Less Wrong Discussion
In a game with two moves, you want to model the other person, and play one level higher than that. So if I take the role of Omega and put you in Newcomb's problem, and you think I'll expect you to two box because you've argued in favor of two boxing, then you expect me to put money in only one box, so you want to one box, thereby beating your model of me. But if I expect you to have thought that far, then I want to put money in both boxes, making two boxing the winning move, thereby beating my model of you. And if you expect me to have thought that far, you want to play a level above your model of me and one box again.
If humans followed this kind of recursion infinitely, it would never resolve, and you couldn't do better than maximum entropy in predicting the other person's decision. But people don't do that; humans tend to follow very few levels of recursion when modeling others (example here; you can look at the comments for the results). So if one person is significantly better at modeling the other, they'll have an edge and be able to do considerably better than maximum entropy in guessing the other person's choice.
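To make the non-resolution concrete, here's a minimal sketch of that best-response recursion. The function names and the "beat the other player's prediction" payoff framing are mine, purely for illustration:

```python
def predictor_best_response(player_move):
    # Me-as-Omega: I want my prediction to match your actual move.
    return player_move

def player_best_response(prediction):
    # You: against a one-box prediction (both boxes full), two-boxing wins;
    # against a two-box prediction, one-boxing falsifies my prediction.
    return "two-box" if prediction == "one-box" else "one-box"

move = "two-box"  # level 0: your stated preference for two-boxing
for level in range(1, 7):
    prediction = predictor_best_response(move)  # my model of you
    move = player_best_response(prediction)     # you play one level higher
    print(f"level {level}: prediction={prediction}, move={move}")
# The moves alternate forever; the recursion never settles on a fixed point.
```

Each level just flips the previous one, so unbounded recursion never converges; any edge has to come from one side actually running more levels than the other.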
Omega is a hypothetical entity who models the universe perfectly. If you decide to one box, his model of you decides to one box, so he plays a level above that and puts money in both boxes. If you decide to two box, his model of you decides to two box, so he plays a level above that and only puts money in one box. Any method of resolving the dilemma that you apply, his model of you also applies; if you decide to flip a coin, his model of you also decides to flip a coin, and because Omega models the whole universe perfectly, not just you, the coin in his model shows the same face as the coin you actually flip. This does essentially require Omega to be able to fold up the territory and put it in his pocket, but it doesn't require any backwards causality. Real-life Newcomblike dilemmas involve predictors who are very reliable, but not completely infallible.
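One way to see why even randomizing doesn't escape the model: if Omega simulates the whole universe from the same state, the simulated coin lands the same way as the real one. A toy sketch, with a seeded RNG standing in for the complete state of the universe (all names here are illustrative):

```python
import random

def agent(rng):
    # The agent's entire decision procedure, coin flip included.
    return "one-box" if rng.random() < 0.5 else "two-box"

def omega(world_seed):
    # Omega simulates the agent perfectly: same seed, same universe,
    # so the simulated coin shows the same face as the real one.
    return agent(random.Random(world_seed))  # fills both boxes iff "one-box"

world_seed = 42  # stands in for the state of the universe
print(omega(world_seed), agent(random.Random(world_seed)))  # always equal
```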
I could choose either, knowing that the results would be the same either way. Either I choose the money, in which case Omega has predicted that I will choose the money, and I get the money and don't get shot; or I choose the bullet, in which case Omega has predicted that I will choose the bullet, and I get the money and don't get shot. In this case, you don't need Omega's perfect prediction to avoid shooting the other person: you can just predict that they'll choose to get shot every time, because whether you're right or wrong they won't get shot. And if you want to shoot them, you should always predict that they'll choose the money, because predicting that they'll choose the money and having them choose the bullet is the only branch that results in shooting them. Similarly, if you're offered the dilemma, you should always pick the money if you don't want to get shot, and the bullet if you do. It's a game with a very simple dominant strategy on each side.
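For clarity, here is that branch structure enumerated, a sketch assuming exactly the rules as I've described them, where the only shooting branch is predicting the money while they choose the bullet:

```python
def outcome(prediction, choice):
    # Per the description above, the only branch that ends in a shooting
    # is predicting the money while the other person chooses the bullet.
    if prediction == "money" and choice == "bullet":
        return "shot"
    return "not shot"

for prediction in ("money", "bullet"):
    for choice in ("money", "bullet"):
        print(f"predict {prediction}, choose {choice}: {outcome(prediction, choice)}")
```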
I don't see why you think this would apply to Newcomb. Omega is not an "other person"; it has no motivation, no payoff matrix.
Really? If your decision theory allows you to choose either option, then how could Omega possibly predict your decision?
Whatever its reasons, Omega wants to set up the boxes so that if you one box, both boxes have money, and if you two box, only one box has money. It can be said to have preferences insofar as it uses its predictive powers to try to bring that about.
I can't play at a higher level than Omega's model of me. It's like playing against a stronger chess player: I can only predict that they will win. At any step where I say "it will stop here, so I'll do this instead," it won't stop there, and Omega will turn out to be playing at a higher level than me.
Because on some level my choice is going to be nonrandom (I am made of physical particles following physical rules), and if Omega is an omniscient perfect reasoner, it can determine my choice in advance even if I can't.
But as it happens, I would choose the money, because choosing the money is a dominant strategy for anything up to absolute certainty in the other party's predictive abilities, and I'm not inclined to start behaving differently as soon as I theoretically have absolute certainty.
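A quick back-of-the-envelope version of that dominance claim, assuming a predictor with accuracy p (the parameterization is mine, not part of the original dilemma):

```python
def p_shot(choice, accuracy):
    # Only the (predict money, choose bullet) branch shoots anyone.
    if choice == "money":
        return 0.0               # no branch shoots a money-chooser
    return 1.0 - accuracy        # chance the predictor wrongly says "money"

for accuracy in (0.5, 0.9, 0.99, 1.0):
    print(f"accuracy {accuracy}: money -> {p_shot('money', accuracy)}, "
          f"bullet -> {p_shot('bullet', accuracy)}")
```

Choosing the money is never worse and strictly safer whenever p < 1, so absolute certainty is the only point where the calculation could even potentially change.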
What you actually choose is one particular option (you may even strongly suspect in advance which one, and someone else might know it even better). "Choice" doesn't imply lack of determinism. If what you choose is something definite, it could just as well be engraved on a stone tablet in advance, if it were possible to figure out what the future choice turns out to be. See Free will (and solution).