This is an exploration of a way of looking at Newcomb’s Problem that helped me understand it. I hope somebody else finds it useful. I may add discussions of other game theory problems in this format if anybody wants them.
Consider Newcomb’s Problem: Omega offers you two boxes, one transparent and containing $1,000 (box A), the other opaque and containing either $1,000,000 or nothing (box B). Your options are to take both boxes, or to take only box B; Omega has put money in box B only if it has predicted that you will take only one box.

A person in favor of one-boxing says, “I’d rather have a million than a thousand.” A two-boxer says, “Whether or not box B contains money, I’ll get $1,000 more if I take box A as well. It’s either $1,001,000 vs. $1,000,000, or $1,000 vs. nothing.”

To get to these different decisions, the agents are working from two different ways of visualising the payoff matrix. The two-boxer sees four possible outcomes; the one-boxer sees only two, treating the other two as having negligible probability.
The two-boxer’s payoff matrix looks like this:
| Decision | Box B: money | Box B: no money |
|----------|--------------|-----------------|
| 1-box    | $1,000,000   | $0              |
| 2-box    | $1,001,000   | $1,000          |
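To make the two-boxer’s dominance reasoning concrete, here is a minimal Python sketch (the payoff numbers come from the problem statement; the dictionary layout is just one way to encode the matrix). Holding the contents of box B fixed, two-boxing pays exactly $1,000 more in either column:

```python
# Two-boxer's payoff matrix: payoffs[decision][state of box B]
payoffs = {
    "1-box": {"money": 1_000_000, "no money": 0},
    "2-box": {"money": 1_001_000, "no money": 1_000},
}

# Dominance check: for each fixed state of box B, compare the two decisions.
for state in ("money", "no money"):
    diff = payoffs["2-box"][state] - payoffs["1-box"][state]
    print(f"Box B {state}: two-boxing gains ${diff} over one-boxing")

# Output:
# Box B money: two-boxing gains $1000 over one-boxing
# Box B no money: two-boxing gains $1000 over one-boxing
```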
The outcomes $0 and $1,001,000 both require Omega to have made a wrong prediction. But as the problem is formulated, Omega is superintelligent and has been right 100 out of 100 times so far. So the one-boxer, taking this into account, describes the payoff matrix like this:
| Decision | Box B: money | Box B: no money |
|----------|--------------|-----------------|
| 1-box    | $1,000,000   | not possible    |
| 2-box    | not possible | $1,000          |
If Omega really is a perfect (or nearly perfect) predictor, the only possible (or not hugely unlikely) outcomes are $1,000 for two-boxing and $1,000,000 for one-boxing, and treating the other two outcomes as live possibilities is an epistemic failure.
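One way to see why the one-boxer’s pruned matrix carries the day is to fold Omega’s track record into an expected-value calculation. The sketch below assumes a single accuracy parameter p for Omega’s prediction; that parameter is my own simplification, not part of the original problem statement. One-boxing has the higher expected payoff whenever p exceeds about 0.5005, and the two “impossible” cells only matter as p falls toward a coin flip:

```python
def expected_value(decision: str, p: float) -> float:
    """Expected payoff if Omega predicts your decision correctly with probability p."""
    if decision == "1-box":
        # Correct prediction -> box B full ($1,000,000); wrong -> box B empty ($0).
        return p * 1_000_000 + (1 - p) * 0
    else:
        # Correct prediction -> box B empty ($1,000); wrong -> box B full ($1,001,000).
        return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.5005, 0.9, 0.99, 1.0):
    print(f"p={p}: one-box EV = {expected_value('1-box', p):>12,.0f}, "
          f"two-box EV = {expected_value('2-box', p):>12,.0f}")

# At p = 1 (a perfect predictor) only the $1,000,000 and $1,000 cells remain,
# which is exactly the one-boxer's version of the matrix.
```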
You can only correctly predict that something odd happens if you know you will still one-box, and you can only know you will still one-box if you are in fact still going to one-box. As long as you model the problem as one in which you still have the choice to two-box, you haven't observed anything odd happening.
All yous observing an empty box behave the same unless there is something else differentiating them, which (in the scenario considered) there is not, unless you incorporate sufficient randomness into your decision-making process, which you have no reason to want to do. The only way for the counterfactual you who encounters the empty box (and thereby determines the state of the box) to one-box, so that the real you can get the $1,000,000, is for the real you to also one-box in a hypothetical encounter with the empty box. The only way you could actually encounter the empty box is if you would two-box after encountering it, which you should not want to do.
I'm not assuming the actual existence of more than one you, just the existence of at least one real you that matters. If you break your precommitment and two-box just because you see an empty box, the real you is losing out on the $1,000,000. It doesn't matter how you reconcile the apparent existence of the situation, the apparent emptiness of the box, Omega's infallibility, and your precommitment, as long as that reconciliation doesn't lead to breaking the precommitment; you can worry about that afterwards (personally, I'm leaning towards assuming you don't exist and are just a counterfactual).
You don't count a contradiction (a perfect predictor being wrong) as "odd"?
Oh, okay. So when you said "imagine box B is empty," you didn't actually mean to treat box B as empty - that wasn't supposed to be "real," and you agree that if it were real, the logic of the problem would compel you to take the $1,000 box as well. Rather than treating it like a normal hypothetical, your intent is to precommit to one-boxing even if box B is empty, so that it won't be. Does that sound right?