You don't count a contradiction (a perfect predictor being wrong) as "odd"?
As I argued above, you haven't actually observed a perfect predictor being wrong at that point.
Oh, okay. So when you said "imagine box 1 is empty," you didn't actually mean to treat box 1 as empty - that wasn't supposed to be "real,"
Not quite. As I've said repeatedly, it doesn't matter what you think (or even whether you think anything); what matters is that the mere reality of the situation should not change how you act. If the only way you can manage that is to pretend it's not real, then so be it.
and you agree that if it were real, the logic of the problem would compel you to take box 2.
No, it doesn't. The logic of the problem merely predicts that will happen because you are a two-boxer only pretending to be a one-boxer. You still can (and should) choose to one-box, and there is (as stated) no outside force compelling you. You shouldn't be very surprised when you do find an outside force compelling you, but it won't be the logic of the problem, unless you let it (and you shouldn't).
Rather than treating it like a normal hypothetical, you intend to precommit to one-boxing even if box 1 is empty, so that it won't be.
If you want to put it that way. Anyone who wants the $1,000,000 in a transparent-box Newcomb problem has to be prepared to do the same.
No, it doesn't. The logic of the problem merely predicts that will happen because you are a two-boxer only pretending to be a one-boxer. You still can (and should) choose to one-box
See, this is what I find unusual. You predict that you will one-box, and you also predict that this would contradict the assumptions of the problem. This is like saying "I predict I will prove that 2=3 at noon tomorrow," and yet you don't see the oddness. Again, the fact that a proof exists (of something like "this formulation of Newcomb's problem with transparent boxes is inconsistent") is as good as the proof itself.
This is an exploration of a way of looking at Newcomb’s Problem that helped me understand it. I hope somebody else finds it useful. I may add discussions of other game theory problems in this format if anybody wants them.
Consider Newcomb's Problem: Omega offers you two boxes, box A, which is transparent and contains $1,000, and box B, which is opaque and contains either $1,000,000 or nothing. Your options are to take both boxes or to take only box B, but Omega has put money in box B only if it predicted that you would take just the one box. A person in favor of one-boxing says, "I'd rather have a million than a thousand." A two-boxer says, "Whether or not box B contains money, I'll get $1,000 more if I take box A as well. It's either $1,001,000 vs. $1,000,000, or $1,000 vs. nothing."

To arrive at these different decisions, the agents are working from two different ways of visualising the payoff matrix. The two-boxer sees four possible outcomes; the one-boxer sees only two, treating the other two as having very low probability.
The two-boxer’s payoff matrix looks like this:
|                 | Box B: money | Box B: no money |
|-----------------|--------------|-----------------|
| Decision: 1-box | $1,000,000   | $0              |
| Decision: 2-box | $1,001,000   | $1,000          |
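Spelled out, the two-boxer's matrix is a row-by-row comparison: whatever box B already contains, adding box A's $1,000 makes that row better. Here is a minimal sketch of that dominance arithmetic (the dollar amounts come from the problem statement; the function and variable names are only illustrative):

```python
# Two-boxer's view: the contents of box B are already fixed,
# so compare the two choices separately for each possible state of box B.
BOX_A = 1_000

def payoff(choice, box_b_contents):
    """Total payoff given box B's (already fixed) contents and our choice."""
    return box_b_contents + (BOX_A if choice == "2-box" else 0)

for box_b in (1_000_000, 0):  # the two columns of the matrix
    print(f"Box B holds ${box_b:,}: "
          f"1-box -> ${payoff('1-box', box_b):,}, "
          f"2-box -> ${payoff('2-box', box_b):,}")
# In both columns, 2-boxing pays exactly $1,000 more -- the dominance argument.
```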
The outcomes $0 and $1,001,000 both require Omega to have made a wrong prediction. But as the problem is formulated, Omega is superintelligent and has been right 100 out of 100 times so far. So the one-boxer, taking this into account, describes the payoff matrix like this:
|                 | Box B: money | Box B: no money |
|-----------------|--------------|-----------------|
| Decision: 1-box | $1,000,000   | not possible    |
| Decision: 2-box | not possible | $1,000          |
If Omega really is a perfect (or nearly perfect) predictor, the only possible (or not hugely unlikely) outcomes are $1,000 for two-boxing and $1,000,000 for one-boxing, and taking the other two outcomes seriously is an epistemic failure.
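One way to make the difference between the two matrices concrete is to weight the four cells by how often Omega predicts correctly. Below is a minimal sketch, assuming the predictor is right with some probability p (the specific values of p are only illustrative stand-ins for "right 100 out of 100 times"):

```python
# Expected payoff of each strategy when Omega predicts your actual choice
# correctly with probability p. The one-boxer's matrix is the limit p -> 1,
# where the "wrong prediction" cells drop out entirely.
BOX_A, BOX_B = 1_000, 1_000_000

def expected_payoff(choice, p):
    if choice == "1-box":
        # Box B is full exactly when Omega correctly predicted one-boxing.
        return p * BOX_B
    # Two-boxing: box B is full only if Omega wrongly predicted one-boxing.
    return p * BOX_A + (1 - p) * (BOX_B + BOX_A)

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p = {p}: 1-box ${expected_payoff('1-box', p):,.0f}, "
          f"2-box ${expected_payoff('2-box', p):,.0f}")
# One-boxing has the higher expected payoff for any p above roughly 0.5005.
```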