This is an exploration of a way of looking at Newcomb’s Problem that helped me understand it. I hope somebody else finds it useful. I may add discussions of other game theory problems in this format if anybody wants them.
Consider Newcomb’s Problem: Omega offers you two boxes, one transparent and containing $1,000, the other opaque and containing either $1 million or nothing. Your options are to take both boxes, or to take only the second one; Omega has put money in the second box only if it predicted that you would take only one box. A person in favor of one-boxing says, “I’d rather have a million than a thousand.” A two-boxer says, “Whether or not box B contains money, I’ll get $1,000 more if I take box A as well. It’s either $1,001,000 vs. $1,000,000, or $1,000 vs. nothing.” To reach these different decisions, the agents are working from two different ways of visualising the payoff matrix: the two-boxer sees four possible outcomes, while the one-boxer sees only two, the other two having negligible probability.
The two-boxer’s payoff matrix looks like this:
| Decision | Box B: money | Box B: no money |
|----------|--------------|-----------------|
| 1-box    | $1,000,000   | $0              |
| 2-box    | $1,001,000   | $1,000          |
The outcomes $0 and $1,001,000 both require Omega to have made a wrong prediction. But as the problem is formulated, Omega is superintelligent and has been right 100 out of 100 times so far. So the one-boxer, taking this into account, describes the payoff matrix like this:
| Decision | Box B: money | Box B: no money |
|----------|--------------|-----------------|
| 1-box    | $1,000,000   | not possible    |
| 2-box    | not possible | $1,000          |
If Omega really is a perfect (or nearly perfect) predictor, the only possible (or not hugely unlikely) outcomes are $1,000 for two-boxing and $1 million for one-boxing, and considering the other two outcomes is an epistemic failure.
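One way to reconcile the two matrices is to make Omega’s accuracy an explicit parameter and compare expected values. Here is a minimal sketch (the accuracy parameter `p` and the function name are my own illustration, not part of the problem statement): the two-boxer’s four-cell matrix treats all cells as live, while the one-boxer’s two-cell matrix is the limit p → 1, where the off-diagonal cells get probability 0.

```python
def expected_value(strategy, p):
    """Expected payoff, assuming Omega predicts your choice with accuracy p."""
    if strategy == "one-box":
        # Box B contains $1,000,000 iff Omega correctly predicted one-boxing.
        return p * 1_000_000
    else:  # "two-box"
        # Box B is full only if Omega wrongly predicted one-boxing (prob. 1 - p).
        return (1 - p) * 1_001_000 + p * 1_000

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p={p}: one-box ${expected_value('one-box', p):,.0f}, "
          f"two-box ${expected_value('two-box', p):,.0f}")
```

Setting the two expressions equal gives a break-even accuracy of p = 1,001,000 / 2,000,000 ≈ 0.5005; for any p above that, one-boxing has the higher expected value. With Omega right 100 times out of 100, any reasonable estimate of p sits far above the break-even point, which is why the one-boxer feels safe collapsing the matrix.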
So your tentative solution is to break the problem in the same way as ata, by saying, "Well, what the problem really means is that you see someone who looks just like Omega pose you the problem, but it might be a simulation." (Note that for this to work, Omega cannot simulate Omega, so the problem is genuinely different: if Omega could simulate Omega, it would have no need to simulate you with any uncertainty.)
Let's see if I understand your more general statement: in this formulation of Newcomb's problem, it would be better if you picked box 1 even when it was empty. Therefore you should do something (anything) that makes you pick box 1 even if it is empty. Am I getting closer to what you think?
No, simulation is just one of the possibilities I listed way up-thread.