This is an exploration of a way of looking at Newcomb’s Problem that helped me understand it. I hope somebody else finds it useful. I may add discussions of other game theory problems in this format if anybody wants them.
Consider Newcomb’s Problem: Omega offers you two boxes, one transparent and containing $1,000, the other opaque and containing either $1,000,000 or nothing. Your options are to take both boxes, or to take only the second one; but Omega has put money in the second box only if it predicted that you would take just that one box. A person in favor of one-boxing says, “I’d rather have a million than a thousand.” A two-boxer says, “Whether or not box B contains money, I’ll get $1,000 more if I take box A as well. It’s either $1,001,000 vs. $1,000,000, or $1,000 vs. nothing.”

To reach these different decisions, the agents are working from two different ways of visualising the payoff matrix: the two-boxer sees four possible outcomes, while the one-boxer sees only two, the other two having very low probability.
The two-boxer’s payoff matrix looks like this:

| Decision | Box B: money | Box B: no money |
|----------|--------------|-----------------|
| 1-box    | $1,000,000   | $0              |
| 2-box    | $1,001,000   | $1,000          |
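To make the dominance reasoning behind this matrix concrete, here is a minimal Python sketch (the payoff figures come from the problem statement above; everything else, including the function name, is illustrative scaffolding):

```python
# A minimal sketch of the two-boxer's dominance reasoning.
BOX_A = 1_000        # transparent box, always contains $1,000
BOX_B = 1_000_000    # opaque box, full only if Omega predicted one-boxing

def payoff(decision, box_b_full):
    """Payoff for a decision, holding the contents of box B fixed."""
    b = BOX_B if box_b_full else 0
    return b + (BOX_A if decision == "2-box" else 0)

# Whatever is already in box B, two-boxing pays exactly $1,000 more --
# this is the dominance argument the two-boxer relies on.
for box_b_full in (True, False):
    print(box_b_full, payoff("1-box", box_b_full), payoff("2-box", box_b_full))
```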
The outcomes $0 and $1,001,000 both require Omega to have made a wrong prediction. But as the problem is formulated, Omega is superintelligent and has been right 100 out of 100 times so far. So the one-boxer, taking this into account, describes the payoff matrix like this:
| Decision | Box B: money | Box B: no money |
|----------|--------------|-----------------|
| 1-box    | $1,000,000   | not possible    |
| 2-box    | not possible | $1,000          |
If Omega really is a perfect (or nearly perfect) predictor, the only possible (or not hugely unlikely) outcomes are $1,000 for two-boxing and $1,000,000 for one-boxing, and considering the other outcomes is an epistemic failure.
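One way to make the one-boxer’s point numerical is to model Omega as a predictor with some accuracy p and compare expected values. The sketch below is not part of the original problem statement; the 0.99 accuracy figure is just an assumption standing in for “right 100 out of 100 times so far”:

```python
# A sketch of the one-boxer's expected-value calculation, treating Omega
# as a predictor with accuracy p (0.99 is an illustrative assumption).
def expected_value(decision, p=0.99):
    if decision == "1-box":
        # Box B is full with probability p, empty with probability 1 - p.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # If you two-box, Omega most likely predicted it, so box B is usually empty.
        return p * 1_000 + (1 - p) * 1_001_000

for d in ("1-box", "2-box"):
    print(d, expected_value(d))   # ~990,000 vs. ~11,000 at p = 0.99
```

With these payoffs, one-boxing has the higher expected value for any accuracy above roughly 0.5005, so the dominance argument only wins if Omega is barely better than a coin flip.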
No, simulation is just one of the possibilities I listed way up-thread:
But it's not my favored conclusion, because it leads to doing silly things like holding off on deciding so that you are simulated for a longer time and exist longer, as you suggested. My favored one is the last one: that you don't exist at all, not even inside a simulation or a Tegmark IV type of thing. After one-boxing you'd (hypothetically) switch to the Tegmark IV version, of course (or to Omega just being wrong; nothing differentiates those).
I don't disagree with anything in particular here, but you sound as if you would draw conclusions from it that I wouldn't draw.
Well, the possibilities listed up-thread other than "you don't exist" make the problem no longer exactly Newcomb's problem, unless you two-box. So I like your favorite, although I'm probably thinking of a stricter version of "don't exist" that makes it more nonsensical to talk about "what would you (who don't exist) do?"
E.g. if carrots didn't exist, what would the carrots that don't exist taste like? :D