Rachel and Irene are walking home while discussing Newcomb's problem. Irene explains her position:
"Rational agents win. If I take both boxes, I end up with $1000. If I take one box, I end up with $1,000,000. It shouldn't matter why I'm making the decision; there's an obvious right answer here. If you walk away with less money yet claim you made the 'rational' decision, you don't seem to have a very good understanding of what it means to be rational".
Before Rachel can respond, Omega appears from around the corner. It sets two boxes on the ground. One is opaque, and the other is transparent. The transparent one clearly has $1000 inside. Omega says "I've been listening to your conversation and decided to put you to the test. These boxes each have fingerprint scanners that will only open for Irene. In 5 minutes, both boxes will incinerate their contents. The opaque box has $1,000,000 in it iff I predicted that Irene would not open the transparent box. Also, this is my last test of this sort, and I was programmed to self-destruct after I'm done." Omega proceeds to explode into tiny shards of metal.
Being in the sort of world where this kind of thing happens from time to time, Irene and Rachel don't think much of it. Omega has always been right in the past. (Although this is the first time it's self-destructed afterwards.) Irene promptly walks up to the opaque box and opens it, revealing $1,000,000, which she puts into her bag. She begins to walk away, when Rachel says:
"Hold on just a minute now. There's $1000 in that other box, which you can open. Omega can't take the $1,000,000 away from you now that you have it. You're just going to leave that $1000 there to burn?"
"Yup. I pre-committed to one-box on Newcomb's problem, since it results in me getting $1,000,000. The only alternative would have resulted in that box being empty, and me walking away with only $1000. I made the rational decision."
"You're perfectly capable of opening that second box. There's nothing physically preventing you from doing so. If I held a gun to your head and threatened to shoot you if you didn't open it, I think you might do it. If that's not enough, I could threaten to torture all of humanity for ten thousand years. I'm pretty sure at that point, you'd open it. So you aren't 'pre-committed' to anything. You're simply choosing not to open the box, and claiming that walking away $1000 poorer makes you the 'rational' one. Isn't that exactly what you told me that truly rational agents didn't do?"
"Good point", says Irene. She opens the second box, and goes home with $1,001,000. Why shouldn't she? Omega's dead.
In a Newcombless problem, where you can either take $1,000 or refuse it and receive $1,000,000, you could argue that the rational choice is to take the $1,000,000 and then go back for the $1,000 when people's backs are turned, but that would seem to go against the nature of the problem.
In much the same way, if Omega is a perfect predictor, there is no possible world where you receive the $1,000,000 and still end up going back for the second box. Either Rachel wouldn't have objected, or the argument would've taken more than 5 minutes and the boxes would have already incinerated their contents, or something.
I'm not sure how Omega factors the boxes' contents into this "delayed decision" version. Like, let's say Irene will, absent external forces, one-box, and Rachel, if Irene receives the $1,000,000, will threaten Irene enough that she takes the second box, and will do nothing if Irene receives nothing. (Also, they're automatons, and these are descriptions of their source code, so no other unstated factors can be taken into account.)
Omega simulates reality A, with the box full, and sees that Irene will two-box after Rachel's threat.
Omega simulates reality B, with the box empty, and sees that Irene will one-box.
Omega, the perfect predictor, cannot make a consistent prediction, and, like the unstoppable force meeting the immovable object, vanishes in a puff of logic.
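A minimal sketch of that inconsistency, treating Irene and Rachel as the automatons described above (the function names and boolean encoding are my own, not part of the problem statement):

```python
def irene_opens_transparent(box_full: bool, threatened: bool) -> bool:
    """Irene one-boxes absent external forces, but opens the $1,000 box if threatened."""
    return threatened

def rachel_threatens(box_full: bool) -> bool:
    """Rachel threatens Irene iff Irene walked away with the $1,000,000."""
    return box_full

def filling_is_consistent(box_full: bool) -> bool:
    """Omega fills the opaque box iff it predicts Irene won't open the transparent one.
    A filling is consistent iff that rule agrees with what actually happens."""
    opens = irene_opens_transparent(box_full, rachel_threatens(box_full))
    return box_full == (not opens)

print([full for full in (True, False) if filling_is_consistent(full)])
# -> []  Neither filling the box nor leaving it empty satisfies Omega's own rule.
```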
I think, if you want to set up this sort of scenario, the better formulation is to just say that Omega is 90% accurate. Then there's no (immediate) logical contradiction in receiving the $1,000,000 and going back for the second box, and the expected payoffs still come out the right way:
One-box: 0.9 × $1,000,000 + 0.1 × $0 = $900,000
Two-box: 0.9 × $1,000 + 0.1 × $1,001,000 = $101,000
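For what it's worth, a quick check of those numbers, assuming the 90% accuracy applies to whatever your final choice ends up being:

```python
p = 0.9  # assumed accuracy of Omega's prediction of your final choice

# One-boxing: the opaque box is full exactly when Omega predicts you correctly.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Two-boxing: you usually get only the $1,000, but 10% of the time Omega
# mispredicts, the opaque box is full anyway, and you walk off with both.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000

print(ev_one_box, ev_two_box)  # roughly 900,000 vs. 101,000
```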
I expect that this formulation runs afoul of what was discussed in this post around the Smoking Lesion problem, where repeated trials may let you change things you shouldn't be able to (in their example, if you choose to smoke every time, and the correlation between smoking and lesions is held fixed, then you can change the base rate of the lesions).
That is, I expect that if you ran repeated simulations to try things out, then strategies like "I will one-box, and go back for the second box iff the first is full" would make it so Omega is incapable of predicting at the proposed 90% rate.
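Here's a rough illustration of why, assuming Omega's prediction is specifically about whether you end up opening the transparent box (my reading of the setup):

```python
def agent_opens_transparent(opaque_full: bool) -> bool:
    """The 'one-box, then go back for the $1,000 iff the opaque box is full' strategy."""
    return opaque_full

for predicted_opens in (True, False):
    # Omega fills the opaque box iff it predicts the transparent box stays shut.
    opaque_full = not predicted_opens
    actual_opens = agent_opens_transparent(opaque_full)
    print(predicted_opens, actual_opens, predicted_opens == actual_opens)

# Both possible predictions come out wrong, so against this strategy Omega's
# accuracy is 0%, not 90% -- the stated accuracy can't be a fixed property of
# Omega, independent of the agent's strategy.
```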
I think all of these things might be related to the problem of embedded agency, and to people being confused (even if they don't put it in these terms) into thinking they have an atomic free will that can think about things without affecting or being affected by the world. I'm having trouble resolving this confusion myself, because I can't figure out what Omega's prediction looks like instead of vanishing in a puff of logic. It may just be that statements like "I will turn the lever on if, and only if, I expect the lever to be off at the end" are nonsense decision criteria. But the problem as stated doesn't seem like it should be impossible, so... I am confused.
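To spell out why that lever statement has no consistent reading (the boolean encoding is mine):

```python
for expect_off_at_end in (True, False):
    lever_on = expect_off_at_end  # "turn the lever on iff I expect it to be off at the end"
    lever_off_at_end = not lever_on
    print(expect_off_at_end, lever_off_at_end, expect_off_at_end == lever_off_at_end)

# Neither expectation matches the outcome, so acting on that criterion leaves no
# consistent expectation to hold -- the same shape as Omega's failed prediction above.
```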