I guess I'm not totally clear on how you're setting up the problem, then - I thought it was the same as in Eliezer's post.
Consider this extreme version though: let's call it "perverse Newcomb's problem with transparent boxes."
The way it works is that the boxes are transparent, so you can see whether the million dollars is there or not (and, as usual, you can see the $1000 in the other box). And the reason it's perverse is that Omega will only put the million dollars there if you would not take the box with the thousand dollars in it no matter what. Which means that if the million dollars for some reason isn't there, Omega expects you to take the empty box anyway. And let's suppose that Omega has a one-in-a-trillion error rate, so there's a chance that you'll see the empty box even if you were honestly prepared to ignore the thousand dollars.
Note that this problem is different from vanilla Newcomb's problem in a very important way: the outcome doesn't just depend on what action you eventually take, it also depends on what actions you would take in other circumstances. It's like the Unexpected Hanging paradox: a prisoner who knows your strategy won't be surprised by which day you hang them, but rather by how many other days you could have hanged them.
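To make that concrete, here's a quick toy model (mine, not Eliezer's - and I'm assuming, since the setup above doesn't say, that Omega's one-in-a-trillion error simply flips its fill decision):

```python
EPS = 1e-12  # Omega's error rate

# A "policy" maps what you see in the big box to an action.
ALWAYS_ONE_BOX = {"million": "one-box", "empty": "one-box"}
GRAB_WHEN_EMPTY = {"million": "one-box", "empty": "two-box"}

def omega_fills(policy):
    """Omega fills the big box only if you'd leave the $1000 no matter what."""
    return all(action == "one-box" for action in policy.values())

def expected_payoff(policy):
    # With probability EPS, Omega's fill decision comes out wrong.
    p_million = (1 - EPS) if omega_fills(policy) else EPS
    ev = 0.0
    for state, p in (("million", p_million), ("empty", 1 - p_million)):
        big = 1_000_000 if state == "million" else 0
        small = 1_000 if policy[state] == "two-box" else 0
        ev += p * (big + small)
    return ev

print(expected_payoff(ALWAYS_ONE_BOX))   # ~ $1,000,000
print(expected_payoff(GRAB_WHEN_EMPTY))  # ~ $1,000
```

Note that `omega_fills` looks at the whole policy, not the action you end up taking: the "grab the $1000 when the box is empty" policy almost always lands you in the empty-box branch.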
You agree to play the perverse Newcomb's problem with transparent boxes (PNPTB), and you get just one shot. Omega gives some disclaimer (which I would argue is pointless, but may make you feel better) like "This is considered an artificially independent experiment. Your algorithm for solving this problem will not be used in my simulations of your algorithm for my various other problems. In other words, you are allowed to two-box here but one-box Newcomb's problem, or vice versa." Though of course Omega will still predict you correctly.
So you walk into the next room and....
a) see the boxes, with the million dollars in one box and the thousand dollars in the other. Do you one-box or two-box?
b) see the boxes, with one box empty and a thousand dollars in the other box. Do you take the thousand dollars or not?
I'd guess you'd avoid the thousand dollars in both scenarios. But suppose you walk into the room, see scenario b, and are a bit more conflicted than normal.
Omega gave that nice disclaimer about how no counterfactual selves would be impacted by this experiment, after all, so you really only get one shot to make some money. Your options: either get $1000, or get nothing. So you take the $1000 - who can it hurt, right?
And since Omega predicted your actions correctly, Omega predicted that you would take the $1000, which is why you never saw the million.
Right, which would be silly, so I wouldn't do that.
Oh, I see what's confusing me. The "Interrupted" version of the classic Newcomb's Problem is this: replace Omega with a DumbBot that doesn't even try to predict your actions - it just gives you outcomes at random. So you can't affect your counterfactual selves, and shouldn't even bother trying - just two-box.
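For contrast, a minimal sketch of the DumbBot version (the fill probability here is an arbitrary stand-in - the point is only that it doesn't depend on your policy):

```python
import random

def dumbbot_round(action, p_fill=0.5):
    """One round against DumbBot: the box contents ignore your policy."""
    big = 1_000_000 if random.random() < p_fill else 0
    return big + (1_000 if action == "two-box" else 0)

rounds = 100_000
for action in ("one-box", "two-box"):
    mean = sum(dumbbot_round(action) for _ in range(rounds)) / rounds
    print(action, round(mean))  # two-boxing wins by a flat ~$1,000
```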
This problem - which I should rename to the Interrupted Ultimate Newcomb's Problem - does require Omega. It would look like this: from Omega's end, Omega simulates a jillion people, as you put it, and finds...
While figuring out my error in my solution to the Ultimate Newcomb's Problem, I ran across this (distinct) reformulation that helped me distinguish between what I was doing and what the problem was actually asking.
... but that being said, I'm not sure if my answer to the reformulation is correct either.
The question, cleaned up for Discussion, looks like this:
You approach the boxes and lottery, which are exactly as in the UNP. Before reaching them, you come to a sign with a flashing red light. The sign reads: "INDEPENDENT SCENARIO BEGIN."
Omega, who has predicted that you will be confused, shows up to explain: "This is considered an artificially independent experiment. Your algorithm for solving this problem will not be used in my simulations of your algorithm for my various other problems. In other words, you are allowed to two-box here but one-box Newcomb's problem, or vice versa."
This is motivated by the realization that I've been making the same mistake as in the original Newcomb's Problem, though this justification does not (I believe) apply to the original. The mistake is simply this: I assumed that I appear in medias res. When solving the UNP, it is (or seems to be) important to remember that you may be in some very rare edge case of the main problem, and that you are choosing your algorithm for the problem as a whole.
But if that's not true - if you're allowed to appear in the middle of the problem, and no counterfactual-yous are at risk - it sure seems like two-boxing is justified, since the alternative amounts to, as khafra put it, "trying to ambiently control basic arithmetic".
(Speaking of which, is there a write-up of ambient decision theory anywhere? For that matter, is there any compilation of decision theories?)
EDIT: (Yes to the first, though not under that name: Controlling Constant Programs.)