Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
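To see why the decision-type "pay" comes out ahead, here is a minimal sketch of the payoff structure, assuming Omega is a perfect predictor and the Player is committed in advance to a fixed policy; the function name `omega_outcome` and the two-policy framing are mine, purely for illustration.

```python
# A minimal sketch of the payoff structure, assuming Omega's prediction is
# perfectly accurate and the Player commits to a policy in advance.
# The $1000 award and $100 payment come from the problem statement;
# everything else is illustrative.

def omega_outcome(player_pays_if_asked: bool) -> int:
    """Return the Player's net payoff given their policy, under a perfect predictor."""
    if player_pays_if_asked:
        # Omega predicts the Player would pay, so it awards $1000 and never asks.
        return 1000
    else:
        # Omega predicts refusal, asks for $100, and the Player refuses.
        return 0

for policy in (True, False):
    label = "pay if asked" if policy else "refuse if asked"
    print(f"{label}: net payoff = ${omega_outcome(policy)}")
# pay if asked: net payoff = $1000
# refuse if asked: net payoff = $0
```

The tension in the question is exactly this asymmetry: once you have actually been asked, paying costs you $100, yet the policy of paying is the one that earns the $1000.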
This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
Here's a diagram: [diagram not reproduced here]
Leaving open the question of whether Omega must work by simulating the Player, I don't understand why you say this is a 'useless way out'. So for now let's suppose Omega does simulate the Player.
Why would Omega choose to, or need to, ensure that in its simulation, the data received by the Player equals Omega's actual output?
There must be an answer to the question of what the Player would do if asked, by a being it believes to be Omega, to pay $100, even if (as cousin_it might argue) the answer is "go insane after deducing a contradiction" and then perhaps fail to halt. To get around the issue of not halting, we can stipulate either that a Player which doesn't halt within a given length of time refuses to pay by default, or that Omega is an oracle machine which can determine whether the Player halts (and interprets not halting as refusal to pay).
Having done the calculation, Omega acts accordingly. None of this requires Omega to simulate itself.
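To make the first of those stipulations concrete, here is a minimal sketch of a simulate-with-timeout version of Omega, assuming Omega works by literally running the Player's decision procedure; `player_decision`, `omega_predicts_payment`, and the one-second time limit are placeholders of mine, not anything specified above.

```python
# A minimal sketch of the timeout work-around: Omega runs the Player's decision
# procedure against the question "would you pay $100 if asked?" and treats
# failure to answer within a time limit as refusal.
import multiprocessing
import queue as queue_module

def player_decision(channel) -> None:
    """Stand-in for the Player's decision procedure when asked to pay $100."""
    channel.put(True)  # this particular Player pays; a non-halting Player would never answer

def omega_predicts_payment(time_limit: float = 1.0) -> bool:
    """Simulate asking the Player; no answer within time_limit counts as refusal."""
    channel = multiprocessing.Queue()
    proc = multiprocessing.Process(target=player_decision, args=(channel,))
    proc.start()
    proc.join(time_limit)
    if proc.is_alive():          # the simulated Player didn't halt within the time limit
        proc.terminate()
        return False             # not halting is interpreted as refusal to pay
    try:
        return channel.get(timeout=0.5)
    except queue_module.Empty:
        return False

if __name__ == "__main__":
    if omega_predicts_payment():
        print("Omega awards the Player $1000.")      # predicted to pay if asked
    else:
        print("Omega asks the Player to pay $100.")  # predicted to refuse
```

Note that nothing in this sketch requires Omega to include a model of itself in the simulation; it only has to pose the question and act on the answer.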
It's "useless" in part because, as you note, it assumes Omega works by simulating the player. But mostly it's just that it subverts the whole point of the problem; Omega is supposed to have your complete trust in its infallibility. To say "maybe it's not real" goes directly against that. The situation in which Omega simulates itself is merely a way of restoring the original intent of infallibility.
This problem is tricky; since the decision-type "pay" is associated with higher rewards, you should pay, but if you are a person...