Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
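To make the structure concrete, here's a minimal sketch of the payoffs under the straightforward reading of the problem, assuming Omega is a perfect predictor and that you act on whatever policy it predicted (the function and names are just illustrative):

```python
# Minimal sketch of the payoff structure, assuming a perfect predictor.
# The only input is your policy: would you pay the $100 if asked?

def payoff(pays_if_asked: bool) -> int:
    omega_predicts_payment = pays_if_asked  # perfect prediction
    if omega_predicts_payment:
        return 1000  # Omega awards $1000 and never asks
    else:
        return 0     # Omega asks for $100; you follow your policy and refuse

print(payoff(True))   # 1000 -- committed payers are never actually asked
print(payoff(False))  # 0    -- refusers get asked and refuse
```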
I have sympathy for the commenters who agreed to pay outright (Nesov and ata), but viewed purely logically, this problem is underdetermined, kinda like Transparent Newcomb (thx Manfred). This is a subtle point, so bear with me.
Let's assume you precommit to not pay if asked. Now take an Omega that strictly follows the rules of the problem, but also has one additional axiom: "I will award the player $1000 no matter what." This Omega can easily prove that the world in which it asks you to pay is logically inconsistent, and then it concludes that in that world you do agree to pay (because a falsity implies every statement, and this one happened to come first lexicographically or something). So Omega decides to award you $1000, its axiom system stays perfectly consistent, and all the conditions of the problem are fulfilled. I stress that the statement "You would pay if Omega asked you to" is logically true in the axiom system outlined, because its antecedent is false.
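The vacuous-truth step is ordinary propositional logic. Here's a minimal Lean sketch, with `asked` and `pays` standing in for the relevant statements (the encoding is mine, not part of the problem):

```lean
-- If Omega's axioms rule out the world where it asks (¬asked), then
-- "you would pay if asked" (asked → pays) is provable for free.
example (asked pays : Prop) (never_asks : ¬asked) : asked → pays :=
  fun h => absurd h never_asks
```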
In summary, the system of logical statements that specifies the problem does not completely determine what will happen, because we can consistently extend it with another axiom that makes Omega cooperate even if you defect. IOW, you can't go wrong by cooperating, but some correct Omegas will reward defectors as well. It's not clear to me if this problem can be "fixed".
ETA: it seems that several other decision problems have a similar flaw. In Counterfactual Mugging with a logical coin it makes some defectors win, as in our problem, and in Parfit's Hitchhiker it makes some cooperators lose.
The solution has nothing to do with hacking the counterfactual; the reflectively consistent (and winning) move is to pay the $100, as precommitting to do so nets you a guaranteed $1000 (unless Omega can be wrong). It is true that "The player will pay iff asked" implies "The player will not be asked" and therefore "The player will not pay", but this does not cause Omega to predict that the player won't pay when asked.
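That chain of implications does check out formally. Here's a small Lean sketch, taking as a hypothesis Omega's rule that anyone who would pay when asked gets awarded (and so isn't asked); the names and encoding are mine:

```lean
-- policy: "the player will pay iff asked"
-- rule:   "if the player would pay when asked, Omega awards and does not ask"
example (asked pays : Prop)
    (policy : pays ↔ asked)
    (rule : (asked → pays) → ¬asked) :
    ¬asked ∧ ¬pays :=
  have not_asked : ¬asked := rule policy.mpr
  ⟨not_asked, fun h => not_asked (policy.mp h)⟩
```

Deriving `¬pays` here doesn't contradict `asked → pays`, so Omega still predicts a payment in the counterfactual world where it asks.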