Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
Here's a diagram:
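In lieu of the picture, here's a minimal sketch of the same payoff structure in code. The prediction-accuracy parameter `p` is my own addition (the problem as stated implicitly has p = 1); everything else just transcribes the rules above.

```python
# Minimal sketch of the payoff structure, assuming Omega's prediction of the
# agent's policy is correct with probability p (p is an added assumption).

def expected_value(policy_pays: bool, p: float) -> float:
    """Expected payout for an agent whose fixed policy is to pay (or refuse)
    when asked, against an Omega whose prediction matches the agent's true
    policy with probability p."""
    if policy_pays:
        # Correct prediction -> awarded $1000; misprediction -> asked, agent pays $100.
        return p * 1000 + (1 - p) * -100
    # Correct prediction -> asked, agent refuses, $0; misprediction -> awarded $1000.
    return p * 0 + (1 - p) * 1000


for p in (1.0, 0.9, 0.5):
    print(f"p={p}: pay -> {expected_value(True, p):7.1f}, "
          f"refuse -> {expected_value(False, p):7.1f}")
```

With p = 1 the committed payer walks away with $1000 and the refuser with $0, even though in the branch where Omega actually asks, paying looks like a pure $100 loss. That tension is the point of the problem.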
...Huh? My version of Omega doesn't bother predicting the agent, so you gain nothing by crippling its prediction abilities :-)
ETA: maybe it makes sense to let Omega have a "trembling hand", so it doesn't always do what it resolved to do. In that case I don't know whether the problem stays or goes away. Properly interpreting "counterfactual evidence" seems to be tricky.
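To make the worry about counterfactual evidence concrete, here's a small Bayes'-rule sketch of what being asked tells you once Omega's hand trembles. The trembling probability `eps` and the prior `prior_pay` are assumptions of mine, not part of the setup.

```python
# Sketch (not from the post): posterior belief about Omega's prediction, given
# that it actually asked you for the $100, when Omega does the opposite of what
# it resolved with probability eps.

def p_predicted_pay_given_asked(prior_pay: float, eps: float) -> float:
    """Posterior probability that Omega predicted 'would pay', given that it
    actually asked for the $100 (plain Bayes' rule)."""
    asked_if_predicted_pay = eps         # asking contradicts a 'would pay' prediction
    asked_if_predicted_refuse = 1 - eps  # asking follows a 'would refuse' prediction
    numerator = prior_pay * asked_if_predicted_pay
    denominator = numerator + (1 - prior_pay) * asked_if_predicted_refuse
    return numerator / denominator


for eps in (0.0, 0.01, 0.1):
    print(f"eps={eps}: P(predicted pay | asked) = "
          f"{p_predicted_pay_given_asked(0.5, eps):.3f}")
```

With eps = 0 the request is conclusive evidence that Omega predicted you'd refuse; with any eps > 0 it is only probabilistic evidence, which is part of why it's unclear whether the problem stays or goes away.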
I would consider an Omega that didn't bother predicting even in that case to be 'broken'. Omega is supposed to be good at implementing the natural-language problem statement in good faith. Perhaps I would consider it one of Omega's many siblings, one that requires more formal shackles.