Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
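To make the payoff structure concrete, here's a minimal Python sketch. The framing is mine, not part of the problem: it assumes Omega predicts perfectly by directly evaluating your disposition to pay.

```python
def omega(would_pay_if_asked):
    """Net payoff, assuming Omega's prediction of your disposition is perfect."""
    if would_pay_if_asked:
        return 1000  # predicted payer: awarded $1000, never actually asked
    else:
        return 0     # predicted refuser: asked for $100, refuses, pays nothing
    # A payer who was actually asked would end up at -100, but under a
    # perfect predictor that branch never occurs.

print(omega(True))   # 1000 -- the paying type comes out $1000 ahead
print(omega(False))  # 0    -- the refusing type gets nothing
```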
This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
It's "useless" in part because, as you note, it assumes Omega works by simulating the player. But mostly it's just that it subverts the whole point of the problem; Omega is supposed to have your complete trust in its infallibility. To say "maybe it's not real" goes directly against that. The situation in which Omega simulates itself is merely a way of restoring the original intent of infallibility.
This problem is tricky. Since the decision-type "pay" is associated with higher rewards, you should pay; but if you are a person Omega actually asks to pay, then as a simple matter of fact you will not pay. So the wording of the question has to be careful, because there is a distinction between counterfactual and reality: some of the people Omega counterfactually asks would pay, but none of the people Omega really asks will pay. What might be seen as mere grammatical structure therefore has a huge impact on the answer: "If asked, would you pay?" versus "Given that Omega has asked you, will you pay?"
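A toy population makes the counterfactual/actual split vivid. This is a hypothetical setup of my own, again assuming a perfectly predicting Omega:

```python
# Two dispositions; would_pay records the answer to the counterfactual
# question "if asked, would you pay?"
agents = {"payer": True, "refuser": False}

# Counterfactually, Omega "asks" everyone via its prediction.
counterfactual_payers = [name for name, would_pay in agents.items() if would_pay]

# Really, Omega only asks those it predicts will refuse.
really_asked = [name for name, would_pay in agents.items() if not would_pay]
actual_payers = [name for name in really_asked if agents[name]]

print(counterfactual_payers)  # ['payer'] -- some counterfactual askees pay
print(actual_payers)          # []        -- no real askee ever pays
```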
Or, thinking about it more precisely: it observes that however Omega actually works, it will be equivalent to Omega simulating the player. The simulation framing just gives our intuitions something to grasp a little more easily.
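One way to cash out that equivalence, as a sketch under the assumption that "prediction" just means running a copy of the player's decision procedure (the class and function names here are illustrative):

```python
import copy

def omega_predicts(player):
    # Omega's "prediction" is literally a run of a copy of the player,
    # so it cannot diverge from what the player actually does.
    simulated = copy.deepcopy(player)
    return simulated.decide(asked=True)

class Player:
    def __init__(self, disposition):
        self.disposition = disposition  # stands in for any decision procedure

    def decide(self, asked):
        return self.disposition

for disposition in (True, False):
    p = Player(disposition)
    # Infallible by construction: the prediction and the actual decision
    # are two runs of the same procedure.
    assert omega_predicts(p) == p.decide(asked=True)
```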