Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
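Spelled out as an ex-ante payoff comparison, here is a minimal sketch in Python (assuming Omega's prediction is perfect and your behaviour is fully determined by your policy; the function and policy names are just illustrative):

```python
# Toy payoff comparison, assuming a perfect predictor Omega and a
# deterministic agent whose behaviour is fixed by its policy.

def omega_outcome(policy):
    """Net dollars for an agent following `policy`, a function from the
    observation "asked" to True (pay the $100) or False (refuse)."""
    would_pay_if_asked = policy("asked")
    if would_pay_if_asked:
        # Omega predicts you would pay, so it awards $1000 and never asks.
        return 1000
    # Omega predicts you wouldn't pay, so it asks; you refuse and keep your money.
    return 0

def pay_up(observation):
    return True

def hold_out(observation):
    return False

print(omega_outcome(pay_up))    # 1000
print(omega_outcome(hold_out))  # 0
```

Committing to pay dominates ex ante, even though, with a perfect predictor, the branch where you're actually asked only ever happens to agents who refuse.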
This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
Here's a diagram:
This takes the decision out of Omega's hands and collapses Omega's agent-provability by letting it know its own decision. We already know that in ADT-style decision-making, all theories of consequences of actions other than the actual one are inconsistent (they are merely agent-consistent), and that adding an axiom specifying which action is actual won't disturb the consistency of the theory of consequences of the actual action. But there's no guarantee that Omega's decision procedure would behave nicely when faced with knowledge of inconsistency. For example, instead of concluding that you do agree to pay, it could just as well conclude that you don't, which would be a moral argument not to award you the $1000, and then Omega just goes crazy. One isn't meant to know one's own decisions; it's bad for sanity.
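To make the consistency point concrete, here is a toy brute-force sketch (this is not ADT itself, just an illustration with made-up names): once an axiom fixing the agent's actual action is in the theory, the theory of consequences of the actual action remains consistent, while every theory assuming a different action has no model at all.

```python
# Toy illustration: counterfactual "action = a" theories become inconsistent
# once the agent's actual action is an axiom. Names and setup are invented.

from itertools import product

ACTIONS = ["pay", "refuse"]
UTILITIES = [1000, 0]

def world_rule(action, utility):
    # Omega's arrangement: the paying type ends up with $1000, the refusing type with $0.
    return utility == (1000 if action == "pay" else 0)

def agent_axiom(action):
    # Axiom encoding the agent's (deterministic) decision: the actual action is "pay".
    return action == "pay"

def consistent(hypothesized_action):
    """Brute-force model search: is there any (action, utility) pair satisfying
    the world rule, the agent's own-action axiom, and 'action = hypothesized_action'?"""
    return any(
        world_rule(a, u) and agent_axiom(a) and a == hypothesized_action
        for a, u in product(ACTIONS, UTILITIES)
    )

for hypothesized in ACTIONS:
    status = "consistent" if consistent(hypothesized) else "inconsistent"
    print(f"theory + (action = {hypothesized!r}): {status}")

# Output:
# theory + (action = 'pay'): consistent
# theory + (action = 'refuse'): inconsistent
```

The sketch only checks bare consistency; it says nothing about what a reasoner like Omega does when forced to work inside the inconsistent branches, which is the worry above.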
Yes, you got it right. I love your use of the word "collapse" :-)
My argument seems to indicate that there's no easy way for UDT agents to solve such situations, because the problem statements really are incomplete. Do you see any way to fix that, e.g. in Parfit's Hitchhiker? Because this is quite disconcerting. Eliezer thought he'd solved that one.