Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
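To see the policy-level payoffs, it helps to tabulate the two possible dispositions under the assumption that Omega is a perfect predictor (and that refusing when asked costs you nothing). The sketch below is only illustrative; the function name and constants are mine, not part of the problem statement:

```python
AWARD = 1000   # paid out if Omega predicts you would pay when asked
DEMAND = 100   # what Omega asks for if he predicts you wouldn't pay

def payoff(would_pay_if_asked: bool) -> int:
    """Net winnings for a given policy, assuming a perfect predictor:
    Omega's prediction always matches the policy."""
    if would_pay_if_asked:
        # Omega predicts you'd pay, so he awards $1000 and never asks.
        return AWARD
    else:
        # Omega predicts you wouldn't pay, so he asks; you refuse and net $0.
        return 0

for policy in (True, False):
    print(f"would_pay_if_asked={policy}: net = ${payoff(policy)}")
# would_pay_if_asked=True: net = $1000
# would_pay_if_asked=False: net = $0
```

On the perfect-predictor assumption, the agent who would pay is never actually asked, which is part of what makes the question in this branch awkward.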
This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
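To make the claimed correspondence concrete, here is one way to line the two problems up, assuming the usual Transparent Newcomb payoffs ($1,000,000 in box B, $1,000 in box A). This gloss is my own and only a rough sketch of the isomorphism:

```python
# Left: this problem. Right: the empty-box-B branch of Transparent Newcomb.
correspondence = {
    "Omega awards you $1000":      "Omega fills box B (you never see it empty)",
    "Omega asks you for $100":     "you see box B empty",
    "paying the $100 when asked":  "one-boxing even though B is empty",
    "refusing to pay when asked":  "two-boxing when B is empty",
}
for here, there in correspondence.items():
    print(f"{here:30} <-> {there}")
```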
Here's a diagram:

[diagram]
Yes, you got it right. I love your use of the word "collapse" :-)
My argument seems to indicate that there's no easy way for UDT agents to solve such situations, because the problem statements really are incomplete. Do you see any way to fix that, e.g. in Parfit's Hitchhiker? This is quite disconcerting: Eliezer thought he'd solved that one.
I don't understand your argument. You've just broken Omega for some reason (by letting it know something true which it's not meant to know at that point), and as a result it fails in its role in the thought experiment. Don't break Omega.