Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
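To make the payoffs concrete, here's a minimal sketch (mine, not part of the problem statement) that simulates the game under the assumption that Omega predicts your true disposition with some accuracy p; the accuracy parameter is my addition, since the problem as stated implies a perfect predictor.

```python
import random

def play_once(pays_when_asked: bool, accuracy: float) -> float:
    """One round against Omega."""
    # Assumption: Omega's prediction matches your true disposition
    # with probability `accuracy` (my parameter, not the post's).
    predicted_payer = pays_when_asked if random.random() < accuracy else not pays_when_asked
    if predicted_payer:
        return 1000.0  # predicted payer: awarded $1000, never asked
    return -100.0 if pays_when_asked else 0.0  # predicted refuser: asked for $100

def expected_utility(pays_when_asked: bool, accuracy: float, trials: int = 100_000) -> float:
    return sum(play_once(pays_when_asked, accuracy) for _ in range(trials)) / trials

for p in (1.0, 0.9, 0.6, 0.5):
    print(f"accuracy {p:.2f}:  pay -> ${expected_utility(True, p):8.2f},"
          f"  refuse -> ${expected_utility(False, p):8.2f}")
```

With a perfect predictor a committed payer nets $1000 and a committed refuser nets $0; in general, under this model, paying wins whenever 1000p - 100(1 - p) > 1000(1 - p), i.e. p > 11/21 ≈ 0.52.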
This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
Here's a diagram:
This also seems isomorphic to the absent-minded driver problem, with different utilities (and without mixed strategies*). Specifically, if you consider the abstract idealized decision theory you implement to be "you", you make the same decision in two places: once in Omega's brain while he predicts you, and again if he asks you to pay up. Therefore the graph can be transformed from this
into this
which looks awfully like the absent-minded driver. Interesting.
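To spell out what the merged graph buys you, here's a sketch of the absent-minded-driver-style calculation, under my own modeling assumption that a mixed strategy paying with probability q is run independently in Omega's simulation and (if asked) in reality:

```python
# Merged-node view: one policy executed at two indistinguishable points.
# Assumption (mine): the simulated run and the real run flip independent
# coins with the same bias q = Pr(pay).
def merged_node_eu(q: float) -> float:
    p_awarded = q                    # simulation paid -> predicted payer -> $1000
    p_asked_and_paid = (1 - q) * q   # simulation refused, but the real you pays
    return 1000 * p_awarded - 100 * p_asked_and_paid

for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"q = {q:.2f}:  EU = ${merged_node_eu(q):7.2f}")
```

Under this model EU(q) = 1000q - 100q(1 - q), which is strictly increasing on [0, 1], so the planning-optimal policy is q = 1: always pay. (The canonical absent-minded driver gets its interior optimum from its particular utilities, not from the graph shape.)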
Additionally, modifying the utilities involved ($1000 -> death; swap -$100 and $0) gives Parfit's Hitchhiker.
Looks like this isn't really a new decision theory problem at all.
*ETA: Of course mixed strategies are allowed, if Omega is allowed to be an imperfect predictor. Duh. Clearly I wasn't paying proper attention...
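One way to see this concretely (my own model, not the post's): suppose you would pay with probability q and Omega's prediction matches that response with accuracy p, so prediction and behavior can diverge whenever p < 1.

```python
# Assumption (mine): you would pay with probability q; Omega's prediction
# matches that counterfactual response with probability p.
def mixed_strategy_eu(q: float, p: float) -> float:
    pr_predicted_payer = q * p + (1 - q) * (1 - p)
    pr_asked_and_pays = q * (1 - p)  # a payer mispredicted as a refuser
    return 1000 * pr_predicted_payer - 100 * pr_asked_and_pays

for p in (1.0, 0.8, 0.55, 0.5):
    best_q = max((k / 100 for k in range(101)), key=lambda q: mixed_strategy_eu(q, p))
    print(f"p = {p:.2f}:  best q = {best_q:.2f},  EU = ${mixed_strategy_eu(best_q, p):.2f}")
```

Since EU is linear in q under this model, some pure strategy is always optimal: mixing becomes possible once Omega is imperfect, but here it never strictly beats the best pure commitment.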
I contend it's also isomorphic to the very real-world problems of hazing, abuse cycles, and akrasia.
The common dynamic across all these problems: "You could have been in a winning or a losing branch, but you've learned that you're in a losing branch, and your decision to scrape out a little more utility within that branch takes away more utility from (symmetric) versions of yourself in (potentially) winning branches."