Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
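For concreteness, here's a minimal payoff sketch (my own construction, not from the post), under the assumption that Omega's prediction is perfectly accurate, so the agent's disposition alone fixes the outcome:

```python
# Minimal sketch (assumes a perfectly accurate Omega, per the setup above).
# The only thing that matters is what the agent WOULD do if asked to pay.

def payoff(would_pay_if_asked: bool) -> int:
    """Net payoff, given the agent's disposition toward a request to pay."""
    if would_pay_if_asked:
        return 1000  # Omega predicts payment, so he awards $1000 and never asks
    return 0         # Omega predicts refusal, asks for $100, and the agent refuses

print(payoff(True))   # 1000 -- the committed payer is never actually asked
print(payoff(False))  # 0    -- the refuser is asked but keeps his $100
```

The committed payer nets $1000 without ever being asked; the catch is that actually *being* asked means Omega predicted refusal, so paying then would contradict the prediction.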
This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
Here's a diagram: [diagram omitted]
My implementation of Omega isn't broken and doesn't fail. Could you show precisely where it fails? As far as I can see, all the conditions in Bongo's post still hold for it; therefore all possible logical implications of Bongo's post should hold for it too, and so should all possible "solutions".
It doesn't implement the counterfactual whereby, depending on which response the agent assumes it gives on observing a request to pay, it can agent-consistently conclude that Omega either will or won't award the $1000. Even if we don't require that Omega be a decision-theoretic agent with known architecture, the decision problem must make the intended sense.
In more detail: the agent's decision is a strategy that specifies, for each possible observation (we have two: Omega rewards it, or Omega asks for money), a response. If Omega gives a reward, there is no response to choose; the only substantive choice is the response to the request for money.
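To make that framing concrete, here is a minimal sketch (mine, not from the thread) that represents a strategy as a map from observations to responses and lets an accurate Omega act on the agent's counterfactual response to "ask":

```python
# Sketch (assumptions mine): a strategy maps each observation to a response.
# Observations: "reward" (Omega awards $1000) or "ask" (Omega asks for $100).
# The only substantive choice is the response to "ask": "pay" or "refuse".

STRATEGIES = {
    "payer":   {"reward": None, "ask": "pay"},     # would pay if asked
    "refuser": {"reward": None, "ask": "refuse"},  # would not pay if asked
}

def omega_action(strategy: dict) -> str:
    """An accurate Omega acts on the agent's counterfactual response to 'ask'."""
    return "reward" if strategy["ask"] == "pay" else "ask"

for name, strategy in STRATEGIES.items():
    obs = omega_action(strategy)       # the branch the agent actually observes
    response = strategy[obs]
    net = 1000 if obs == "reward" else (-100 if response == "pay" else 0)
    print(f"{name}: observes '{obs}', responds {response!r}, net ${net}")
# payer:   observes 'reward', responds None, net $1000
# refuser: observes 'ask', responds 'refuse', net $0
```

Note that the "payer" never actually observes "ask"; its response to that observation matters only counterfactually, via Omega's prediction. That is exactly the structure the objection above says an implementation of Omega has to respect.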