This is a variant built on Gary Drescher's xor problem for timeless decision theory.
You get an envelope from your good friend Alpha, and are about to open it, when Omega appears in a puff of logic.
Being completely trustworthy as usual (don't you just hate that?), he explains that Alpha flipped a coin (or looked at the parity of a sufficiently high digit of pi) to decide whether to put £1,000,000 in your envelope or nothing.
He, Omega, knows what Alpha decided, has also predicted your own actions, and you know these facts. He hands you a £10 note and says:
"(I predicted that you will refuse this £10) if and only if (there is £1,000,000 in Alpha's envelope)."
What to do?
EDIT: to clarify, Alpha will send you the envelope anyway, and Omega may choose to appear or not appear as he and his logic see fit. Nor is Omega stating a mathematical theorem, i.e. that the truth of the second clause can be deduced from the first: he is asserting a material biconditional (XNOR), but 'if and only if' seemed the more understandable formulation. You get to keep the envelope whatever happens, in case that wasn't clear.
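The payoff structure can be sketched as follows (a minimal sketch, not part of the original post; the function and variable names are my own). Since Omega is trustworthy, his XNOR statement holds in every world where he appears, which leaves only two consistent scenarios:

```python
# Sketch of the payoffs, assuming Omega's biconditional holds:
# (you refuse the £10) XNOR (the envelope holds £1,000,000).
def payoff(you_refuse: bool, envelope_has_million: bool) -> int:
    """Total winnings in pounds for one scenario."""
    envelope = 1_000_000 if envelope_has_million else 0
    note = 0 if you_refuse else 10
    return envelope + note

# The only scenarios consistent with Omega's statement:
for refuse, million in [(True, True), (False, False)]:
    print(refuse, million, payoff(refuse, million))
# Refusing in a consistent world nets £1,000,000; taking the note nets £10.
```

This just tabulates the stakes; the puzzle is whether your choice can be said to bear on which consistent world you are in.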
Heh. I first read "1e6" above as a function determining whether the user-agent is internet explorer 6.
What is "CONTRADICTION" supposed to do in this "program"?
This program must be embedded in a larger one, since the original problem description didn't say what Omega would do if he couldn't truthfully make the prediction he did. Call that larger program U2(S). The only thing we are told about U2 is that it only calls U if it can do so in a way that guarantees U won't reach a contradiction. Suppose, for example, that if Omega's prediction couldn't be made truthfully then you wouldn't get any money at all. This corresponds to the world program:
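Under that supposition, the world program might look like this (a hedged sketch: the names U, U2, and S come from the comment above, everything else is my assumption; S() returns True iff the agent refuses the £10):

```python
def U(S, envelope_has_million):
    # Inner world: Omega has appeared, so his XNOR statement is true here.
    # Reaching the assert's failure branch would be the "CONTRADICTION"
    # asked about above: a world Omega's guarantee rules out.
    refuse = S()
    assert refuse == envelope_has_million, "CONTRADICTION"
    return (1_000_000 if envelope_has_million else 0) + (0 if refuse else 10)

def U2(S, envelope_has_million):
    # Outer world: call U only when Omega can make his statement truthfully.
    predicted_refusal = S()  # Omega's (assumed perfect) prediction
    if predicted_refusal == envelope_has_million:
        return U(S, envelope_has_million)
    return 0  # supposition above: no money if Omega cannot appear truthfully

refuser = lambda: True
print(U2(refuser, True))   # refusal strategy, full envelope
print(U2(refuser, False))  # refusal strategy, empty envelope
```

On this reading, CONTRADICTION marks the branch U2 is obliged never to reach, which is what makes U well-defined whenever it is called.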