Take the £10, and don't bother opening the envelope. You are not (acausally) controlling whether the £1'000'000 is in the envelope, but you are controlling whether to take the £10, so you'll take the £10 (since you are money-maximizing), and if Omega is correct, the envelope is going to be empty.
The agents that refuse the £10 in this situation will only be visited by Omega when the envelope contains the £1'000'000, while the money-maximizing agents will only be visited by Omega when the envelope is empty. By your decision, you don't control whether the envelope contains money, but you do control whether Omega appears (since the statement asserted by Omega is about you). Thus, by deciding to take the money in this situation, you add an expected £5 (or whatever the figure is, depending on how often Omega appears) to your balance, by acausally summoning Omega (see the sketch at the end of this comment).
By refusing the £10, you maximize the amount of money that the agents who see Omega get, by moving Omega around. It's similar to trying to become a lottery winner by selling to existing lottery winners the same dietary supplement you take, since this makes the takers of this dietary supplement more likely to be lottery winners.
I give a formalization of this solution in another comment.
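A quick sketch of where that £5 figure might come from, assuming a fair coin and that Omega visits a money-maximizer exactly in the worlds where the envelope is empty:

    # Sketch only: fair coin; Omega visits the money-maximizer just when the envelope is empty.
    p_empty = 0.5             # probability Alpha's coin left the envelope empty
    note = 10                 # the £10 Omega hands over, which the money-maximizer takes
    print(p_empty * note)     # 5.0 -- the expected £5 gained by "summoning" Omega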
So, starting with Alpha's coin flip, here are the only possible paths:
This is a really cool puzzle. By accepting the £10, you're in the conditional "Alpha never sent you the money", but by refusing you're in the conditional "Alpha sent you the money". However, that choice doesn't actually affect Alpha sending or not sending you the money. This is unlike Newcomb's problem, where you can truly choose, acausally, what the opaque box will contain.
I assume the problem is to be interpreted as Omega saying, "Either (1) (I have predicted you will refuse the $10, and there is $1,000,000 in the envelope) xor (2) (I have predicted you will take the $10, and there is $0 in the envelope)", rather than asserting some sort of entanglement above and beyond this.
If so, I take the $10 and formulate the counterfactual, "If I were the sort of person who rejected the $10, Omega would have told me something else to begin with, like 'if you refuse the $10 then the envelope will be empty', but the digit of pi would have been the same".
As previously noted, though, I can't quite say how to compute this formally.
I think this problem would be clearer with a smaller ratio between the two payments. As it is, the risk that you have misunderstood the problem or made an unwarranted assumption dominates, and you should not take the £10, just to be safe against making a big mistake, even if you think that's a losing move.
This is a formalization of the decision procedure corresponding to the informal solution I gave in another comment (obviously, it includes a lot of detail unnecessary for this problem, but for the purpose of demonstrating the method, the details are not omitted):
Programs for the participants:
P - player
O - Omega deciding whether to make the offer
A - Alpha
Notation: [[X]] is the output of program X, and X(Y) is a program that is the composition of X and Y, where X expects the program Y as its argument. Thus, [[X(Y)]] is the output of X given argument Y, and X([[Y]]) is the output ...
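One minimal way to render this notation in code (the names and helpers below are illustrative, not part of the original comment) is to treat a "program" as a zero-argument callable:

    # Illustrative sketch of the notation; run() and compose() are assumed helpers.
    def run(X):                  # [[X]] : the output of program X
        return X()

    def compose(X, Y):           # X(Y) : the program formed by giving X the program Y as argument
        return lambda: X(Y)

    def Y():                     # a toy program
        return 41

    def X(prog):                 # X expects a program as its argument
        return run(prog) + 1

    print(run(compose(X, Y)))    # [[X(Y)]] -> 42 : the output of X given the program Y
    # X([[Y]]) would instead hand X the *output* of Y (the value 41) rather than the
    # program Y itself, which is the distinction the notation is drawing.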
Refuse the 10 pounds.
The assumption that you'll move Omega around or otherwise alter Omega's pattern of behavior seems speculative. Maybe Omega's going fishing for a few hundred years. Maybe she's feeling frisky and generous. Maybe I got the problem wrong.
It appears there's some chance that I'm improving my chance at a million pounds by some amount. Those "somes" may not be high, but my problem-uncertainty makes it an easy call. I see no reason to expect a lower or higher number of Omega appearances based on my decision. To the extent this migh...
What Eliezer and Vladimir said (though if anyone's counting, I decided this before looking at the comments). My choice controls whether or not Omega made its prediction, not the contents of the envelope. (How would one express this using a world-program?)
I humbly request that future thought experiments not be done in £, since there is no "£" key on my keyboard.
I'd refuse the £10 unless I was extremely confident (>99.999%) that, if I took the £10, I couldn't actually exist because the scenario as given was inconsistent, and that the real me would end up with £10 more if this was true.
(i.e. the offer was independent of my decision but the prediction was not; Omega would take exactly the same action regardless of whether the envelope was filled; the prediction would be false if I took the £10 in either case; and I would take the £10 in either case)
I don't understand the theory, but the one-boxing solution seems obvious: given that Omega is correct, if I am such that I would refuse the £10, I would not be offered the choice unless the £1 000 000 is in the envelope, therefore I should refuse the £10 ...
... unless I believe Omega is over u(£1 000 000)/u(£10) times more likely to offer the deal to agents who take the £10 than to agents who refuse. In that case, being willing to take the £10 is expected to pay off.
Edit (after timtyler's reply): Vladimir Nesov's analysis has caused me to reconsider - I would now take the £10.
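To put a number on the threshold in the comment above: with utility linear in money, u(£1 000 000)/u(£10) = 100 000. A rough sketch of the comparison (the visit rate below is a placeholder; only the ratio matters):

    # Placeholder visit rate; k is how many times more often Omega offers the deal to takers.
    u_envelope, u_note = 1_000_000, 10
    p_visit_refuser = 1e-6

    for k in (50_000, 100_000, 200_000):
        ev_refuse = p_visit_refuser * u_envelope     # visited as a refuser -> envelope is full
        ev_take = (p_visit_refuser * k) * u_note     # visited as a taker -> empty envelope, £10
        print(k, "take" if ev_take > ev_refuse else "refuse")
    # Taking only wins once k exceeds u_envelope / u_note = 100_000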
Isn't this just a reformulation of Newcomb's problem?
Mechanically, "Omega + Alpha + the random generator" is equivalent to Newcomb's Omega.
[Edit: OK, it isn't :)]
Take the £10. My reasoning goes as follows: if I precommit to refuse it, either I get the £1,000,000 and refuse the £10, or I get £0 and Omega doesn't even show up; if I precommit to accept it, either I get the £1,000,000 and Omega doesn't even show up, or I get £10 from Omega showing up and me accepting (the respective expected utilities being £500,000 and £500,005). I do better by precommitting to take it, so to be reflectively consistent (and win), I must now take it.
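A quick check of the two figures quoted above (fair coin, you keep the envelope's contents either way, and Omega only shows up when its statement can hold):

    # Verifying the £500,000 vs £500,005 comparison from the comment above.
    p = 0.5                                          # Alpha's fair coin
    ev_refuse = p * 1_000_000 + (1 - p) * 0          # filled & refuse, or empty & Omega never shows
    ev_accept = p * 1_000_000 + (1 - p) * (0 + 10)   # filled & Omega never shows, or empty & take the £10
    print(ev_refuse, ev_accept)                      # 500000.0 500005.0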
I like this problem because it seems to operate on the same intuitions that lead to one-boxing and two-boxing for those who don't do any actual analysis, but the one-boxing intuition leads you astray (though not by much).
Personally, I'd take the £10 on reflection but would have refused the £10 based on my intuitions. I'm pretty sure Omega wouldn't be giving me £10, since if confronted with the situation I would be forced to think, "If I say 'no' now, there's lots of money in that envelope."
The answer depends on what Omega would have done if he had predicted that you would refuse the £10 iff there is nothing in Alpha's envelope. Two possibilities:
Omega1 would have brought you the envelope anyway, but said nothing else
Omega2 wouldn't have bothered to come, since there's no paradox involved.
When dealing with Omega1, take the £10, yay, free money! (there wasn't anything in the envelope anyway, otherwise Omega wouldn't have visited you, the taker-of-free-money - see Vladimir's explanation)
The post as stated doesn't tell us which Omega...
I'll disregard my earlier comment and assume the latter interpretation for now.
So here are the things that can (and can't) happen:
Hmm. Some commentators appear to be assuming that you don't get to keep the contents of the envelope which Alpha sent you. The problem is not 100% clear on this issue - and it makes a difference to the answer!
I one-box on Newcomb's. I two-envelope on this. This situation, however, is absurd. [ETA: Now that I think about it more, I'm now inclined to one-envelope and also more irritated by the hidden assumptions in this whole hypothetical.]
Omega's prediction is bizarre, because there's no apparent way that the contents of the envelope are entangled with my decision to accept the money - whether I am the kind of person who two-boxes or one-boxes, the contents of the envelope were decided by a coin toss. It seems the only way for Omega to make a reliable prediction...
How are we to read Omega's statement?
I predicted that <you will refuse this £10>, if and only if there is £1000 000 in Alpha's envelope.
Or:
I predicted that <you will refuse this £10 if and only if there is £1000 000 in Alpha's envelope>.
The former interpretation leaves open the possibility that, if there is £1000 000 in the envelope, Omega made no prediction one way or the other.
I would translate this scenario into the following world-program:
U(S) =
{
    envelopeIsFilled = coinflip()
    acceptNote = S()
    if (acceptNote == envelopeIsFilled)
        CONTRADICTION
    else
        return (envelopeIsFilled ? 1e6 : 0) + (acceptNote ? 10 : 0)
}
Based on this world-program, it is obvious that you should refuse the note.
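One way to make this world-program executable (a sketch; it assumes the contradictory branches are simply discarded and the payoff is averaged over the worlds that remain):

    import random

    def U(S, trials=100_000):
        """Average payoff of strategy S over the non-contradictory worlds (sketch)."""
        payoffs = []
        for _ in range(trials):
            envelope_is_filled = random.random() < 0.5   # Alpha's coin flip
            accept_note = S()
            if accept_note == envelope_is_filled:
                continue                                 # CONTRADICTION: Omega's statement fails
            payoffs.append((1_000_000 if envelope_is_filled else 0)
                           + (10 if accept_note else 0))
        return sum(payoffs) / len(payoffs)

    print(U(lambda: False))   # refuse the note -> 1000000.0
    print(U(lambda: True))    # accept the note -> 10.0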
This is just Newcomb's problem with a coin flipped on how the boxes are labeled, so of course I onebox.
This is a variant built on Gary Drescher's xor problem for timeless decision theory.
You get an envelope from your good friend Alpha, and are about to open it, when Omega appears in a puff of logic.
Being completely trustworthy as usual (don't you just hate that?), he explains that Alpha flipped a coin (or looked at the parity of a sufficiently high digit of pi), to decide whether to put £1000 000 in your envelope, or put nothing.
He, Omega, knows what Alpha decided, has also predicted your own actions, and you know these facts. He hands you a £10 note and says:
"(I predicted that you will refuse this £10) if and only if (there is £1000 000 in Alpha's envelope)."
What to do?
EDIT: to clarify, Alpha will send you the envelope anyway, and Omega may choose to appear or not appear as he and his logic deem fit. Nor is Omega stating a mathematical theorem: that one can deduce from the first premise the truth of the second. He is using XNOR, but using 'if and only if' seems a more understandable formulation. You get to keep the envelope whatever happens, in case that wasn't clear.
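For readers tallying the cases: read as an XNOR of "Omega predicted refusal" and "the envelope is filled", the statement rules out exactly two of the four combinations. A small illustrative enumeration:

    from itertools import product

    # Omega's statement: (predicted refusal) XNOR (envelope contains the £1000 000)
    for predicted_refuse, envelope_filled in product([True, False], repeat=2):
        consistent = (predicted_refuse == envelope_filled)
        print(predicted_refuse, envelope_filled,
              "consistent with the statement" if consistent else "ruled out")
    # Only (refusal predicted, filled) and (taking predicted, empty) survive, assuming
    # Omega speaks truly and predicts correctly.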