This post was inspired by taw urging us to mathematize Newcomb's problem and Eliezer telling me to post stuff I like instead of complaining.
To make Newcomb's problem more concrete we need a workable model of Omega. Let me count the ways:
1) Omega reads your decision from the future using a time loop. In this case the contents of the boxes are directly causally determined by your actions via the loop, and it's logical to one-box.
2) Omega simulates your decision algorithm. In this case the decision algorithm has indexical uncertainty about whether it's being run inside Omega or in the real world, and it's logical to one-box, thus making Omega give the "real you" the million.
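The payoff logic of case 2 can be made concrete. Here's a minimal sketch, assuming a perfectly accurate simulator Omega: the same decision procedure runs twice, once inside Omega to fill the boxes and once "for real", so whatever the algorithm decides determines both the prediction and the choice. (The function names and the standard $1,000,000 / $1,000 amounts are illustrative, not from the post.)

```python
def omega_payoff(strategy):
    """Payoff when Omega perfectly simulates the agent's decision algorithm.

    strategy: a zero-argument function returning "one-box" or "two-box".
    Box B holds $1,000,000 iff Omega's simulated run one-boxes;
    box A always holds $1,000.
    """
    prediction = strategy()   # Omega runs your algorithm to fill the boxes
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = strategy()       # the "real you" runs the same algorithm
    if choice == "one-box":
        return box_b
    return box_b + 1_000      # two-boxing adds box A's $1,000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(omega_payoff(one_boxer))  # 1000000
print(omega_payoff(two_boxer))  # 1000
```

Because prediction and choice are the same computation, the two-boxer can never grab a full box B: it only ever collects the consolation $1,000.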
3) Omega "scans your brain and predicts your decision" without simulating you: calculates the FFT of your brainwaves or whatever. In this case you can intend to build an identical scanner, use it on yourself to determine what Omega predicted, and then do what you please. Hilarity ensues.
(NB: if Omega prohibits agents from using mechanical aids for self-introspection, this is in effect a restriction on how rational you're allowed to be. If so, all bets are off - this wasn't the deal.)
(Another NB: this case is distinct from 2 because it requires Omega, and thus your own scanner too, to terminate without simulating everything. A simulator Omega would go into infinite recursion if treated like this.)
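The trick in case 3 is a diagonalization: any terminating, non-simulating predictor can be defeated by an agent who can run that same predictor on itself and do the opposite. A toy sketch, where the "scanner" is a stand-in for whatever fixed terminating function Omega computes (the label-reading scanner and the class names here are purely hypothetical, chosen so the example runs):

```python
def scanner(agent):
    """Stand-in for Omega's terminating brain-scan predictor.

    Any fixed, always-terminating function from agent descriptions to
    predictions will do; this toy version just reads a label off the agent.
    """
    return agent.declared_choice

class ContrarianAgent:
    def __init__(self, declared_choice):
        self.declared_choice = declared_choice

    def decide(self):
        # Build the same scanner Omega used, point it at yourself...
        prediction = scanner(self)
        # ...and then do the opposite of whatever was predicted.
        return "two-box" if prediction == "one-box" else "one-box"

a = ContrarianAgent("one-box")
print(scanner(a))   # Omega's prediction: one-box
print(a.decide())   # actual choice: two-box
```

Whatever the scanner outputs, the agent's actual choice differs from it, so no terminating scanner can stay accurate against agents allowed to consult it. This is exactly why the second NB matters: a simulating Omega escapes the argument only by recursing forever.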
4) Same as 3, but the universe only has room for one Omega, e.g. it's God Almighty. Then ipso facto it can never be modelled mathematically, and let's talk no more.
I guess this one is settled, folks. Any questions?
Maybe see it as a competition of wits between two agents whose goals may or may not be compatible. If they aren't of similar capability, the one with more computational resources (and better use of those resources) is the one that will get its way, against the other's will if necessary. If you were "bigger" than Omega, you'd be the one to win, no matter which weird rules Omega wished to use. But Omega is bigger... by definition.
In this case, the only way for the smaller agent to succeed is to embed its own goals into the other agent's. In practice agents aren't omniscient or omnipotent, so even an agent orders of magnitude more powerful than another may still fail against the latter. That becomes increasingly unlikely, but never totally impossible (as in winning the lottery).
If the difference in power is small enough, then both agents ought to cooperate and compromise, since in most cases that's how both can maximize their gains.
But in the end, once again, rationality is about reliably winning in as many cases as possible. In some cases, however unlikely and unnatural they may seem, winning just can't be achieved. That's what optimization processes, and their power, are about: they steer the universe into very unlikely states, including states where "rationality" is counterproductive.
Yes! Where is the money? A battle of wits has begun! It ends when a box is opened.
Of course, it's so simple. All I have to do is divine from what I know of Omega: is it the sort of agent who would put the money in one box, or both? Now, a clever agent would put little money into only one box, because it would know that only a great fool would not reach for both. I am not a great fool, so I can clearly not take only one box. But Omega must have known I was not a great fool, and would have counted on it, so I can cle...