This post was inspired by taw urging us to mathematize Newcomb's problem and Eliezer telling me to post stuff I like instead of complaining.
To make Newcomb's problem more concrete we need a workable model of Omega. Let me count the ways:
1) Omega reads your decision from the future using a time loop. In this case the contents of the boxes are directly causally determined by your actions via the loop, and it's logical to one-box.
2) Omega simulates your decision algorithm. In this case the decision algorithm has indexical uncertainty about whether it's being run inside Omega or in the real world, and it's logical to one-box, thus making Omega give the "real you" the million.
3) Omega "scans your brain and predicts your decision" without simulating you: calculates the FFT of your brainwaves or whatever. In this case you can intend to build an identical scanner, use it on yourself to determine what Omega predicted, and then do what you please. Hilarity ensues.
(NB: if Omega prohibits agents from using mechanical aids for self-introspection, this is in effect a restriction on how rational you're allowed to be. If so, all bets are off - this wasn't the deal.)
(Another NB: this case is distinct from 2 because it requires Omega, and thus your own scanner too, to terminate without simulating everything. A simulator Omega would go into infinite recursion if treated like this; see the sketch after this list.)
4) Same as 3, but the universe only has room for one Omega, e.g. God Almighty. Then, ipso facto, it cannot ever be modelled mathematically, and let's talk no more.
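Here is a minimal toy sketch of case 3's "hilarity" (Python; the function names and the trivial prediction rule are my own illustrative stand-ins, not part of the setup above). Any scanner that outputs a prediction without running the agent can be diagonalized by an agent that consults an identical scanner and does the opposite, whereas a simulating Omega given the same treatment recurses forever instead:

```python
# Illustrative sketch only: the names and the trivial "prediction rule" below
# are stand-ins, not anything specified in the post.
import sys

def scan(agent):
    """Type-3 Omega: computes a prediction about `agent` from some static
    analysis (here, a trivial fixed rule) without ever running it."""
    return "one-box"  # any fixed, non-simulating rule can be diagonalized

def contrarian(scanner):
    """An agent that builds an identical scanner, asks it what Omega
    predicted about this very agent, and then does the opposite."""
    prediction = scanner(contrarian)
    return "two-box" if prediction == "one-box" else "one-box"

# The scanner's prediction is provably wrong ("hilarity ensues"):
assert scan(contrarian) != contrarian(scan)

def simulate(agent):
    """Type-2 Omega: predicts by actually running the agent."""
    return agent(simulate)

# Handing the contrarian a *simulating* Omega instead yields
# contrarian -> simulate -> contrarian -> ... : infinite recursion.
sys.setrecursionlimit(100)
try:
    simulate(contrarian)
except RecursionError:
    print("a simulator Omega recurses forever when scanned by its own subject")
```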
I guess this one is settled, folks. Any questions?
That's a creative attempt to avoid really considering Newcomb's problem; but as I suggested earlier, the noisy real-world applications are real enough to make this a question worth confronting on its own terms.
Least Convenient Possible World: Omega is type (3), and does not offer the game at all if it calculates that its prediction would lead to a contradiction (as in your example above). At any rate, you're not capable of building or obtaining an accurate Omega' for your private use.
Aside: If Omega sees probability p that you one-box, it puts the million dollars in with probability p, and in either case writes p on a slip of paper in that box. Omega has been shown to be extremely well-calibrated, and its p only differs substantially from 0 or 1 in the case of the jokers who've tried using a random process to outwit it. (I always thought this would be an elegant solution to that problem; and note that the expected value of one-boxing with probability p should then be 1000000p + 1000(1-p).)
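To see where that expression comes from, here is a minimal sanity check (a sketch under one reading of the aside, which is my assumption: Omega's coin for filling box B is independent of your own randomization, one-boxing takes only box B, and two-boxing adds the visible $1000):

```python
# Toy check of the quoted expected value. The independence assumption is mine;
# the payoff amounts are the standard Newcomb figures used in this thread.
def expected_value(p):
    """Expected payoff of one-boxing with probability p, when a well-calibrated
    Omega has filled box B with the million with probability p."""
    box_b = 1_000_000 * p                  # expected contents of the opaque box
    one_box = box_b                        # take only box B
    two_box = box_b + 1_000                # take both boxes
    return p * one_box + (1 - p) * two_box

# Matches 1000000p + 1000(1-p) for any p:
for p in (0.0, 0.5, 0.9, 1.0):
    assert abs(expected_value(p) - (1_000_000 * p + 1_000 * (1 - p))) < 1e-6
```

Under that reading the expression is linear and increasing in p, so one-boxing outright (p = 1) remains the best policy.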
Yes, these are extra rules of the game. But if these restrictions make rationality impossible, then it doesn't seem that human beings can be rational by your standards (as we're already being modeled fairly often in social life); in which case, we'll take whatever Art is our best hope instead, and call that rationality.
So what do you do in this situation?
Eliezer has repeatedly stated in discussions of NP that Omega only cares about the outcome, not any particular "ritual of cognition". This is an essential part of the puzzle, because once you start punishing agents for their reasoning you might as well go all the way: reward only irrational agents and say "nyah nyah, puny rationalists". Your Omega bounds how rational I can be and outright forbids thinking certain thoughts. In other words, the original raison d'être was refining the notion of perfect rationality, whereas your formulation is about appr...