Well... for whatever it's worth, the case I assume is (3).
"Rice's Theorem" prohibits Omega from doing this with all possible computations, but not with humans. It's probably not even all that difficult: people seem strongly attached to their opinions about Newcomb's Problem, so their actual move might not be too difficult to predict. Any mind that has an understandable reason for the move it finally makes, is not all that difficult to simulate at a high-level; you are doing it every time you imagine what it would do!
Omega is assumed to be in a superior position, but doesn't really need to be. I mean, I have no trouble imagining Omega as described - Omega figures out the decision I come to, then acts accordingly. Until I actually come to a decision, I don't know what Omega has already done - but of course my decision is simple: I take only box B. End of scenario.
If you're trying to figure out what Omega will do first - well, you're just doing that so that you can take both boxes, right? You just want to figure out what Omega does "first", and then take both boxes anyway. So Omega knows that, regardless of how much you insist that you want to compute Omega...
Aren't these rather ducking the point? The situations all seem to be assuming that we ourselves have Omega-level information and resources, in which case why do we care about the money anyway? I'd say the relevant cases are:
3b) Omega uses a scanner, but we don't know how the scanner works (or we'd be Omega-level entities ourselves).
5) Omega is using one of the above methods, or one we haven't thought of, but we don't know which. For all we know he could be reading the answers we gave on this blog post, and is just really good at guessing who will stic...
This is a good post. It explains that "given any concrete implementation of Omega, the paradox utterly disappears."
(5) Omega uses ordinary conjuring or heretofore-unknown powers to put the million in the box after you make your decision. Solution: one-box for sure, no decision theory trickery needed. This would in practice be the conclusion we would come to if we encountered a being that appeared to behave like Omega, and therefore is also the answer in any scenario where we don't know the true implementation of Omega (i.e. any real scenario).
If the boxes are transparent, resolve to one-box iff the big box is empty.
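The trouble this policy makes for Omega can be checked by brute force. A toy sketch (the function names and framing are mine, not from the thread): call a world "consistent" when Omega's prediction, which fixes the box contents, matches the move the policy actually makes on seeing those contents.

```python
# Transparent-boxes policy: one-box iff the big box is empty.
def policy(big_box_filled: bool) -> str:
    return "two-box" if big_box_filled else "one-box"

def consistent(big_box_filled: bool) -> bool:
    action = policy(big_box_filled)
    # Omega fills the big box exactly when it predicts one-boxing.
    predicted_fill = (action == "one-box")
    return predicted_fill == big_box_filled

print([consistent(filled) for filled in (True, False)])  # -> [False, False]
```

Neither world is consistent: a filled box provokes two-boxing (so Omega should have left it empty), and an empty box provokes one-boxing (so Omega should have filled it). The policy leaves Omega no fixed point at all.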
All right, I found another nice illustration. Some philosophers today think that Newcomb's problem is a model of certain real-world situations. Here's a typical specimen of this idiocy, retyped verbatim from here:
Let me describe a typical medical Newcomb problem. It has long been recognized that in people susceptible to migraine, the onset of an attack tends to follow the consumption of certain foods, including chocolate and red wine. It has usually been assumed that these foods are causal factors, in some way triggering attacks. This belief has been the s...
I'm quite bothered by Eliezer's lack of input to this thread. To me this seems like the most valuable thread on Newcomb's problem we've had on OB/LW, and he's the biggest fan of the problem here, so I would have guessed he'd thought about it a lot and tried some models even if they failed. Yet he hasn't written anything here. Why is that?
That's a creative attempt to avoid really considering Newcomb's problem; but as I suggested earlier, the noisy real-world applications are real enough to make this a question worth confronting on its own terms.
Least Convenient Possible World: Omega is type (3), and does not offer the game at all if it calculates that its answers turn out to be contradictions (as in your example above). At any rate, you're not capable of building or obtaining an accurate Omega' for your private use.
Aside: If Omega sees probability p that you one-box, it puts the million dol...
Maybe see it as a competition of wits.
Yes! Where is the money? A battle of wits has begun! It ends when a box is opened.
Of course, it's so simple. All I have to do is divine from what I know of Omega: is it the sort of agent who would put the money in one box, or both? Now, a clever agent would put little money into only one box, because it would know that only a great fool would not reach for both. I am not a great fool, so I can clearly not take only one box. But Omega must have known I was not a great fool, and would have counted on it, so I can clearly not choose both boxes.
Truly, Omega must admit that I have a dizzying intellect.
On the other hand, perhaps I have confused this with something else.
I find Newcomb's problem interesting. Omega predicts accurately, which is impossible in my experience, so we are not discussing a problem any of us is likely to face. Still, I find discussing such counterfactuals worthwhile.
To make Newcomb's problem more concrete we need a workable model of Omega.
I do not think that is the case. Whether Omega predicts by time travel, mind-reading, or even removes money from the box by teleportation when it observes the subject taking two boxes is a separate discussion, considering laws of physics, SF, whatever. This mi...
In the standard Newcomb's, is the deal Omega is making explained to you before Omega makes its decision; and does the answer to my question matter?
NB: if Omega prohibits agents from using mechanical aids for self-introspection, this is in effect a restriction on how rational you're allowed to be. If so, all bets are off - this wasn't the deal.
Thank you. Hopefully this will be the last post about Newcomb's problem for a long time.
Even disregarding uncertainty about whether you're running inside Omega or in the real world, assuming Omega is perfect, #2 effectively reverses the order of decisions just like #1: you decide first (via simulation) and Omega decides second. So it collapses to a trivial one-box.
Omega simulates your decision algorithm. In this case the decision algorithm has indexical uncertainty on whether it's being run inside Omega or in the real world, and it's logical to one-box thus making Omega give the "real you" the million.
I never thought of that!
Can you formalize "hilarity ensues" a bit more precisely?
Omega knows that I have no patience for logical paradoxes, and will delegate my decision to a quantum coin-flipper exploiting the Conway-Kochen theorem. Hilarity ensues.
I would one-box in Newcomb's problem, but I'm not sure why Omega is more plausible than a being that rewards people that it predicts would be two-boxers. And yet it is more plausible to me.
When I associate one-boxing with cooperation, that makes it more attractive. The anti-Omega would be someone who was afraid cooperators would conspire against it, and so it rewards the opposite.
In the case of the pre-migraine state below, refraining from chocolate seems much less compelling.
4) Same as 3, but the universe only has room for one Omega, e.g. the God Almighty. Then ipso facto it cannot ever be modelled mathematically, and let's talk no more.
Why can't God Almighty be modelled mathematically?
Omega/God is running the universe on his computer. He can pause any time he wants (for example to run some calculations), and modify the "universe state" to communicate (or just put his boxes in).
That seems to be close enough to 4). Unlike with 3), you can't use the same process as Omega (pause the universe and run arbitrary calculations that could consider the state of every quark).
What does Newcomb's Problem have to do with reality as we know it anyway? I mean, imagine that I've solved it (whatever that means). Where in my everyday life can I apply it?
I have a very strong feeling that way 3 is not possible. It seems that any scanning/analysis procedure detailed enough to predict your actions constitutes simulating you.
Part of my motivation for digging deep on this issue is that, although I did not intend for my description of Omega and the detector reasoning about each other to be based on a simulation, I could see after you brought it up that it might be interpreted that way. I thought if I knew on a more detailed level what we mean by "simulation", I would be able to tell if I had implicitly assumed that Omega was using one. However, any strategy I come up with for making predictions seems like something I could consider a simulation, though it might lack detail, and through omitting important details, be inaccurate. Even just guessing could be considered a very undetailed, very inaccurate simulation.
I would like a definition of simulation that doesn't lead to this conclusion, but in case there isn't one, suppose the restriction against simulation really means that Omega does not use a perfect simulation, and you have a chance to resolve the indexical uncertainty.
I can imagine situations in which an incomplete, though still highly accurate, simulation provides information to the simulated subject to resolve the indexical uncertainty, but this information is difficult or even impossible to interpret.
For example, suppose Omega does use a perfect simulation, except that he flips a coin. In the real world, Omega shows you the true result of the coin toss, but he simulates your response as if he had shown you the opposite result. Now you still don't know whether you are in the simulation or in reality, but you are no longer guaranteed by determinism to make the same decision in each case. You could one-box if you see heads and two-box if you see tails.

If you did this, you would have a 50% probability that the true flip was heads, in which case you gain nothing, and a 50% probability that the true flip was tails, in which case you gain $1,001,000, for an expected gain of $500,500. This is not as good as one-boxing either way and gaining $1,000,000. If Omega instead flips a biased coin that shows tails 60% of the time, and tells you this, then the same strategy has an expected gain of $600,600, still not as good as one-boxing throughout. But if the coin were biased to show tails 1000 times out of 1001, the strategy would expect to equal one-boxing, and it would do better for a more extreme bias.
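The arithmetic above can be checked exactly. A quick sketch (the function name is mine; the payoffs are from the scenario): on heads you one-box while your simulation two-boxed, so box B is empty and you gain $0; on tails you two-box while your simulation one-boxed, so you gain $1,001,000.

```python
from fractions import Fraction  # exact rational arithmetic, no float rounding

def ev_coin_strategy(p_tails: Fraction) -> Fraction:
    # Heads branch (prob 1 - p_tails) pays $0, so only tails contributes.
    return p_tails * 1_001_000

print(ev_coin_strategy(Fraction(1, 2)))        # fair coin        -> 500500
print(ev_coin_strategy(Fraction(3, 5)))        # 60% tails        -> 600600
print(ev_coin_strategy(Fraction(1000, 1001)))  # break-even point -> 1000000
```

At tails probability 1000/1001 the conditional strategy exactly matches the guaranteed $1,000,000 from one-boxing either way, which is the break-even claimed above.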
So, if you suppose that Omega uses an imperfect simulation (without the coin), you can gather evidence about whether you are in reality or the simulation. You would need a probability greater than 1000/1001 that you are in reality before two-boxing becomes the better strategy. I would be impressed with a strategy that could accomplish that.
As for terminating, if Omega detects a paradox, Omega puts money in box 1 with 50% probability. It is not a winning strategy to force this outcome.
It seems your probabilistic simulator Omega is amenable to rational analysis just like my case 2. In good implementations we can't cheat, in bad ones we can; it all sounds quite normal and reassuring, no trace of a paradox. Just what I aimed for.
As for terminating, we need to demystify what "detecting a paradox" means. Does Omega somehow compute the actual probabilities of me choosing one or two boxes? Then what part of the world is assumed to be "random" and what part is evaluated exactly? An answer to this question might clear things up.
This post was inspired by taw urging us to mathematize Newcomb's problem and Eliezer telling me to post stuff I like instead of complaining.
To make Newcomb's problem more concrete we need a workable model of Omega. Let me count the ways:
1) Omega reads your decision from the future using a time loop. In this case the contents of the boxes are directly causally determined by your actions via the loop, and it's logical to one-box.
2) Omega simulates your decision algorithm. In this case the decision algorithm has indexical uncertainty on whether it's being run inside Omega or in the real world, and it's logical to one-box thus making Omega give the "real you" the million.
3) Omega "scans your brain and predicts your decision" without simulating you: calculates the FFT of your brainwaves or whatever. In this case you can intend to build an identical scanner, use it on yourself to determine what Omega predicted, and then do what you please. Hilarity ensues.
(NB: if Omega prohibits agents from using mechanical aids for self-introspection, this is in effect a restriction on how rational you're allowed to be. If so, all bets are off - this wasn't the deal.)
(Another NB: this case is distinct from 2 because it requires Omega, and thus your own scanner too, to terminate without simulating everything. A simulator Omega would go into infinite recursion if treated like this.)
4) Same as 3, but the universe only has room for one Omega, e.g. the God Almighty. Then ipso facto it cannot ever be modelled mathematically, and let's talk no more.
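For case 3, the "hilarity" can be made concrete with a toy sketch (my own illustration, assuming, per the setup, that you can run the same scanner Omega uses): whatever definite prediction the scanner outputs, the agent reads it off and does the opposite, so no output the scanner can produce survives the agent's reaction to it.

```python
def contrarian_act(prediction: str) -> str:
    # The agent builds an identical scanner, reads off Omega's
    # prediction, and then does the opposite.
    return "two-box" if prediction == "one-box" else "one-box"

# Whichever value the scanner outputs, the actual move differs from it:
for prediction in ("one-box", "two-box"):
    action = contrarian_act(prediction)
    print(prediction, action, prediction == action)
# -> one-box two-box False
# -> two-box one-box False
```

This is the same diagonalization that drives the halting-problem proof, and it is why case 3 needs the no-self-scanning caveat above: the scanner has no consistent prediction to make about an agent that consults it.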
I guess this one is settled, folks. Any questions?