Newcomb's Problem is silly. It's only controversial because it's dressed up in wooey vagueness. In the end it's just a simple probability question and I'm surprised it's even taken seriously here. To see why, keep your eyes on the bolded text:
The problem is, such emphatic declarations of confidence in the right answer can just as easily be followed by one-boxing, two-boxing, or declaring the hypotheses self-contradictory. That is, in fact, what makes it a Problem, even if, to any individual, it is not a problem.
To ask that question is already to presuppose the one-boxing answer, and to miss the problem that the problem itself may be problematic. I don't take simple two-boxing any more seriously than Amanojack does, but the third possibility, of disputing that the problem is well-posed, is worth exploring. On LW, self-professed two-boxers are usually taking that alternative. (Elsewhere, I see two-boxing philosophers actually saying that two-boxing loses, but is still the rational thing to do.)
The problem is best disputed not by simply asserting, as some have, that no such Omega can exist, but by thinking in detail about what it would take for someone to predict the decisions of a decision-maker who knows you're trying to predict their decisions. What that sort of thinking looks like is this. That paper is about the Prisoner's Dilemma, but similar investigations could be made of Newcomb's Problem, Parfit's Hitchhiker, etc.
That is what fighting the hypothesis looks like, done right.
That is taking the third option and trying to point out exactly why the problem should not be well posed. I can write a program that works as Newcomb's problem describes if I go for the "imperfect predictor" version, where the being is merely right "most of the time". One way to do it: let the player run a number of practice (or calibration) games, then, at a time chosen by the predictor, make one game "real". The calibration plays would stand in for the supernatural being's minute observation of the player's behavior, which indeed cannot easily be done otherwise.
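A minimal sketch of such a program, assuming the predictor simply extrapolates the player's most frequent choice during calibration (the payoffs match the standard statement; the function and strategy names are illustrative):

```python
def play_newcomb(strategy, n_calibration=100):
    """Run calibration rounds, then one 'real' round of Newcomb's problem.

    `strategy` is a function () -> "one" or "two".  The predictor's guess
    for the real round is the player's most frequent calibration choice,
    standing in for Omega's observation of the player's behavior.
    """
    history = [strategy() for _ in range(n_calibration)]
    prediction = max(set(history), key=history.count)
    # Box B contains $1,000,000 only if the predictor expects one-boxing.
    box_b = 1_000_000 if prediction == "one" else 0
    choice = strategy()  # the "real" game
    return box_b if choice == "one" else box_b + 1_000

one_boxer = lambda: "one"
two_boxer = lambda: "two"
```

Against this predictor, a consistent one-boxer walks away with $1,000,000 and a consistent two-boxer with $1,000; a player who tries to "switch" at the real round only defeats themselves if the predictor's calibration sample is large enough.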
I knew of the Robust Cooperation paper, and it's really very interesting, but getting the source code of the other player is also a huge change to the initial problem. At least it excludes perfect oracles from the problem; it is also clear you may be confronted with the halting problem (this is why current Scheme tournaments based on this idea had to make a provision in the rules against non-halting programs). Showing that we can say something useful about another problem does not imply the initial one had anything wrong with it.
On the other hand, it is obvious that the Dominance Argument is broken in Newcomb's problem (and also in the Prisoner's Dilemma), as the proof is only correct when the variables are uncorrelated (non-correlation should not be confused with causal independence; causal independence is not enough for the Dominance Argument to hold). In Newcomb's problem, the perfect correlation is part of the problem statement. How anyone could then apply the Dominance Argument is beyond me; probably because it mimics usual deductive logic.
I'm not saying that Newcomb's problem describes any physically possible event, nor even that it is a good problem, or that the consequences it leads to are agreeable (at first sight it implies a lack of free will). I am just saying that, mathematically, using (very) simple probabilistic tools, you can solve it without changing anything, and that the usual alternative solution is based on a mathematical error.
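Those "simple probabilistic tools" amount to an expected-value calculation that respects the stated correlation. A sketch, assuming a predictor with accuracy p (the 0.9 below is an illustrative value, not from the problem statement):

```python
def expected_payoff(choice, p=0.9):
    """Expected payoff when the predictor is right with probability p."""
    if choice == "one":
        # Predictor right with prob p -> box B holds $1,000,000.
        return p * 1_000_000
    else:
        # Predictor right with prob p -> box B is empty;
        # wrong with prob (1 - p) -> box B is full.  Box A always adds $1,000.
        return (1 - p) * 1_000_000 + 1_000
```

With p = 0.9 this gives $900,000 for one-boxing versus $101,000 for two-boxing; one-boxing has the higher expectation for any p above about 0.5005, and the Dominance Argument's case-by-case comparison is invalid precisely because the cases are not independent of the choice.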