Jonathan_Graehl comments on The Presumptuous Philosopher's Presumptuous Friend - Less Wrong

3 Post author: PlaidX 05 October 2009 05:26AM


Comment author: taw 06 October 2009 10:42:38AM 0 points [-]

By a trivial argument (of the kind used in algorithmic complexity analysis and cryptography): since you can just toss a coin, or do the mental equivalent of one, any guaranteed prediction accuracy nontrivially >.5, even by a ridiculously small margin, is impossible to achieve. Accuracy against a random human is entirely irrelevant; what Omega must achieve is accuracy nontrivially >.5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.
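The coin-toss point can be checked with a quick simulation (the setup and names here are my own illustration, not anything from the thread): against an agent who decides by a fair coin flip, any predictor, whatever rule it uses, is right only half the time in expectation.

```python
import random

def simulate(n_rounds=100_000, seed=0):
    """Against an agent who chooses by a fair coin flip, a predictor is
    right exactly half the time in expectation.  The predictor's rule is
    irrelevant (here it always predicts 'one-box'); no rule can guarantee
    accuracy above 1/2 against this maximally uncooperative agent."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rounds):
        prediction = "one-box"  # any fixed or adaptive rule fares the same
        choice = rng.choice(["one-box", "two-box"])  # the coin-flip agent
        hits += (prediction == choice)
    return hits / n_rounds

accuracy = simulate()
```

Over 100,000 rounds the empirical accuracy lands very close to 0.5, which is the whole point: the guarantee taw asks of Omega cannot hold against a randomizing chooser.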

If we force determinism (which is already cheating), disable free will (in the sense of being able to freely choose our answer only at the moment we must give it), and let Omega see our brain, it effectively means that we have to decide before Omega does, and have to tell Omega what we decided. This reverses the causality and collapses the problem into: "Choose 1 or 2 boxes. Based on your decision, Omega chooses what to put in them."

From the linked Wikipedia article:

More recent work has reformulated the problem as a noncooperative game in which players set the conditional distributions in a Bayes net. It is straightforward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions for the underlying Bayes net. Depending on which Bayes net one assumes, one can derive either strategy as optimal. In this there is no paradox, only unclear language that hides the fact that one is making two inconsistent assumptions.

Some argue that Newcomb's Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.

That's basically it. The problem is ill-defined, and any serious formalization collapses it into one of two trivial problems: "you choose first, so one-box" or "Omega chooses first, so two-box".

Comment author: Jonathan_Graehl 06 October 2009 05:59:48PM 0 points [-]

Thanks for the explanation. I think that if the right decision is always to two-box (which it is if Omega is wrong 1/2 − epsilon of the time), then all Omega has to do is flip a biased coin and believe the more likely alternative, namely that I two-box. But I guess you disagree.
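The biased-coin idea can be made concrete (a sketch under my own naming): if an agent two-boxes with some fixed probability p, Omega predicting the agent's more likely choice is right with probability max(p, 1 − p), which exceeds 1/2 for any biased agent but collapses to exactly 1/2 against taw's fair-coin chooser.

```python
def omega_accuracy(p_two_box):
    """Accuracy of the strategy 'predict the agent's more likely choice',
    given an agent who two-boxes with fixed probability p_two_box.
    This equals max(p, 1 - p): above 1/2 whenever p != 1/2, and exactly
    1/2 against a fair-coin agent -- taw's maximally uncooperative case."""
    return max(p_two_box, 1 - p_two_box)
```

So Omega beats 1/2 against any particular biased agent, but the accuracy it can guarantee against all agents is still only 1/2, which is where the two comments meet.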

There's a real problem if you require Omega's decision to be deterministic; it's probably impossible even to postulate that he's right with some specific probability. Maybe that's what you were getting at.