wedrifid comments on The Presumptuous Philosopher's Presumptuous Friend - Less Wrong

Post author: PlaidX 05 October 2009 05:26AM


Comment author: taw 06 October 2009 10:42:38AM 0 points

By a trivial argument of the kind employed in algorithm complexity analysis and cryptography (you can just toss a coin, or do the mental equivalent of it), any guaranteed prediction accuracy nontrivially >0.5, even by a ridiculously small margin, is impossible to achieve. Accuracy against a random human is entirely irrelevant; what Omega must achieve is accuracy nontrivially >0.5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.
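A quick simulation sketch of the coin-flip argument (hypothetical; the predictor strategy here is arbitrary, since the bound holds against any of them):

    import random
    from collections import Counter

    random.seed(0)
    counts = Counter()           # predictor's running tally of the agent's past choices
    correct, trials = 0, 100_000

    for _ in range(trials):
        # Predictor strategy (arbitrary; the bound holds for any strategy):
        # guess the agent's most frequent choice so far.
        guess = counts.most_common(1)[0][0] if counts else 1
        # The maximally uncooperative agent: a fair coin flip.
        choice = random.choice([1, 2])
        correct += guess == choice
        counts[choice] += 1

    print(f"predictor accuracy vs. coin flipper: {correct / trials:.3f}")  # ~0.500

No matter how clever the predictor, against a fair coin its long-run accuracy converges to 0.5.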

If we force determinism (which is cheating already), disable free will (in the sense of the ability to freely choose our answer only at the point where we have to), and let Omega see our brain, it basically means that we have to decide before Omega does, and have to tell Omega what we decided. This reverses causality and collapses the problem into: "Choose 1 or 2 boxes. Based on your decision, Omega chooses what to put in them."

From the linked Wikipedia article:

More recent work has reformulated the problem as a noncooperative game in which players set the conditional distributions in a Bayes net. It is straightforward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions for the underlying Bayes net. Depending on which Bayes net one assumes, one can derive either strategy as optimal. In this there is no paradox, only unclear language that hides the fact that one is making two inconsistent assumptions.

Some argue that Newcomb's Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.

That's basically it. It's ill-defined, and any serious formalization collapses it into either "you choose first, so one box", or "Omega chooses first, so two box" trivial problems.
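As a sketch of the two collapsed problems (the standard $1,000,000/$1,000 stakes are assumed, and the accuracy figure is purely illustrative):

    # Expected payoffs under the two inconsistent Bayes nets, with the standard
    # stakes: $1,000,000 in the opaque box, $1,000 in the transparent one.
    M, K = 1_000_000, 1_000
    p = 0.99   # illustrative predictor accuracy; any p > 0.5005 gives the same ranking

    # Net 1: "you choose first" -- the contents depend on your (predicted) choice.
    ev_one_box = p * M                    # prediction correct: opaque box is full
    ev_two_box = (1 - p) * M + K          # prediction correct: opaque box is empty
    print("choose-first net:", ev_one_box, ">", ev_two_box)   # one-boxing wins

    # Net 2: "Omega chooses first" -- the contents are already fixed either way.
    for opaque in (M, 0):
        print("Omega-first net:", opaque + K, ">", opaque)     # two-boxing dominates

Under the first net one-boxing is optimal; under the second, two-boxing dominates whatever the boxes contain. Each answer follows trivially once you pick a net.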

Comment author: wedrifid 06 October 2009 08:06:58PM 0 points

"Omega chooses first, so two box" trivial problems.

Yes. Omega chooses first. That's Newcomb's. The other one isn't.

It seems that the fact that both my decision and Omega's decision are determined (quantum acknowledged) by the earlier state of the universe utterly bamboozles your decision theory. Since that is in fact how this universe works, your decision theory is broken. It is foolish to declare a problem 'ill-defined' simply because your decision theory can't handle it.

The current state of my brain influences both the decisions I will make in the future and the decisions other agents make based on what they can infer about me from their observations. This means that intelligent agents will be able to predict my decisions better than a coin flip. In the case of superintelligences, they can get a lot better than 0.5.
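A minimal sketch of the point, assuming the choice is a pure function of the agent's observable state (the brain_state encoding here is purely illustrative):

    # Toy model: if the agent's choice is a pure function of its (observable) brain
    # state, a predictor that can read that state and run the same computation
    # predicts the choice every time. No coin is left for the agent to flip.

    def agent(brain_state: int) -> int:
        """Deterministic policy: the choice is fixed by the state, nothing else."""
        return 1 if brain_state % 2 == 0 else 2

    def omega(brain_state: int) -> int:
        """Omega simulates the very computation the agent is about to perform."""
        return agent(brain_state)

    states = range(1_000)
    accuracy = sum(omega(s) == agent(s) for s in states) / len(states)
    print(accuracy)   # 1.0: determinism makes the decision predictable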

Just how much money does Omega need to put in the box before you are willing to discard 'Serious' and take the cash?