wedrifid comments on The Presumptuous Philosopher's Presumptuous Friend - Less Wrong

Post author: PlaidX 05 October 2009 05:26AM




Comment author: taw 06 October 2009 10:42:38AM 0 points [-]

By a trivial argument (of the kind employed in algorithm complexity analysis and cryptography), since you can just toss a coin or do the mental equivalent, any guaranteed probability nontrivially >.5, even by a ridiculously small margin, is impossible to achieve. The probability against a random human is entirely irrelevant; what Omega must achieve is a probability nontrivially >.5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.
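The coin-toss argument can be illustrated with a small simulation (a hypothetical sketch, not from the thread; the particular predictor rule and trial count are my assumptions): if the chooser decides by a fair coin, the guess is statistically independent of the choice, so no deterministic predictor's accuracy can be pushed nontrivially above .5.

```python
import random

random.seed(0)  # reproducible run

def predictor(prev):
    """A sample deterministic predictor: guess that the chooser repeats
    their previous choice. Any other deterministic rule fares the same,
    since the coin flips are independent of everything the predictor sees."""
    return prev if prev is not None else 1

trials = 100_000
prev = None
correct = 0
for _ in range(trials):
    guess = predictor(prev)
    choice = random.choice((1, 2))  # maximally uncooperative: decide by fair coin
    correct += (guess == choice)
    prev = choice

accuracy = correct / trials
print(accuracy)  # stays near 0.5, however clever the predictor
```

Swapping in any other deterministic predictor leaves the result unchanged, which is the point: the guarantee fails against the most uncooperative chooser.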

If we force determinism (which is cheating already), disable free will (in the sense of the ability to freely choose our answer only at the point where we have to), and let Omega see our brain, it basically means that we have to decide before Omega does, and have to tell Omega what we decided. That reverses causality, and collapses the problem into: "Choose 1 or 2 boxes. Based on your decision, Omega chooses what to put in them."

From the linked Wikipedia article:

More recent work has reformulated the problem as a noncooperative game in which players set the conditional distributions in a Bayes net. It is straightforward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions for the underlying Bayes net. Depending on which Bayes net one assumes, one can derive either strategy as optimal. In this there is no paradox, only unclear language that hides the fact that one is making two inconsistent assumptions.

Some argue that Newcomb's Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.

That's basically it. It's ill-defined, and any serious formalization collapses it into either "you choose first, so one box", or "Omega chooses first, so two box" trivial problems.

Comment author: wedrifid 06 October 2009 08:20:47PM 1 point [-]

The probability against a random human is entirely irrelevant; what Omega must achieve is a probability nontrivially >.5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.

The limit of how uncooperative you can be is determined by how much information can be stored in the quarks from which you are constituted. Omega can model these. Your only recourse to uncooperativeness is to balance your entire brain so that your choice depends on quantum uncertainty. Omega then treats you the same way he treats any other jackass who tries to randomize with a quantum coin.

Comment author: SilasBarta 06 October 2009 09:38:49PM -2 points [-]

Geez! When did flipping a (provably) fair coin when faced with a tough dilemma, start being the sole domain of jackasses?

Comment author: SilasBarta 06 October 2009 09:44:13PM -1 points [-]

Geez! When did questioning the evilness of flipping a fair coin when faced with a tough dilemma, start being a good reason to mod someone down? :-P

Comment author: wedrifid 06 October 2009 09:56:26PM *  0 points [-]

Don't know. I was planning to just make a jibe at your exclusivity logic (some jackasses do it, therefore all who do it...).

Make that two jibes. Perhaps the votes were actually a cringe response at the comma use. ;)

Comment author: SilasBarta 06 October 2009 10:00:29PM 0 points [-]

Well, you did kinda insinuate that flipping a coin makes you a jackass, which is kind of an extreme reaction to an unconventional approach to Newcomb's problem :-P

Comment author: wedrifid 06 October 2009 10:25:53PM 0 points [-]

;) I'd make for a rather harsh Omega. If I was dropping my demi-divine goodies around I'd make it quite clear that if I predicted a randomization I'd booby trap the big box with a custard pie jack-in-a-box trap.

If I were somewhat more patient I'd just apply the natural extension, making the big box's reward linearly dependent on the predicted probability. Then they could plot a graph of how much money they are wasting per unit of probability they assign to making the stupid choice.
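The linear extension can be made concrete (a hypothetical sketch; the $1,000,000 and $1,000 figures are the standard Newcomb payoffs, and the filling rule below is my reading of the proposal): if Omega puts p × $1,000,000 in the big box when it predicts you one-box with probability p, the expected take simplifies to a straight line, so the money wasted relative to pure one-boxing is linear in the probability assigned to two-boxing.

```python
# Standard Newcomb payoffs (assumed), with the proposed linear filling rule.
BIG = 1_000_000
SMALL = 1_000

def expected_take(p_one_box):
    """Expected winnings when Omega, accurately predicting that you one-box
    with probability p, puts p * BIG in the big box."""
    big_contents = p_one_box * BIG
    # With probability p you take only the big box; otherwise you take both.
    return p_one_box * big_contents + (1 - p_one_box) * (big_contents + SMALL)

# The expression simplifies to p * BIG + (1 - p) * SMALL, so the waste
# relative to pure one-boxing is (1 - p) * (BIG - SMALL): linear in the
# probability assigned to the "stupid choice" of two-boxing.
for p in (0.0, 0.5, 0.9, 1.0):
    print(p, expected_take(p))
```

Plotting `expected_take` against p gives exactly the graph described: a straight line from $1,000 at p = 0 up to $1,000,000 at p = 1.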

Comment author: SilasBarta 06 October 2009 10:40:41PM 0 points [-]

I'd make for a rather harsh Omega. If I was dropping my demi-divine goodies around I'd make it quite clear that if I predicted a randomization I'd booby trap the big box with a custard pie jack-in-a-box trap.

Wow, they sure are right about that "power corrupts" thing ;-)

Comment author: wedrifid 06 October 2009 11:38:40PM 0 points [-]

Power corrupts. Absolute power corrupts... comically?