JQuinton comments on Open Thread, October 13 - 19, 2013 - Less Wrong Discussion

Comment author: JQuinton, 15 October 2013 04:03:07PM

I've got a few questions about Newcomb's Paradox. I don't know whether this has already been discussed somewhere on LW or beyond (granted, I haven't looked as hard as I probably should have), but here goes:

If I were approached by Omega and he offered me this deal and then flew away, I would be skeptical of his ability to predict my actions. Did these other five people two-box and get only $1,000 because Omega accurately predicted their actions? Or is there some other explanation, like Omega not being a supersmart being at all and simply never putting the $1 million in the second box? If I had some evidence that people had actually one-boxed and gotten the $1 million, I would put more weight on the idea that he actually has $1 million to spare, and more weight on the possibility that Omega is a good/perfect predictor.

If I attempt some sort of Bayesian update on this information (the five previous people two-boxed and got $1,000), these two explanations seem to explain it equally well. The probability of those five people finding only $1,000, given that Omega is a perfect predictor, is the same as the probability given that Omega never puts the $1 million in the second box; the two hypotheses are observationally equivalent on this evidence, so the update leaves my prior where it was.
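To make that concrete, here is a minimal sketch of the update. The 50/50 prior is an arbitrary illustration on my part, and the hypothesis labels are mine, not part of the original problem:

```python
# Rough sketch of the Bayesian update. H1 = "Omega is a perfect predictor",
# H2 = "Omega never puts $1 million in the second box". The 50/50 prior is
# an arbitrary illustration, not part of the original problem.
prior_h1, prior_h2 = 0.5, 0.5

# Under either hypothesis, a two-boxer finds the second box empty with
# probability 1, so five such observations have likelihood 1 under both.
likelihood_h1 = 1.0 ** 5
likelihood_h2 = 1.0 ** 5

posterior_h1 = (likelihood_h1 * prior_h1) / (
    likelihood_h1 * prior_h1 + likelihood_h2 * prior_h2
)
print(posterior_h1)  # 0.5 -- a likelihood ratio of 1 leaves the prior unchanged
```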

Then again, if Omega actually knew my reasoning process, he might provide me with exactly the evidence that would make me choose to one-box rather than two-box.

It also seems to me that if my subjective confidence in Omega's ability to predict is over 51%, then it makes more sense to one-box than to two-box... if my math/intuition about this is correct. Let's say my confidence in Omega's powers of prediction is at 50%. If I two-box, there are two possible outcomes: either Omega predicted correctly and left the second box empty, so I get only $1,000, or he predicted incorrectly and filled it, so I get $1,001,000. Both outcomes have a 50% chance of happening given my subjective prior, so the expected value of two-boxing is 50% * $1,000 + 50% * $1,001,000 = $501,000.

If I one-box, there are also two possible outcomes: either Omega predicted correctly and I get $1,000,000, or he predicted incorrectly, the second box is empty, and I walk away with nothing, having left the guaranteed $1,000 behind. Again, both outcomes have a 50% chance given my subjective probability about Omega's powers of prediction, so the expected value of one-boxing is 50% * $1,000,000 + 50% * $0 = $500,000. That puts the break-even point just above 50%: one-boxing wins whenever my confidence exceeds $1,001,000 / $2,000,000 = 50.05%.

Does that seem correct, or is my math/utility off somewhere?
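To make the arithmetic easy to check, here is a minimal sketch of the comparison. The payoff amounts are the standard ones from the problem; the probability p of a correct prediction is the free parameter:

```python
# Minimal sketch of the expected-value comparison. p is my subjective
# probability that Omega predicts my choice correctly; payoffs are the
# standard $1,000 (first box) and $1,000,000 (second box, if filled).

def ev_two_box(p):
    # Correct prediction (prob p): second box empty, I get $1,000.
    # Incorrect prediction (prob 1 - p): second box full, I get $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

def ev_one_box(p):
    # Correct prediction (prob p): second box full, I get $1,000,000.
    # Incorrect prediction (prob 1 - p): second box empty, I get $0.
    return p * 1_000_000 + (1 - p) * 0

print(ev_two_box(0.5), ev_one_box(0.5))  # 501000.0 500000.0
# The two lines cross where p * 2,000,000 = 1,001,000, i.e. p = 0.5005,
# so one-boxing wins for any confidence above 50.05%.
```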

Lastly, has something like Newcomb's Paradox been attempted in real life? Say with five actors and one unsuspecting mark?