ata comments on You're in Newcomb's Box - Less Wrong

36 05 February 2011 08:46PM


Comment author: 06 February 2011 01:38:05AM 2 points

I think Newcomb's problem would be more interesting if the first box contained $0.5 million, the second box contained $1 million, and Omega was only right, say, 75% of the time... See how fast answers start changing. What if Omega thought you were a dirty two-boxer and left the second box empty? Then you would be screwed if you one-boxed! Try telling your wife that you made the correct 'timeless decision theoretical' choice when you come home with nothing.

You can't change the form of the problem like that and expect the same answer to apply! If, when you two-box, Omega has a 25% chance of misidentifying you as a one-boxer, and vice versa, then you can use that in a normal expected utility calculation.

If you one-box, you have a 75% chance of getting $1 million, 25% nothing; if you two-box, 75% $0.5 million, 25% $1.5 million. With linear utility over money, one-boxing and two-boxing are equivalent (expected value: $750,000), and given even a slightly risk-averse dollars->utils mapping, two-boxing is the better deal. (I don't think TDT disagrees with that reasoning...)
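The arithmetic above is easy to check directly. Here is a small sketch of the calculation, using the 75% accuracy and payoffs from the comment; the square-root utility function is just one illustrative choice of a risk-averse dollars->utils mapping, not anything from the original thread:

```python
import math

# Modified Newcomb variant: the first box always holds $0.5M; the second
# holds $1M only if Omega predicted one-boxing; Omega is right 75% of the time.
P_CORRECT = 0.75

# One-boxing: 75% of the time Omega predicted it, so the second box has $1M.
ev_one_box = P_CORRECT * 1_000_000 + (1 - P_CORRECT) * 0

# Two-boxing: 75% of the time Omega predicted it and left the second box
# empty (you get only $0.5M); 25% of the time you get both boxes ($1.5M).
ev_two_box = P_CORRECT * 500_000 + (1 - P_CORRECT) * 1_500_000

print(ev_one_box, ev_two_box)  # both 750000.0 -- equal under linear utility

# With a concave (risk-averse) utility such as sqrt, two-boxing pulls ahead,
# because its worst case is $0.5M rather than $0.
u_one_box = P_CORRECT * math.sqrt(1_000_000) + (1 - P_CORRECT) * math.sqrt(0)
u_two_box = P_CORRECT * math.sqrt(500_000) + (1 - P_CORRECT) * math.sqrt(1_500_000)

print(u_one_box < u_two_box)  # True
```

Under sqrt utility the one-box value is 750 utils versus roughly 836 for two-boxing, matching the claim that even slight risk aversion favors two-boxing here.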

Comment author: 06 February 2011 03:42:19AM 1 point

That's kind of my point-- it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend to involve not 'perfect' predictors but other flawed agents. The decision to cooperate or not cooperate thus depends on the calculated utility of doing so.

Comment author: 07 February 2011 01:02:03AM 0 points

Right, I was mainly responding to the implication that TDT would be to blame for that wrong answer.