ata comments on You're in Newcomb's Box - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You can't change the form of the problem like that and expect the same answer to apply! If, when you two-box, Omega has a 25% chance of misidentifying you as a one-boxer, and vice versa, then you can use that in a normal expected utility calculation.
If you one-box, you have a 75% chance of getting $1 million, 25% nothing; if you two-box, 75% $.5 million, 25% $1.5 million. With linear utility over money, one-boxing and two-boxing are equivalent (expected value: $750,000), and given even a slightly risk-averse dollars->utils mapping, two-boxing is the better deal. (I don't think TDT disagrees with that reasoning...)
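To make that calculation concrete, here is a small sketch of the expected-value comparison, assuming the payoffs implied in the comment (Omega misclassifies with probability 0.25, the predicted-one-boxer box holds $1M, and the always-available box holds $500k) and using sqrt as an illustrative risk-averse utility function:

```python
import math

p_correct = 0.75  # Omega classifies you correctly 75% of the time

# One-box: you get $1M iff Omega (correctly) predicted one-boxing.
ev_one_box = p_correct * 1_000_000 + (1 - p_correct) * 0

# Two-box: $500k guaranteed, plus $1M iff Omega (wrongly) predicted one-boxing.
ev_two_box = p_correct * 500_000 + (1 - p_correct) * 1_500_000

# With linear utility over money, the two strategies are equivalent.
assert ev_one_box == ev_two_box == 750_000

# With a concave (risk-averse) utility such as sqrt, two-boxing wins,
# since its outcomes ($0.5M or $1.5M) are less spread out than $0-or-$1M.
u = math.sqrt
eu_one_box = p_correct * u(1_000_000) + (1 - p_correct) * u(0)
eu_two_box = p_correct * u(500_000) + (1 - p_correct) * u(1_500_000)
assert eu_two_box > eu_one_box
```

The sqrt utility is just one example of a concave dollars->utils mapping; any strictly risk-averse mapping gives the same ranking here because the expected dollar values are tied.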
That's kind of my point-- it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend to involve not 'perfect' predictors but other flawed agents. The decision to cooperate or not is thus dependent on the calculated utility of doing so.
Right, I was mainly responding to the implication that TDT would be to blame for that wrong answer.