Unknowns comments on Two-boxing, smoking and chewing gum in Medical Newcomb problems - Less Wrong Discussion
Comments (93)
Your "Newcomb-like" problem isn't. In the original Newcomb problem there is no situation where both boxes contain a reward, yet the naive CDT makes you act as though there were. In your setup there is such a possibility, so 2-boxing is the strictly better strategy. Any decision theory better make you 2-box.
EDIT: Thanks to those who pointed out my brain fart. Of course both boxes contain a reward in the one-boxing case; it just doesn't help you any. I maintain that this is not a Newcomb-like problem, since here 2-boxing is a strictly better strategy. No one would one-box if they could help it.
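(For concreteness, here is a minimal sketch of the standard Newcomb payoff structure, assuming the usual $1,000,000 and $1,000 amounts; it shows both boxes full when one-boxing is predicted, and 2-boxing coming out ahead within each row, which is the "strictly better from the agent's position at the time" sense I mean.)

```python
# Standard Newcomb payoffs (assumed values: $1,000,000 opaque box, $1,000 transparent box).
# Keys are (what the predictor predicted, what you actually do).
PAYOFF = {
    ("predict_one_box", "one_box"): 1_000_000,          # opaque box is full
    ("predict_one_box", "two_box"): 1_000_000 + 1_000,  # both boxes contain a reward
    ("predict_two_box", "one_box"): 0,                   # opaque box is empty
    ("predict_two_box", "two_box"): 1_000,               # only the $1,000
}

for (prediction, action), reward in PAYOFF.items():
    print(f"{prediction:17s} + {action:7s} -> ${reward:,}")
```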
Re: the edit. Two-boxing is strictly better from a causal decision theorist's point of view, but that is equally true here and in Newcomb's problem.
But from a sensible point of view, rather than the causal theorist's, one-boxing is better, because you get the million, both here and in the original Newcomb problem, just as in the AI case I posted in another comment.
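To put a rough number on "you get the million", here is a sketch of the expected payoffs, assuming a predictor accuracy of 0.99 (that figure is my assumption for illustration, not from the post): one-boxers almost always walk away with the million, two-boxers almost always with only the $1,000, even though two-boxing dominates row by row.

```python
# Expected payoff for each policy, assuming the predictor is right
# with probability 0.99 (an assumed figure, chosen for illustration).
ACCURACY = 0.99
MILLION, THOUSAND = 1_000_000, 1_000

# If you one-box, the predictor almost certainly foresaw it, so the opaque box is almost certainly full.
ev_one_box = ACCURACY * MILLION + (1 - ACCURACY) * 0

# If you two-box, the predictor almost certainly foresaw that too, so you almost certainly get only $1,000.
ev_two_box = ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

print(f"E[one-box] = ${ev_one_box:,.0f}")   # ~$990,000
print(f"E[two-box] = ${ev_two_box:,.0f}")   # ~$11,000
```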