cousin_it comments on Newcomb's Problem: A problem for Causal Decision Theories - Less Wrong
Thanks for a great post, Adam; I'm looking forward to the rest of the series.
This might be missing the point, but I just can't get past it. How does a rational agent come to believe that the being they're facing is "an unquestionably honest, all-knowing agent with perfect powers of prediction"?
I have the suspicion that a lot of the bizarreness of this problem comes out of transporting our agent into an epistemologically unattainable state.
Is there a way to phrase a problem of this type in a way that does not require such a state?
Let's make things clearer by asking the meta-question: is the predictor's implementation, and the process by which we learn of it, relevant to the problem? Let's unpack "relevant": should the answer to Newcomb's Problem depend on these extraneous details about the predictor? And let's unpack "should": if decision theory A tells you to one-box in approximately Newcomb-like scenarios without requiring further information, and decision theory B says the problem is "underspecified" and the answer is "unstable" and you can't pin it down without learning more about the real-world situation... which decision theory do you like more?
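For concreteness, here is a minimal sketch of the expected-value arithmetic behind "one-boxing wins" for an imperfect predictor. The payoff constants ($1,000,000 in the opaque box, $1,000 in the transparent one) are the standard ones; the accuracy parameter p and the break-even figure are my own illustration, not something stated in the thread:

```python
# Expected payoff in Newcomb's Problem for a predictor of accuracy p.
# Standard payoffs: $1,000,000 in the opaque box (filled only if
# one-boxing was predicted), $1,000 always in the transparent box.

BIG, SMALL = 1_000_000, 1_000

def expected_payoff(one_box: bool, p: float) -> float:
    """p = probability the predictor correctly forecasts the choice."""
    if one_box:
        # The opaque box is filled iff the predictor foresaw one-boxing.
        return p * BIG
    # Two-boxers always get the transparent box; the opaque box is
    # filled only when the predictor wrongly expected one-boxing.
    return SMALL + (1 - p) * BIG

for p in (0.5, 0.5005, 0.6, 0.99, 1.0):
    print(f"p={p}: one-box {expected_payoff(True, p):>11,.0f}, "
          f"two-box {expected_payoff(False, p):>11,.0f}")
```

Note where the break-even sits: one-boxing pulls ahead in expectation once p exceeds 0.5005, so nothing in the comparison hinges on the epistemologically unattainable "perfect" predictor.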
Decision theory A is by far preferable to me.
Of course, that's assuming that by Newcomb-like scenarios you only include those where one-boxing is actually statistically correlated with greater wealth once all other factors are controlled for.
If Decision Theory A's definition of Newcomb-like included a scenario where the predictor was doing well enough to make one-boxing appear to be the winning move, but was actually basing her predictions on hair colour, then I would be more tempted by Decision Theory B (see the toy simulation below).
IOW: whichever one wins for me :p
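The hair-colour scenario above is exactly a confounding story, so here is a toy simulation of it (the 50% hair split and the 80%/20% one-boxing propensities are made up for illustration): the "predictor" fills the opaque box based on hair colour alone, hair colour happens to correlate with one-boxing, and the pooled statistics make one-boxing look like the winning move even though, within each hair-colour group, two-boxers always come out $1,000 ahead.

```python
import random

BIG, SMALL = 1_000_000, 1_000

def simulate(n=100_000, seed=0):
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        dark_hair = rng.random() < 0.5
        # Made-up confound: dark-haired agents happen to one-box more often.
        one_box = rng.random() < (0.8 if dark_hair else 0.2)
        # The "predictor" ignores the agent's choice entirely and fills
        # the opaque box for dark-haired agents only.
        payoff = (BIG if dark_hair else 0) + (0 if one_box else SMALL)
        rows.append((dark_hair, one_box, payoff))
    return rows

def mean(xs):
    return sum(xs) / len(xs)

rows = simulate()
for label, keep in [("all", lambda h: True),
                    ("dark-haired", lambda h: h),
                    ("fair-haired", lambda h: not h)]:
    subset = [(ob, p) for h, ob, p in rows if keep(h)]
    one = mean([p for ob, p in subset if ob])
    two = mean([p for ob, p in subset if not ob])
    print(f"{label}: one-box avg {one:,.0f}, two-box avg {two:,.0f}")
# Pooled ("all"), one-boxers look roughly $600k richer purely through
# the hair-colour confound; within each group, two-boxing wins by $1,000.
```

This is just the familiar Simpson's-paradox shape: Decision Theory A only earns its keep if its "statistically correlated with greater wealth" test screens off confounds like this one.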