AdamBell comments on Newcomb's Problem: A problem for Causal Decision Theories - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks for a great post Adam, I'm looking forward to the rest of the series.
This might be missing the point, but I just can't get past it. How does a rational agent come to believe that the being they're facing is "an unquestionably honest, all knowing agent with perfect powers of prediction"?
I have the suspicion that a lot of the bizarreness of this problem comes out of transporting our agent into an epistemologically unattainable state.
Is there a way to phrase a problem of this type in a way that does not require such a state?
Newcomb's Problem still holds in much more realistic situations. Say someone who knows you really, really well comes up to you and makes the same offer. Imagine you don't mind taking their money, and you reckon they know you well enough to be 80% likely to predict your choice correctly. One-boxing is still the right decision, because the expected gain from one-boxing is:
(.8 x 1 000 000) + (.2 x 0) = 800 000
and for two boxing:
(.8 x 1 000) + (.2 x 1 001 000) = 800 + 200 200 = 201 000
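As a quick sanity check, the two expected values above can be computed directly (a minimal sketch; the 80% accuracy figure and the standard Newcomb payoffs are taken from the scenario described):

```python
# Sanity check of the expected-value arithmetic above, assuming an
# 80%-accurate predictor and the usual Newcomb payoffs:
# $1,000,000 in the opaque box, $1,000 in the transparent box.

def expected_value(p_correct, payoff_if_correct, payoff_if_wrong):
    """Expected payoff when the predictor guesses your choice with probability p_correct."""
    return p_correct * payoff_if_correct + (1 - p_correct) * payoff_if_wrong

# One-boxing: $1,000,000 if the predictor foresaw it, $0 if not.
one_box = expected_value(0.8, 1_000_000, 0)

# Two-boxing: $1,000 if the predictor foresaw it, $1,001,000 if not.
two_box = expected_value(0.8, 1_000, 1_001_000)

print(one_box, two_box)  # roughly 800 000 vs 201 000
```

So as long as the predictor is sufficiently accurate, one-boxing dominates in expectation, even with no supernatural prediction involved.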
But Causal Decision Theory will undertake the same reasoning as before, because your decision still has no causal influence on whether the boxes are in state 1 or state 2. So Causal Decision Theory will still two-box.
So Newcomb's Problem still holds in more realistic situations.
Is that the sort of thing you were looking for or have I missed the point?