shokwave comments on Can anyone explain to me why CDT two-boxes? - Less Wrong Discussion
Perfect sense. Theorising that CDT would lose because it's playing a different game is uninteresting as a thought experiment; if I theorise that any decision theory is playing a different game, it will also lose. That is not a property of CDT but of the hypothetical.
Let's turn to the case of playing in reality, as it's the interesting one.
If you grant that Newcomb paradoxes might exist in reality, then there is a real problem: CDT can't distinguish between free-money boxes and Newcomb paradoxes, so when it encounters a Newcomb situation it underperforms.
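To make the underperformance claim concrete, here is a minimal sketch of the standard Newcomb payoff structure (the $1,000 / $1,000,000 amounts and the function names are the usual illustrative values, not anything specific to this thread):

```python
# Payoffs for (Omega's prediction, agent's actual choice).
PAYOFF = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

# CDT's dominance reasoning: holding the prediction fixed,
# two-boxing always pays exactly $1,000 more than one-boxing...
for prediction in ("one-box", "two-box"):
    assert PAYOFF[(prediction, "two-box")] == PAYOFF[(prediction, "one-box")] + 1_000

# ...but against an accurate predictor, the prediction tracks the choice,
# so only the diagonal entries are reachable, and the two-boxer underperforms.
accurate = {choice: PAYOFF[(choice, choice)] for choice in ("one-box", "two-box")}
print(accurate)  # {'one-box': 1000000, 'two-box': 1000}
```

The dominance argument is valid row by row, yet the reachable outcomes favour one-boxing; that tension is the whole dispute.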
If you claim Newcomb cannot exist in reality, then this is not a problem with CDT. I (and hopefully others, though I shan't speak for them) would accept that this is not a problem with CDT if it is shown that Newcomb's is not possible in real life - but we are arguing against you here because we think Newcomb is possible. (Okay, I did speak for them).
I disagree on two points: one, I think a simulator is possible (that is, Omega's impossibility comes from other powers we've given it; we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction), and two, I don't think the priors-and-payoffs approach to an empirical predictor is correct (for game-theoretic reasons which I can explicate if you'd like, but if it's not the point of contention it would only distract).
No, CDT can in fact distinguish very well. It always concludes that the money is there, and it is always right, because it never encounters Newcomb.
To clarify: You are talking about actual Newcomb with an omniscient being, yes? Because in that case, I think several posters have already stated they deem this impossible, and Nozick agrees.
If you're talking about empirical Newcomb, that certainly is possible, but it is impossible to do better than CDT without choosing differently in other situations, because if you've acted like CDT in the past, Omega is going to assume you are CDT, even if you're not.
I agree on the "we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction" part, but this will change what the "correct" answer is. For example, you could substitute Omega with a coin toss and repeat the game if Omega is wrong. This is still a one-time problem, because Omega is a coin and therefore has no memory, but CDT, which would two-box in empirical Newcomb, one-boxes in this case and takes the $1,000,000.
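The coin-toss variant can be sketched directly. This is my own construction of the repeat rule described above (a fair coin stands in for Omega, and a round where the coin is wrong is voided and replayed); the dollar amounts are the usual illustrative Newcomb payoffs:

```python
import random

def coin_newcomb(choice, rng):
    """A fair coin 'predicts' the agent's choice; if it is wrong,
    the round is voided and the game repeats until it is right."""
    while True:
        prediction = rng.choice(["one-box", "two-box"])
        if prediction != choice:
            continue  # coin was wrong: replay the round
        # The round that stands always has prediction == choice,
        # so only the diagonal payoffs are ever realised.
        return 1_000_000 if choice == "one-box" else 1_000

rng = random.Random(0)
print(coin_newcomb("one-box", rng))  # 1000000
print(coin_newcomb("two-box", rng))  # 1000
```

The repeat rule causally guarantees that the round that counts matches the choice, so even a straightforwardly causal calculation favours one-boxing here, which is the point of the substitution.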
I don't think this is the point of contention, but after we've settled that, I would be interested in hearing your line of thought on this.
How about the version where agents are computer programs, and Omega runs a simulation of the agent facing the choice, observes its behavior, and fills the boxes accordingly?
I see no violation of causality in that version.
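A minimal sketch of that simulation version, assuming agents are deterministic programs (the function names and payoffs here are my own illustration):

```python
def omega_plays(agent):
    """Omega simulates the agent, fills the boxes, then the agent really chooses."""
    prediction = agent()  # Omega runs a copy of the agent's program first...
    opaque = 1_000_000 if prediction == "one-box" else 0  # ...and fills the opaque box
    choice = agent()      # the real run of the same program happens afterwards
    return opaque if choice == "one-box" else opaque + 1_000

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

print(omega_plays(one_boxer))  # 1000000
print(omega_plays(two_boxer))  # 1000
```

Every step runs in ordinary temporal order: the simulation finishes before the boxes are filled, and the boxes are filled before the real choice, so no backwards causation is needed for the prediction to be perfect.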